Programmatically determine the number of strokes in a Chinese character

Published 2020-06-03 00:56

Question:

Does Unicode store stroke count information about Chinese, Japanese, or other stroke-based characters?

Answer 1:

A little googling turned up Unihan.zip, a file published by the Unicode Consortium that contains several text files, including Unihan_RadicalStrokeCounts.txt, which may be what you want. There is also an online Unihan Database Lookup based on this data.
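As a rough sketch of how the Unihan data can be consumed: its files are tab-separated lines of the form `U+XXXX<TAB>field<TAB>value`, and the `kTotalStrokes` field carries a total stroke count. The inline sample below stands in for a real Unihan file, whose exact name and field placement vary by Unicode version, so treat the details as assumptions to verify against the release you download.

```python
# Sketch: parse kTotalStrokes entries from Unihan-style data.
# SAMPLE stands in for a real Unihan file (tab-separated, "#" comments).
SAMPLE = """\
U+65E5\tkTotalStrokes\t4
U+6C38\tkTotalStrokes\t5
U+6C38\tkRSUnicode\t85.1
"""

def parse_total_strokes(text):
    """Map each character to its kTotalStrokes value."""
    strokes = {}
    for line in text.splitlines():
        if line.startswith("#") or not line.strip():
            continue  # skip comments and blank lines
        codepoint, field, value = line.split("\t")
        if field == "kTotalStrokes":
            char = chr(int(codepoint[2:], 16))  # "U+65E5" -> "日"
            # Some entries list per-source counts like "4 5"; take the first.
            strokes[char] = int(value.split()[0])
    return strokes

print(parse_total_strokes(SAMPLE))  # {'日': 4, '永': 5}
```

The same loop works unchanged on the full file if you read it with a UTF-8 encoding and feed its contents to `parse_total_strokes`.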



Answer 2:

In Python there is a library for this:

>>> from cjklib.characterlookup import CharacterLookup
>>> cjk = CharacterLookup('C')
>>> cjk.getStrokeCount(u'日')
4

Disclaimer: I wrote it.



Answer 3:

You mean, is it encoded somehow in the actual code point? No. There may well be a table somewhere on the net (or you could create one), but storing this sort of metadata is not part of Unicode's mandate.



Answer 4:

If you want to do character recognition, google HanziDict.

Also take a look at the Unihan data site:

http://www.unicode.org/charts/unihanrsindex.html

You can look up stroke counts and then get character info. You might be able to build your own lookup.
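A self-built lookup could be as simple as inverting a character-to-stroke-count mapping into a stroke-count-to-characters index. The mapping below is hardcoded sample data for illustration; in practice you would fill it from the Unihan data or a library.

```python
from collections import defaultdict

# Sample data only; in practice, load this from Unihan or a CJK library.
stroke_counts = {"日": 4, "月": 4, "永": 5, "漢": 14}

# Invert it: stroke count -> list of characters with that count.
by_strokes = defaultdict(list)
for char, count in stroke_counts.items():
    by_strokes[count].append(char)

print(sorted(by_strokes[4]))  # all sample characters written with 4 strokes
```

Once built, the index answers "which characters have N strokes?" in constant time, which is the direction of lookup the online chart above supports.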



Answer 5:

On iOS, UILocalizedIndexedCollation can be a complete solution:

https://developer.apple.com/library/ios/documentation/iPhone/Reference/UILocalizedIndexedCollation_Class/UILocalizedIndexedCollation.html