I always work with Arabic text files, and to avoid encoding problems I transliterate Arabic characters into English according to Buckwalter's scheme (http://www.qamus.org/transliteration.htm).
Here is my code to do so, but it's very SLOW, even with small files of around 400 KB. Any ideas for making it faster?
Thanks
```python
def transliterate(file):
    data = open(file).read()
    buckArab = {"'": "ء", "|": "آ", ">": "أ", "&": "ؤ", "<": "إ", "}": "ئ",
                "A": "ا", "b": "ب", "p": "ة", "t": "ت", "v": "ث", "g": "ج",
                "H": "ح", "x": "خ", "d": "د", "*": "ذ", "r": "ر", "z": "ز",
                "s": "س", "$": "ش", "S": "ص", "D": "ض", "T": "ط", "Z": "ظ",
                "E": "ع", "G": "غ", "_": "ـ", "f": "ف", "q": "ق", "k": "ك",
                "l": "ل", "m": "م", "n": "ن", "h": "ه", "w": "و", "Y": "ى",
                "y": "ي", "F": "ً", "N": "ٌ", "K": "ٍ", "~": "ّ", "o": "ْ",
                "u": "ُ", "a": "َ", "i": "ِ"}
    for char in data:
        for k, v in buckArab.iteritems():
            data = data.replace(k, v)
    return data
```
You're redoing the same work for every character. When you do `data = data.replace(k, v)`, that replaces all occurrences of the given character in the entire file. But you do this over and over in the outer loop, when you only need to do it once per transliteration pair. Just remove your outermost `for char in data:` loop and it should speed your code up immensely.

If you need to optimize it more, you could look at the string `translate` method. I'm not sure how that compares performance-wise.
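Concretely, the fix looks like this (a sketch in Python 3, showing only a subset of the Buckwalter mapping from the question; the helper name `buckwalter_to_arabic` is mine):

```python
# Subset of the Buckwalter mapping for illustration; extend with the
# full table from the question.
buckArab = {"'": "ء", "b": "ب", "A": "ا", "t": "ت", "n": "ن"}

def buckwalter_to_arabic(text):
    # One replace() pass per mapping pair -- no per-character outer loop.
    for k, v in buckArab.items():
        text = text.replace(k, v)
    return text

def transliterate(path):
    with open(path, encoding="utf-8") as f:
        return buckwalter_to_arabic(f.read())
```

This is O(pairs × file length) instead of O(characters × pairs × file length), which is why removing the outer loop matters so much.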
Whenever I use `str.translate` on Unicode objects, it returns the exact same object. Perhaps this is due to the change in behavior alluded to by Martijn Pieters. If anyone else out there is struggling to transliterate Unicode such as Arabic to ASCII, I've found that mapping ordinals to Unicode literals works well.
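For example (a sketch with a few pairs only; on Unicode strings, `translate` expects a mapping from ordinals, i.e. code points, to replacement strings):

```python
# Keys must be ordinals (code points), not one-character strings,
# for translate() to work on Unicode text.
buck2uni = {"b": "ب", "A": "ا", "t": "ت"}
table = {ord(k): v for k, v in buck2uni.items()}

result = "bAt".translate(table)
print(result)  # -> بات
```

If you pass a dict keyed by one-character strings instead of ordinals, no key ever matches, which is exactly the "returns the same object" symptom described above.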
Extending @larapsodia's answer, here is the complete code with dictionary:
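The code block itself didn't survive in this excerpt; based on the description, it would presumably pair an ordinal-keyed dictionary with `str.translate`, along these lines (subset shown; the full Buckwalter table is in the question):

```python
# Ordinal-keyed dictionary: Buckwalter ASCII code point -> Arabic letter.
# Extend with the remaining pairs from the question's table.
buck2uni = {
    ord("b"): "ب",
    ord("A"): "ا",
    ord("n"): "ن",
}

def transliterate(text):
    """Transliterate Buckwalter text to Arabic script in a single pass."""
    return text.translate(buck2uni)

print(transliterate("bAn"))  # -> بان
```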
Incidentally, someone has already written a script that does this, so you might want to check it out before spending too much time on your own: buckwalter2unicode.py
It probably does more than what you need, but you don't have to use all of it: I copied just the two dictionaries and the transliterateString function (with a few tweaks, I think), and use that on my site.
Edit: The script above is what I had been using, but I've just discovered that it is much slower than using replace, especially for a large corpus. This is the code I finally ended up with, which seems to be both simpler and faster (it references a dictionary buck2uni):
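The final code block didn't make it into this excerpt; from the description it was presumably a plain replace loop over the buck2uni dictionary, something like this sketch (dictionary shown as a subset):

```python
# Buckwalter letter -> Arabic letter (subset; see the question for the full table)
buck2uni = {"b": "ب", "A": "ا", "t": "ت"}

def transliterate(text):
    # One replace() pass per dictionary entry.
    for k, v in buck2uni.items():
        text = text.replace(k, v)
    return text
```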
Whenever you have to do transliteration, `str.translate` is the method to use. As you can see, even for small strings `str.translate` is about twice as fast.