I need to get the Portuguese text content out of an Excel file and create an XML file that will be consumed by an application that doesn't support characters such as "ç", "á", "é", and others. I can't just remove those characters; I need to replace them with their unaccented equivalents ("c", "a", "e", for example).
I assume there's a better way to do this than checking each character individually and replacing it with its counterpart. Any suggestions?
You could try something like
// requires using System.Linq;, System.Text; and System.Globalization;
var decomposed = "áéö".Normalize(NormalizationForm.FormD);
var filtered = decomposed.Where(c => char.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark);
var newString = new string(filtered.ToArray());
This decomposes each accented character into a base character followed by its combining diacritics, filters the diacritics out, and builds a new string from what remains. Combining diacritics are in the NonSpacingMark Unicode category.
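Wrapped into a reusable helper, that approach might look like this (a minimal sketch; the TextUtil class and RemoveDiacritics method name are just illustrative):

using System;
using System.Globalization;
using System.Linq;
using System.Text;

static class TextUtil
{
    // Decompose to FormD, drop the combining marks, then recompose to FormC.
    public static string RemoveDiacritics(string input)
    {
        var decomposed = input.Normalize(NormalizationForm.FormD);
        var kept = decomposed.Where(c =>
            char.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark);
        return new string(kept.ToArray()).Normalize(NormalizationForm.FormC);
    }
}

// Example: Console.WriteLine(TextUtil.RemoveDiacritics("áéö")); prints "aeo"

This also handles "ç", which decomposes into "c" plus a combining cedilla (a non-spacing mark).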
Another option is to do the replacements explicitly with a lookup table (a usage example follows the snippet):
string text = ...; // the text to replace characters in
Dictionary<char, char> replacements = new Dictionary<char, char>();
// add your characters to the replacements dictionary,
// key: char to replace
// value: replacement char
replacements.Add('ç', 'c');
...
System.Text.StringBuilder replaced = new System.Text.StringBuilder();
for (int i = 0; i < text.Length; i++)
{
    char character = text[i];
    if (replacements.ContainsKey(character))
    {
        replaced.Append(replacements[character]);
    }
    else
    {
        replaced.Append(character);
    }
}
// 'replaced' is now your converted text
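A minimal sketch of how that could be used end to end; the mappings shown are only an illustrative subset of the Portuguese characters mentioned in the question:

using System.Collections.Generic;
using System.Text;

var replacements = new Dictionary<char, char>
{
    // illustrative subset; extend with every accented character you expect
    ['ç'] = 'c', ['á'] = 'a', ['à'] = 'a', ['ã'] = 'a', ['â'] = 'a',
    ['é'] = 'e', ['ê'] = 'e', ['í'] = 'i',
    ['ó'] = 'o', ['õ'] = 'o', ['ô'] = 'o', ['ú'] = 'u',
};

string text = "ação";
var replaced = new StringBuilder(text.Length);
foreach (char c in text)
{
    replaced.Append(replacements.TryGetValue(c, out char r) ? r : c);
}
// replaced.ToString() == "acao"

The upside of the lookup table is full control over every mapping; the downside is that any character you forget to list passes through unchanged.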
For future reference, this is exactly what I ended up with:
string temp = stringToConvert.Normalize(NormalizationForm.FormD);
IEnumerable<char> filtered = temp;
filtered = filtered.Where(c => char.GetUnicodeCategory(c) != System.Globalization.UnicodeCategory.NonSpacingMark);
string final = new string(filtered.ToArray());
Performance is better with this solution:
string test = "áéíóúç";
string result = Regex.Replace(test.Normalize(NormalizationForm.FormD), "[^A-Za-z ]", string.Empty);
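A self-contained version of that snippet for reference (a sketch; note that the character class keeps only ASCII letters and spaces, so digits and punctuation are stripped as well):

using System;
using System.Text;
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        string test = "áéíóúç";
        // Decompose to FormD, then remove everything that is not an ASCII letter or a space.
        string result = Regex.Replace(
            test.Normalize(NormalizationForm.FormD), "[^A-Za-z ]", string.Empty);
        Console.WriteLine(result); // prints "aeiouc"
    }
}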