This has been driving me batty for days, and I've finally got it down to a simple, reproducible issue.
I have an NUnit test project that targets .NET Core 2.1. It references a library (let's call it "Core") that targets .NET Standard 2.0.
In my test project:
[TestCase(true, false)]
[TestCase(false, false)]
[TestCase(false, true)]
public void ShouldStartWith(bool useInternal, bool passStartsWith)
{
    var result = useInternal ? StartsWithQ("¿Que?") : StringUtilities.StartsWithQ("¿Que?", passStartsWith ? "¿" : null);
    result.ShouldBeTrue();
}

public static bool StartsWithQ(string s)
{
    return _q.Any(q => s.StartsWith(q, StringComparison.InvariantCultureIgnoreCase));
}
and in the Core project, in the StringUtilities class:
public static bool StartsWithQ(string s, string startsWith = null)
{
    return startsWith == null
        ? _q.Any(q => s.StartsWith(q, StringComparison.InvariantCultureIgnoreCase))
        : s.StartsWith(startsWith, StringComparison.InvariantCultureIgnoreCase);
}
Both classes have defined a list of special characters:
private static readonly List<string> _q = new List<string>
{
    "¡",
    "¿"
};
In a Windows environment, all test cases pass. But when the same tests run in the Linux environment, the test case ShouldStartWith(False,False) fails!
That means that when everything runs inside the test project, the string comparison works correctly, and it still works even when the special characters are passed into the StringUtilities method. But when the input is compared against a string that was compiled into the Core project, the special characters are no longer treated as equivalent!
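The only diagnostic I can think of is to dump the raw UTF-16 code units of the prefixes on both sides and compare them ordinally, to see whether the "¿" literal compiled into Core is still the same character as the one in the test project. This is only a sketch (it needs using System.Linq at the top of the test file), and it assumes a temporary public accessor, here called DebugPrefixes, is added to StringUtilities just to expose Core's copy of the list:

[Test]
public void DumpPrefixCodeUnits()
{
    // Copy of the prefixes compiled into the test assembly.
    var local = new List<string> { "¡", "¿" };

    // DebugPrefixes is a hypothetical temporary accessor exposing Core's private _q list.
    var fromCore = StringUtilities.DebugPrefixes;

    for (var i = 0; i < local.Count; i++)
    {
        // Print each string as hex UTF-16 code units; "¿" should show up as 00BF on both sides.
        Console.WriteLine($"local: {string.Join(" ", local[i].Select(c => ((int)c).ToString("X4")))}");
        Console.WriteLine($"core:  {string.Join(" ", fromCore[i].Select(c => ((int)c).ToString("X4")))}");
        Console.WriteLine($"ordinal equal: {string.Equals(local[i], fromCore[i], StringComparison.Ordinal)}");
    }
}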
Does anyone know why this happens? Is it a .NET bug? How can I work around it?