I am currently writing a calculator application. I know that a double is not the best choice for accurate math. Most of the functions in the application are quite precise, but the ones that aren't produce very ugly results. My solution is to show users only 12 decimal places of precision. I chose 12 because the lowest precision in the application comes from my numerical derivative function.
The issue I am having is that if I multiply the value by a scalar, round, and then divide by the scalar, the precision will most likely be thrown off again. If I use DecimalFormat, I cannot find a way to show only 12 digits and have the E for scientific notation show up when it is needed, but not appear when it isn't.
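For reference, here is roughly what I have tried so far; the 1e12 scalar and the two format patterns are just my own attempts, not anything I expect to be the right answer:

```java
import java.text.DecimalFormat;

public class FormatAttempts {
    public static void main(String[] args) {
        double small = 1.23456789111213;
        double large = 1.23456789111213e23;

        // Attempt 1: scale, round, divide. The intermediate product and the
        // final quotient are binary doubles, so the decimal digits can shift,
        // and for large magnitudes the product overflows a long entirely.
        double rounded = Math.round(small * 1e12) / 1e12;
        System.out.println(rounded);

        // Attempt 2: a plain DecimalFormat caps the fraction at 12 digits,
        // but it never produces scientific notation for the large value.
        DecimalFormat plain = new DecimalFormat("0.############");
        System.out.println(plain.format(small)); // 1.234567891112
        System.out.println(plain.format(large)); // a long digit string, no E

        // Attempt 3: a scientific pattern always prints the exponent, so the
        // small value comes out with an unwanted E0 suffix.
        DecimalFormat sci = new DecimalFormat("0.############E0");
        System.out.println(sci.format(small)); // 1.234567891112E0
        System.out.println(sci.format(large)); // 1.234567891112E23
    }
}
```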
For example, I want
1.23456789111213 to be 1.234567891112
but never
1.234567891112E0
and I also want
1.23456789111213E23 to be 1.234567891112E23
So basically, I want to format a number to 12 decimal places, preserving scientific notation when it is needed, but not using scientific notation when it shouldn't be there.
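For what it's worth, Java's default Double.toString already switches between plain and scientific notation roughly the way I want; it just doesn't cap the fraction at 12 digits:

```java
public class DefaultToString {
    public static void main(String[] args) {
        // Double.toString uses plain notation for moderate magnitudes and
        // scientific notation for very large or very small ones, which is
        // the switching behavior I want; it just shows too many digits.
        System.out.println(1.23456789111213);    // 1.23456789111213
        System.out.println(1.23456789111213e23); // 1.23456789111213E23
    }
}
```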