Using a microcontroller (PIC18F4580), I need to collect data and send it to an SD card for later analysis. The data it collects will have values between 0 and 1023, or 0x0 and 0x3FF.
So what I need to do is convert 1023 into a base-10 string of literal ASCII values: 0x31, 0x30, 0x32, 0x33 ("1023").
My problem is that the only way I can think of to split the digits apart requires a lot of division.
char temp[4];
temp[0] = '0' + (1023 % 10);              /* least significant digit */
temp[1] = '0' + ((1023 % 100) / 10);
temp[2] = '0' + ((1023 % 1000) / 100);
temp[3] = '0' + ((1023 % 10000) / 1000);  /* most significant digit */
Using this method, finding the ASCII values of an n-digit decimal number requires 2n-1 divisions (counting each % and / as one). Is there a method that would be faster?
The end goal of this is to wind up with a .csv file on the SD card that can quickly be plugged into any laptop to see a graph of the data in Excel.
The obvious solution is not to convert the data to ASCII at all but to store it in binary format. That way all you need to worry about is the endianness of the data. If the system performing the later analysis is far more powerful than your embedded target, then it would make sense to let it deal with the conversion and byte order.
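For instance, a minimal sketch of binary logging, assuming a hypothetical write_to_sd() routine; each 10-bit sample is packed into two bytes, low byte first (little-endian):

/* Log one raw sample (0..1023) as two little-endian bytes.
   write_to_sd() is a hypothetical SD-card write routine. */
void log_sample(unsigned int sample)
{
    unsigned char bytes[2];
    bytes[0] = (unsigned char)(sample & 0xFF);        /* low byte first */
    bytes[1] = (unsigned char)((sample >> 8) & 0x03); /* top 2 of the 10 bits */
    write_to_sd(bytes, 2);
}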
On the other hand, it is possible that the execution time of the / and % is insignificant compared to the time taken to transfer the data to the SD card; so make sure that you are optimising the right thing.
Is there some reason that you're particularly concerned about this?
If your compiler and C library provide an itoa() function, use that, and then worry about writing this code (and associated tests and so forth to make sure you got it right!) if for some reason that turns out to be too slow or doesn't fit into RAM or something.

I've replaced my previous answer with a better one. This code creates a 4-character string in the proper order, most significant digit in output[0] to least significant in output[3], with a zero terminator in output[4]. I don't know anything about your PIC controller or C compiler, but this code doesn't require anything more than 16-bit integers, addition/subtraction, and shifting.
The key to this is the magical function DivideByTenReturnRemainder. Without using division explicitly it's still possible to divide by powers of 2 by shifting right; the problem is that 10 isn't a power of 2. I've sidestepped that problem by multiplying the value by 25.625 before dividing by 256, letting integer truncation round down to the proper value. Why 25.625? Because it's easily represented by powers of 2: 25.625 = 16 + 8 + 1 + 1/2 + 1/8. Again, multiplying by 1/2 is the same as shifting right one bit, and multiplying by 1/8 is shifting right by 3 bits. To get the remainder, multiply the result by 10 (8 + 2) and subtract it from the original value.
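A sketch matching that description (the exact signature is a guess, and the shift arithmetic stays within 16 bits only for values in the 0..1023 range):

/* Divide *value by 10 using only shifts, adds, and subtracts:
   quotient = (v * 25.625) / 256, where 25.625 = 16 + 8 + 1 + 1/2 + 1/8.
   Returns the remainder and leaves the quotient in *value. */
unsigned int DivideByTenReturnRemainder(unsigned int *value)
{
    unsigned int v = *value;
    unsigned int q = ((v << 4) + (v << 3) + v + (v >> 1) + (v >> 3)) >> 8;
    *value = q;
    return v - ((q << 3) + (q << 1));   /* remainder = v - q * 10 */
}

/* Build the 4-digit string described above: most significant digit in
   output[0], zero terminator in output[4]. */
void FormatDecimal(unsigned int value, char output[5])
{
    int i;
    output[4] = '\0';
    for (i = 3; i >= 0; i--)
        output[i] = (char)('0' + DivideByTenReturnRemainder(&value));
}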
If the values are correctly in range (0..1023), then your last conversion is unnecessarily wasteful on the divisions; the last line could be replaced with:

temp[3] = '0' + (1023 / 1000);    /* no need for % 10000 when the value is at most 1023 */

or even:

temp[3] = '0' + (1023 >= 1000);   /* the thousands digit can only be 0 or 1 */
Since division is repeated subtraction, but you have a very special case (not a general case) of division to deal with, I'd be tempted to compare the timings for the code sketched below with the division version. I note that you put the digits into the string in 'reverse order': the least significant digit goes in temp[0] and the most significant in temp[3]. Also, there is no chance of null-terminating the string given the storage. This code uses a table of 8 bytes of static data - considerably less than many of the other solutions.
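A sketch of that subtraction-based conversion, assuming the question's reverse digit order; the four 16-bit powers of ten are the 8 bytes of static data (the function name is illustrative):

static const unsigned short powers_of_10[4] = { 1000, 100, 10, 1 };

/* Convert value (0..9999) to four digits by repeated subtraction,
   least significant digit in temp[0], as in the question. */
void FormatBySubtraction(unsigned short value, char temp[4])
{
    int i;
    for (i = 0; i < 4; i++)
    {
        char digit = '0';
        while (value >= powers_of_10[i])
        {
            value -= powers_of_10[i];
            digit++;
        }
        temp[3 - i] = digit;
    }
}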
Performance testing - Intel x86_64 Core 2 Duo 3.06 GHz (MacOS X 10.6.4)
This platform is probably not representative of your microcontroller, but the test shows that on this platform, the subtraction is considerably slower than the division.
Compiling with GCC 4.5.1, and working in 32-bit, the average timings were (optimization '-O'):

0.13 seconds using division
0.65 seconds using subtraction

Compiling and working in 64-bit, the average timings were:

0.13 seconds using division
0.48 seconds using subtraction

Clearly, on this machine, using subtraction is not a winning proposition. You would have to measure on your machine to make a decision. And removing the modulo 10000 operation will only skew results in favour of the division (it knocks about 0.02 seconds off the time with division when replaced with the comparison; that's a 15% saving and worth having).
I agree with what Clifford said, that you shouldn't worry about optimizing it if you don't have to, and that you can push the log cleanup to your analysis platform, rather than worrying about formatting in an embedded application.
That being said, here's an article that might be useful to you. It uses a loop, shifts, additions and branches, with linear/constant complexity: http://www.johnloomis.org/ece314/notes/devices/binary_to_BCD/bin_to_bcd.html
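For reference, a sketch of the shift-and-add-3 ("double dabble") conversion that article describes, specialized to a 10-bit input (details in the article may differ, and the function name is illustrative):

/* Convert bin (0..1023) to four ASCII digits via shift-and-add-3.
   scratch accumulates four packed BCD digits, 4 bits each. */
void BinaryToBcdDigits(unsigned int bin, char out[4])
{
    unsigned int scratch = 0;
    int i, d;
    for (i = 9; i >= 0; i--)            /* one pass per input bit */
    {
        for (d = 0; d < 16; d += 4)     /* add 3 to any BCD digit >= 5 */
            if (((scratch >> d) & 0xFu) >= 5u)
                scratch += 3u << d;
        scratch = (scratch << 1) | ((bin >> i) & 1u);
    }
    out[0] = (char)('0' + ((scratch >> 12) & 0xF)); /* thousands */
    out[1] = (char)('0' + ((scratch >> 8) & 0xF));
    out[2] = (char)('0' + ((scratch >> 4) & 0xF));
    out[3] = (char)('0' + (scratch & 0xF));
}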
Also, I thought it would be fun to make some code that doesn't perform any divides, multiplies, or branches, but still gives the correct answer for inputs in [0, 1024). No promises that this is any faster than other options. This sort of code is just an option to explore.
I'd love to see if anyone can provide some tricks to make the code smaller, require less memory, or require fewer operations, while keeping the rest of the counts equal, or shrinking them :)
Stats:
Perf:
Using the perf comparisons and itoa routines in Jonathan Leffler's answer, here are the stats I got:
I increased the iteration count to 200000 to ensure I didn't have any problems with timing resolution, and had to add volatile to the function signatures so that the compiler didn't optimize out the loop. I used VS2010 Express with vanilla "release" settings, on a 3 GHz dual-core 64-bit Windows 7 machine (though it compiled to 32-bit).

The code:
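Something along these lines satisfies those constraints (a sketch, not the original listing; the function name is illustrative). Comparisons yield 0 or 1, and multiples of those results are built from shifts, so there are no divides, multiplies, or branches:

/* Convert v (0..1023) to four ASCII digits plus terminator. */
void ToDecimalBranchless(unsigned int v, char out[5])
{
    unsigned int d3 = (v >= 1000);                    /* thousands: 0 or 1 */
    v -= (d3 << 10) - (d3 << 4) - (d3 << 3);          /* v -= d3 * 1000 */

    unsigned int d2 = (v >= 100) + (v >= 200) + (v >= 300)
                    + (v >= 400) + (v >= 500) + (v >= 600)
                    + (v >= 700) + (v >= 800) + (v >= 900);
    v -= (d2 << 6) + (d2 << 5) + (d2 << 2);           /* v -= d2 * 100 */

    unsigned int d1 = (v >= 10) + (v >= 20) + (v >= 30)
                    + (v >= 40) + (v >= 50) + (v >= 60)
                    + (v >= 70) + (v >= 80) + (v >= 90);
    v -= (d1 << 3) + (d1 << 1);                       /* v -= d1 * 10 */

    out[0] = (char)('0' + d3);
    out[1] = (char)('0' + d2);
    out[2] = (char)('0' + d1);
    out[3] = (char)('0' + v);
    out[4] = '\0';
}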
Are you required to use an ASCII string of the decimal representation? It would be much easier to store it in hexadecimal format. No division required, only (relatively cheap) shift and mask operations. Excel should be able to read it if you prepend a '0x' to each number.
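For example, a sketch of the hexadecimal route using only shifts and masks; a 10-bit value needs at most three hex digits (the function name is illustrative):

/* Format value (0..0x3FF) as "0xNNN" with a terminator. */
void FormatHex(unsigned int value, char out[6])
{
    static const char digits[] = "0123456789ABCDEF";
    out[0] = '0';
    out[1] = 'x';
    out[2] = digits[(value >> 8) & 0xF];
    out[3] = digits[(value >> 4) & 0xF];
    out[4] = digits[value & 0xF];
    out[5] = '\0';
}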