I'm having trouble creating a Maxim CRC-16 implementation that matches a specific output. I've listed the resources I used to help write the program below:
- Maxim App Note 27
- Sanity-Free CRC-16 Computation
- Julia CRC Computation (By Andrew Cooke)
- CRC-16 Lookup Table (in C)
- Another CRC Lookup Table in C
- CRC Wiki Page
With the above references, I wrote a simple program that computes the CRC-16 using both a bit-by-bit approach and a look-up table approach. The bit-by-bit approach is shown below:
#include "stdafx.h"
#include <string>
#include <iostream>
#include <fstream>
#include <stdint.h>
using namespace std;
#define POLY 0x8005 // CRC-16-MAXIM (IBM) (or 0xA001)
unsigned int crc16(uint8_t data_p[])
{
    unsigned char i, j;
    unsigned int data;
    unsigned int crc = 0x0000; //0xFFFF;
    //for (j = 0; j < (sizeof(data_p)/sizeof(uint8_t)); j++)
    for (j = 0; j < 11; j++)
    {
        // Feed each byte in LSB first, one bit per iteration
        for (i = 0, data = (uint8_t)0xff & data_p[j];
             i < 8;
             i++, data >>= 1)
        {
            if ((crc & 0x0001) ^ (data & 0x0001))
            {
                crc = (crc >> 1) ^ POLY;
            }
            else crc >>= 1;
        }
    }
    crc = ~crc;                            // final inversion
    data = crc;
    crc = (crc << 8) | (data >> 8 & 0xff); // swap the two result bytes
    return (crc);
}
And below is the look-up table version of the CRC-16 computation
/*
 * CRC lookup table for bytes; the generating polynomial is 0x8005.
 * input: reflected (LSB first)
 * output: reflected as well
 */
const uint16_t crc_ibm_table[256] = {
0x0000, 0xc0c1, 0xc181, 0x0140, 0xc301, 0x03c0, 0x0280, 0xc241,
0xc601, 0x06c0, 0x0780, 0xc741, 0x0500, 0xc5c1, 0xc481, 0x0440,
0xcc01, 0x0cc0, 0x0d80, 0xcd41, 0x0f00, 0xcfc1, 0xce81, 0x0e40,
0x0a00, 0xcac1, 0xcb81, 0x0b40, 0xc901, 0x09c0, 0x0880, 0xc841,
0xd801, 0x18c0, 0x1980, 0xd941, 0x1b00, 0xdbc1, 0xda81, 0x1a40,
0x1e00, 0xdec1, 0xdf81, 0x1f40, 0xdd01, 0x1dc0, 0x1c80, 0xdc41,
0x1400, 0xd4c1, 0xd581, 0x1540, 0xd701, 0x17c0, 0x1680, 0xd641,
0xd201, 0x12c0, 0x1380, 0xd341, 0x1100, 0xd1c1, 0xd081, 0x1040,
0xf001, 0x30c0, 0x3180, 0xf141, 0x3300, 0xf3c1, 0xf281, 0x3240,
0x3600, 0xf6c1, 0xf781, 0x3740, 0xf501, 0x35c0, 0x3480, 0xf441,
0x3c00, 0xfcc1, 0xfd81, 0x3d40, 0xff01, 0x3fc0, 0x3e80, 0xfe41,
0xfa01, 0x3ac0, 0x3b80, 0xfb41, 0x3900, 0xf9c1, 0xf881, 0x3840,
0x2800, 0xe8c1, 0xe981, 0x2940, 0xeb01, 0x2bc0, 0x2a80, 0xea41,
0xee01, 0x2ec0, 0x2f80, 0xef41, 0x2d00, 0xedc1, 0xec81, 0x2c40,
0xe401, 0x24c0, 0x2580, 0xe541, 0x2700, 0xe7c1, 0xe681, 0x2640,
0x2200, 0xe2c1, 0xe381, 0x2340, 0xe101, 0x21c0, 0x2080, 0xe041,
0xa001, 0x60c0, 0x6180, 0xa141, 0x6300, 0xa3c1, 0xa281, 0x6240,
0x6600, 0xa6c1, 0xa781, 0x6740, 0xa501, 0x65c0, 0x6480, 0xa441,
0x6c00, 0xacc1, 0xad81, 0x6d40, 0xaf01, 0x6fc0, 0x6e80, 0xae41,
0xaa01, 0x6ac0, 0x6b80, 0xab41, 0x6900, 0xa9c1, 0xa881, 0x6840,
0x7800, 0xb8c1, 0xb981, 0x7940, 0xbb01, 0x7bc0, 0x7a80, 0xba41,
0xbe01, 0x7ec0, 0x7f80, 0xbf41, 0x7d00, 0xbdc1, 0xbc81, 0x7c40,
0xb401, 0x74c0, 0x7580, 0xb541, 0x7700, 0xb7c1, 0xb681, 0x7640,
0x7200, 0xb2c1, 0xb381, 0x7340, 0xb101, 0x71c0, 0x7080, 0xb041,
0x5000, 0x90c1, 0x9181, 0x5140, 0x9301, 0x53c0, 0x5280, 0x9241,
0x9601, 0x56c0, 0x5780, 0x9741, 0x5500, 0x95c1, 0x9481, 0x5440,
0x9c01, 0x5cc0, 0x5d80, 0x9d41, 0x5f00, 0x9fc1, 0x9e81, 0x5e40,
0x5a00, 0x9ac1, 0x9b81, 0x5b40, 0x9901, 0x59c0, 0x5880, 0x9841,
0x8801, 0x48c0, 0x4980, 0x8941, 0x4b00, 0x8bc1, 0x8a81, 0x4a40,
0x4e00, 0x8ec1, 0x8f81, 0x4f40, 0x8d01, 0x4dc0, 0x4c80, 0x8c41,
0x4400, 0x84c1, 0x8581, 0x4540, 0x8701, 0x47c0, 0x4680, 0x8641,
0x8201, 0x42c0, 0x4380, 0x8341, 0x4100, 0x81c1, 0x8081, 0x4040,
};
static inline uint16_t crc_ibm_byte(uint16_t crc, const uint8_t c)
{
    const unsigned char lut = (crc ^ c) & 0xFF;
    return (crc >> 8) ^ crc_ibm_table[lut];
}

/**
 * crc_ibm - recompute the CRC for the data buffer
 * @crc - previous CRC value
 * @buffer - data pointer
 * @len - number of bytes in the buffer
 */
uint16_t crc_ibm(uint16_t crc, uint8_t const *buffer, size_t len)
{
    while (len--)
        crc = crc_ibm_byte(crc, *buffer++);
    return crc;
}
With these routines, I can take an array of 8-bit hex values and compute a CRC-16 checksum. The code compiles and runs without errors.
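For reference, both routines are invoked like this (the variable names are just for illustration; data1 is the 11-byte test array given further below):

unsigned int fromBits = crc16(data1);                      /* length hard-coded to 11 */
uint16_t fromTable = crc_ibm(0x0000, data1, sizeof data1); /* zero seed */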
The issue comes up when I try to verify the correctness of these computations. My goal is to make my CRC-16 work the same way as it does on another system; in other words, I would like to emulate the CRC-16 computation used on that system.
Below is a description of what message is sent to the original system's CRC-16 calculator:
"This CRC is generated using the CRC-16 polynomial by first clearing the CRC generator and then shifting in the command code (0Fh) of the Write Scratchpad command, the target addresses (TA1 and TA2), and all the data bytes...The data is written to the scratchpad starting at the beginning of the scratchpad."
With this, I use the following input:
uint8_t data1[11] =
{0x0F, 0x00, 0x00, 0x91, 0x0D, 0x38, 0xA0, 0x50, 0x00, 0x00, 0x00};
In the original system, the CRC-16 comes out as 0x4E2A, which matches neither the look-up table output nor the bit-by-bit output. In fact, the two outputs do not even match each other. This doesn't come as a big surprise, since the table I use was most likely not calculated the same way as my bit-by-bit approach.
TL;DR: Ultimately, what I would like is a way to calculate the CRC-16 so that it matches the output the original system gives for the input above. I'm also interested in learning how to build a CRC-16 look-up table that matches my bit-by-bit approach (again using my array of 8-bit numbers as input). Any advice would be appreciated.
The fix is to calculate the CRC-16 with the 0xA001 polynomial, which is 0x8005 with its bit order reversed; that is the form a right-shifting (LSB-first) loop requires. Code written this way will also print the example result shown on the Maxim page, 6390.
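A minimal sketch along those lines, reusing the question's structure with POLY changed to 0xA001 and the length passed in explicitly; with the final inversion and byte swap kept, it prints 4E2A for the question's data1, matching the original system:

#include <stdio.h>
#include <stdint.h>

#define POLY 0xA001 /* 0x8005 bit-reversed, for LSB-first (right-shift) loops */

/* Bit-by-bit CRC-16, LSB first, seed 0x0000 */
uint16_t crc16(const uint8_t *data_p, size_t len)
{
    uint16_t crc = 0x0000;
    unsigned int data;
    unsigned char i;
    size_t j;

    for (j = 0; j < len; j++)
    {
        for (i = 0, data = data_p[j]; i < 8; i++, data >>= 1)
        {
            if ((crc & 0x0001) ^ (data & 0x0001))
                crc = (crc >> 1) ^ POLY;
            else
                crc >>= 1;
        }
    }
    crc = ~crc;                                 /* Maxim sends the CRC inverted  */
    return (uint16_t)((crc << 8) | (crc >> 8)); /* swap bytes, as in the question */
}

int main(void)
{
    const uint8_t data1[11] =
        {0x0F, 0x00, 0x00, 0x91, 0x0D, 0x38, 0xA0, 0x50, 0x00, 0x00, 0x00};
    printf("%04X\n", crc16(data1, sizeof data1)); /* prints 4E2A */
    return 0;
}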
data_p is a pointer. Applying sizeof to it returns the number of bytes in a pointer on your system; it does not return the size of your array, which is what I expect you want. You need to pass the length as a separate parameter along with the array pointer.
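As for generating a look-up table that matches the bit-by-bit approach: run the same eight-step bit loop once for each of the 256 possible byte values and store the results. A minimal sketch, again assuming the reflected polynomial 0xA001 (entry 1 comes out as 0xC0C1, matching crc_ibm_table above):

#include <stdint.h>

/* Build a 256-entry CRC-16 table from the bit-by-bit rule:
 * entry n is the CRC of the single byte n, computed with a zero seed */
void crc16_build_table(uint16_t table[256])
{
    unsigned int n;
    int i;

    for (n = 0; n < 256; n++)
    {
        uint16_t crc = (uint16_t)n;
        for (i = 0; i < 8; i++)
            crc = (crc & 1) ? (uint16_t)((crc >> 1) ^ 0xA001)
                            : (uint16_t)(crc >> 1);
        table[n] = crc;
    }
}

Because crc_ibm_byte indexes the table with (crc ^ c) & 0xFF and shifts the CRC right by eight bits, a table generated this way keeps the table-driven and bit-by-bit computations in agreement, provided both use the same reflected polynomial.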