We have an interesting situation here at work, concerning our internally developed binary protocol.
There are two libraries that parse the protocol, and they calculate the length of a block of data in different ways. Now I want to know which way is correct. Of course, that's debatable, given a history of hardware that uses the protocol for different purposes.
In the current firmware/software combination, the firmware generates a large number of packets that each fit into the Ethernet maximum payload length (i.e. 1500 bytes). With a packet length of 1500 bytes and data consisting of 2-byte samples, we get 730 samples in a packet. The rest is overhead: a 6-byte primary header, a 9-byte secondary header and a 24-byte data header, plus an end-of-packet byte containing the value 0xA5.
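As a sanity check, the byte budget above adds up exactly to one Ethernet payload. A quick sketch (the constant names are mine, not part of the protocol):

```python
# Byte budget for one firmware packet, using the sizes from the text above.
PRIMARY_HEADER = 6     # bytes
SECONDARY_HEADER = 9   # bytes
DATA_HEADER = 24       # bytes
END_OF_PACKET = 1      # one byte containing 0xA5
SAMPLE_SIZE = 2        # bytes per sample
SAMPLE_COUNT = 730

overhead = PRIMARY_HEADER + SECONDARY_HEADER + DATA_HEADER + END_OF_PACKET
total = overhead + SAMPLE_COUNT * SAMPLE_SIZE
print(total)  # 1500, the Ethernet maximum payload
```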
A number of these packets are received by an intermediate piece of software, called the Ethernet Daemon, and then concatenated into a larger packet, where the Data Header/Data part are repeated.
The current point of discussion is how to calculate the length of the block of data. You need to know this in order to find the next Data Header. The following fields in the data header can help:
Basically, samples64 says how many samples are contained in 64 bits (eight bytes), which can be represented graphically as below:
The above fields are pretty much explained now, except that the data_bits explanation is rather terse. The thing is, we software engineers round everything off to whole bytes. But in the real world, that's nonsense. For example, measuring a voltage will result in, say, 12 bits. For a 12-bit sample, we set the data_bits field to 12. Rounding off to bytes, this means that each sample will occupy two bytes. Thus, the samples64 field will be set to 4, meaning: four samples fit in 64 bits.
The developer of our Python library said that the samples64 field implies that the length of the Data field is always divisible by 8; in other words, that the Data field is built from 8-byte pieces. After discussion, it turned out that's not what the firmware does in our projects.
His calculation of the length of the Data field was:
Data field length (in bytes) = 8 * ( (sample_count + samples64 - 1) // samples64 )
(The double slash denotes integer division: the remainder is discarded.)
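In Python, his calculation looks like this (a sketch; the function name is mine):

```python
def data_field_length_py(sample_count: int, samples64: int) -> int:
    """Length of the Data field in bytes, assuming the field is built
    from 8-byte (64-bit) pieces: round the sample count up to a whole
    number of 64-bit words, then multiply by 8 bytes per word."""
    return 8 * ((sample_count + samples64 - 1) // samples64)
```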
The firmware would send a packet with a number of Data Header/Data fields. The first Data Header contains 31390 samples, each 16 bits. The second Data Header contains 8760 samples. The above calculation would yield for the first Data Header:
Data field length (in bytes) = 8 * ( (31390 + 4 - 1) // 4 )
Data field length (in bytes) = 8 * 7848
Data field length (in bytes) = 62784
And for the second Data Header:
Data field length (in bytes) = 8 * ( (8760 + 4 - 1) // 4 )
Data field length (in bytes) = 8 * 2190
Data field length (in bytes) = 17520
The Perl library does things differently; its way of calculating the length of the Data field is:
Data field length (in bytes) = sample_count * (data_bits / 8)
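The Perl library's rule can be sketched the same way (again, the function name is mine; this assumes data_bits has already been rounded up to a power of two, as discussed further down):

```python
def data_field_length_perl(sample_count: int, data_bits: int) -> int:
    """Length of the Data field in bytes: each sample occupies
    data_bits / 8 bytes, with no padding to 64-bit boundaries."""
    return sample_count * data_bits // 8
```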
Thus for the first and second Data Headers, this looks as follows:
Data field length (in bytes) = 31390 * (16 / 8)
Data field length (in bytes) = 62780
Data field length (in bytes) = 8760 * (16 / 8)
Data field length (in bytes) = 17520
For this to work, data_bits has to be rounded up to the next power of two. We don't have firmware that does otherwise, but in theory it's possible. This implies that the following combinations should be expected:
| samples64 field | data_bits field   |
|-----------------|-------------------|
| 1               | Between 33 and 64 |
| 2               | Between 17 and 32 |
| 4               | Between 9 and 16  |
| 8               | Between 5 and 8   |
| 16              | Either 3 or 4     |
| 32              | 2                 |
| 64              | 1                 |
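The table above can be expressed as a small helper that rounds data_bits up to the next power of two and divides that into 64 (a sketch; the name is mine):

```python
def samples64_for(data_bits: int) -> int:
    """How many samples fit in 64 bits, assuming each sample is
    padded up to the next power-of-two bit width (1..64)."""
    width = 1
    while width < data_bits:
        width *= 2
    return 64 // width
```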
We assume here that the firmware packs the data as efficiently as possible, of course. But that's not guaranteed. So the calculation of the byte length of the Data field should not involve data_bits. It should be as simple as sample count times sample size (in bytes).
The sample size basically comes from the samples64 field:
| samples64 field | sample size in bytes |
|-----------------|----------------------|
| 1               | 8                    |
| 2               | 4                    |
| 4               | 2                    |
| 8               | 1                    |
| 16              | 0.5                  |
| 32              | 0.25                 |
| 64              | 0.125                |
Of course, now we're pushing the boundaries of other libraries and hardware. Computers and networking equipment dictate that the minimum transfer unit is the byte. Thus, when the samples64 field is higher than 8, software must ignore the unused trailing bits of the Data field.
Data field length (in bytes) = sample_count * sample_size_in_bytes
Data field length (in bytes) = sample_count * ( 8 / samples64 )
Because we want to make sure that we're getting whole bytes, the result should be rounded up.
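In integer arithmetic, this proposed calculation (sample count times 8/samples64, rounded up to whole bytes) can be sketched as:

```python
def data_field_length(sample_count: int, samples64: int) -> int:
    """Length of the Data field in bytes: sample_count * (8 / samples64),
    rounded up so that a trailing partial byte still counts as one byte."""
    return (sample_count * 8 + samples64 - 1) // samples64
```

For the two Data Headers above this gives 62780 and 17520 bytes, matching the Perl library's results; for samples64 = 16 and, say, 11 samples it gives 6 bytes (5.5 rounded up).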