Dumb question: Why is the data compressed? Did you enable compression when negotiating the connection with the client (I'm guessing this has something to do with your iHttpServer)? How?
IHttpServer currently does not negotiate compression, because I cannot decompress the browser's messages.
To understand the problem I sniffed the communication between jServer and the browser. jServer accepts deflate compression when negotiating the protocol during the handshake.
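For reference, assuming jServer follows the standard permessage-deflate extension (RFC 7692), the negotiation visible in the handshake looks roughly like this (the path and host here are made up):

GET /ws HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Extensions: permessage-deflate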
The compressed data is what I posted in post #5. I can also read it in clear text because the sniffer software decompresses the data it captures, so I have the data both in clear and compressed form.
I'm trying to understand what compression jServer uses when communicating with the browser, so I can apply the same compression to my iHttpServer.
As you can see, the header changes: instead of AA it is AB, also because the result is one byte longer, which changes the header.
If I pass the sniffed string to the online tools, they fail to decompress it; but if I change the first byte of the sniffed string (the header) to AB, then they manage to decompress it.
Neither CompressedStreams.DecompressBytes nor java.util.zip.Inflater decompresses either string.
This isn't really my area of expertise, but I remember pounding my head against the wall a couple of years back while writing a bash script that piped things through multiple steps. The end result wasn't at all what I expected. It turned out that one of the commands took the liberty of adding a newline at the end of its output, which messed up the process and my day.
So my naïve suggestion is to make sure that you and Erel are actually working on identical input. It's easy to miss a newline at the end of the file, which in turn might make the output data one byte longer than expected.
In Java I solved it using java.util.zip.Inflater. It correctly gave an error for the header: to be recognized as a zlib/inflate stream, the byte array must start with two specific hex bytes, 78 9C (decimal 120, 156).
I put them at the beginning of the array and it works in Java.
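For anyone following along, here is a minimal Java sketch of the approach just described (the helper name and buffer size are mine): prepend 78 9C to the sniffed bytes and inflate, stopping when the inflater runs out of input, since the ADLER32 trailer is missing:

import java.io.ByteArrayOutputStream;
import java.util.zip.Inflater;

public class WsInflate {
    public static byte[] inflateWithHeader(byte[] raw) throws Exception {
        // Prepend the two zlib header bytes 78 9C, as described above:
        byte[] zlib = new byte[raw.length + 2];
        zlib[0] = 0x78;
        zlib[1] = (byte) 0x9C;
        System.arraycopy(raw, 0, zlib, 2, raw.length);

        Inflater inflater = new Inflater(); // expects the zlib wrapper
        inflater.setInput(zlib);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        // inflate() returns 0 once it needs more input; with the ADLER32
        // trailer missing, that is our signal that the data is complete.
        while (!inflater.finished()) {
            int n = inflater.inflate(buf);
            if (n == 0) break;
            out.write(buf, 0, n);
        }
        inflater.end();
        return out.toByteArray();
    }
}

Note that java.util.zip.Inflater also has a nowrap mode, new Inflater(true), which consumes raw deflate data with no zlib header or checksum at all; if jServer uses the standard permessage-deflate extension, raw deflate is what actually travels on the wire.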
Deflate.DecompressBytes doesn't work ... so for B4i I'm still at sea.
The input is the same, but from the last test I think it depends on the algorithm: with the right header added, zip.Inflater decompresses correctly.
As for Erel's test, you are right: he used dynamic encoding, whereas the websocket appears to use fixed encoding. I'll delete the post.
Update: it seems that in iOS the addition of these 78 9C header bytes should be avoided. See here
After several compression/decompression comparisons between the CompressedStreams class and the various tools, I have come to a final conclusion: CompressedStreams also requires the 78 9C header, which must be added if it is not there.
But the B4i version of CompressedStreams, unlike the B4J and B4A versions, also requires a 4-byte trailer at the end of the compressed bytes for data verification (a checksum).
In the websocket protocol these bytes are not transmitted, so to make the strings received from the browser compatible with CompressedStreams, this 4-byte verification trailer must be added.
The various websocket documents do not mention it; I guess that is because the decompression algorithms of the other platforms do not require it.
Now don't ask me how it is calculated, because I don't know that yet. What I need now is a good mathematician who can decode it ...
+---+---+
|CMF|FLG|   (2 bytes - defines the compression mode - more details below)
+---+---+---+---+
|    DICTID     |   (4 bytes - present only when FLG.FDICT is set - mostly not set)
+---+---+---+---+
|...compressed data...|   (variable size of data)
+---+---+---+---+
|    ADLER32    |   (4 bytes of checksum)
+---+---+---+---+
bits 0 to 4 FCHECK (check bits for CMF and FLG)
bit 5 FDICT (preset dictionary)
bits 6 to 7 FLEVEL (compression level)
The dictionary is a sequence of bytes which is initially fed to the compressor without producing any compressed output. DICTID is the Adler-32 checksum of this sequence of bytes.
9C = 10011100: bits 6-7 = 10 (FLEVEL), bit 5 = 0 (FDICT, preset dictionary), bits 0-4 = 11100 (FCHECK; the value must be such that CMF and FLG, when viewed as a 16-bit unsigned integer stored in MSB order, i.e. CMF*256 + FLG, is a multiple of 31).
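As a quick sanity check of that rule, assuming the header really is 78 9C, a few lines of Java reproduce the breakdown:

public class ZlibHeader {
    public static void main(String[] args) {
        int cmf = 0x78, flg = 0x9C;
        // CMF*256 + FLG must be a multiple of 31 (RFC 1950):
        System.out.println(((cmf << 8) | flg) % 31 == 0); // true (30876 = 31 * 996)
        System.out.println((flg >> 6) & 0x03);            // FLEVEL = 2
        System.out.println((flg >> 5) & 0x01);            // FDICT  = 0 (no preset dictionary)
        System.out.println(flg & 0x1F);                   // FCHECK = 28
    }
}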
So the Java decompressor correctly interprets the header as not requiring a checksum, while the iOS decoder, whatever the bits say, behaves as if the checksum is required.
I need to implement a special ZLib implementation which should run under .NET and Mono. The data/string messages are received via a socket and thus the checksum is missing. This is about raw strin...
At this point there seems to be no way out, since the checksum cannot be recovered and CompressedStreams requires it.
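For what it's worth, the ADLER32 trailer defined by RFC 1950 is the Adler-32 checksum of the uncompressed data, which is exactly why the receiving side cannot reconstruct it before decompressing. A minimal Java sketch of how it is computed (the class name and sample string are mine):

import java.nio.charset.StandardCharsets;
import java.util.zip.Adler32;

public class AdlerTrailer {
    public static void main(String[] args) {
        byte[] plain = "Hello".getBytes(StandardCharsets.UTF_8);
        Adler32 adler = new Adler32();
        adler.update(plain, 0, plain.length);
        long sum = adler.getValue();
        // RFC 1950 stores the trailer MSB first (big-endian):
        byte[] trailer = {
            (byte) (sum >>> 24), (byte) (sum >>> 16),
            (byte) (sum >>> 8),  (byte) sum
        };
        System.out.printf("ADLER32 = %08X (%d trailer bytes)%n", sum, trailer.length);
    }
}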
Considering that I have studied and re-studied the RFCs many times to understand the problem, and that I now understand the coding algorithm, I will try to create the decoding procedure based on this example code.
Huffman coding (also known as Huffman Encoding) is an algorithm for doing data compression, and it forms the basic idea behind file compression. This post talks about the fixed-length and variable-length encoding, uniquely decodable codes, prefix rules, and Huffman Tree construction.
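Since the browser messages appear to use fixed encoding (see above), it may help that the fixed Huffman table is never transmitted: it is fully specified by RFC 1951 §3.2.6. A sketch of the code lengths it prescribes for the literal/length alphabet:

public class FixedHuffman {
    // Code lengths for the 288 literal/length symbols in a fixed-encoding
    // block (RFC 1951 §3.2.6); the actual codes are derived from these
    // lengths with the canonical-code rules of §3.2.2.
    public static int[] literalLengths() {
        int[] len = new int[288];
        for (int i = 0;   i <= 143; i++) len[i] = 8; // literals 0..143
        for (int i = 144; i <= 255; i++) len[i] = 9; // literals 144..255
        for (int i = 256; i <= 279; i++) len[i] = 7; // end-of-block + length codes
        for (int i = 280; i <= 287; i++) len[i] = 8; // length codes 280..287
        return len;
    }
}

Distance codes in fixed-encoding blocks all use 5 bits.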
I have already started to implement the Deflate algorithm. The code works, but the creation of the dictionary is still missing, and I cannot find documentation for it. Of course, if a ready-made library worked, I would save time.
But I'll come back to this in a few days; I stopped my other work to deal with this and now I have to catch up.