How to inflate a gzip with a really old zlib? - gzip

I'm using a PPC platform that has an older version of zlib ported to it. Is it possible to use zlib 1.1.3 to inflate an archive made with gzip 1.5?
$ gzip --list --verbose vmlinux.z
method  crc       date    time   compressed  uncompressed  ratio  uncompressed_name
defla   12169518  Apr 29  13:00     4261643       9199404  53.7%  vmlinux
The first 32 bytes of the archive are
00000000 1f 8b 08 08 29 f4 8a 60 00 03 76 6d 6c 69 6e 75 |....)..`..vmlinu|
00000010 78 00 ec 9a 7f 54 1c 55 96 c7 6f 75 37 d0 fc 70 |x....T.U..ou7..p|
I've tried using this code (where source is a pointer to the first byte at 1f 8b) with the three options A, B, and C for the windowBits initialization.
int ZEXPORT gunzip (dest, destLen, source, sourceLen)
    Bytef *dest;
    uLongf *destLen;
    const Bytef *source;
    uLong sourceLen;
{
    z_stream stream;
    int err;

    stream.next_in = (Bytef*)source;
    stream.avail_in = (uInt)sourceLen;
    /* Check for source > 64K on 16-bit machine: */
    if ((uLong)stream.avail_in != sourceLen) return Z_BUF_ERROR;

    stream.next_out = dest;
    stream.avail_out = (uInt)*destLen;
    if ((uLong)stream.avail_out != *destLen) return Z_BUF_ERROR;

    stream.zalloc = (alloc_func)my_alloc;
    stream.zfree = (free_func)my_free;

    /* option A */
    err = inflateInit(&stream);
    /* option B */
    err = inflateInit2(&stream, 15 + 16);
    /* option C */
    err = inflateInit2(&stream, -MAX_WBITS);

    if (err != Z_OK) return err;

    err = inflate(&stream, Z_FINISH);
    if (err != Z_STREAM_END) {
        inflateEnd(&stream);
        return err == Z_OK ? Z_BUF_ERROR : err;
    }
    *destLen = stream.total_out;

    err = inflateEnd(&stream);
    return err;
}
Option A:
zlib inflate() fails with error Z_DATA_ERROR. "unknown compression method"
z_stream.avail_in = 4261640
z_stream.total_in = 1
z_stream.avail_out = 134152192
z_stream.total_out = 0
Option B:
zlib inflateInit2_() fails at line 118 with a Z_STREAM_ERROR.
/* set window size */
if (w < 8 || w > 15)
{
    inflateEnd(z);
    return Z_STREAM_ERROR;
}
Option C:
zlib inflate() fails with error Z_DATA_ERROR. "invalid block type"
z_stream.avail_in = 4261640
z_stream.total_in = 1
z_stream.avail_out = 134152192
z_stream.total_out = 0

Your option B would work for zlib 1.2.1 or later.
With zlib 1.1.3, there are two ways.
Use gzopen(), gzread(), and gzclose() to read the gzip stream from a file and decompress it into memory.
To decompress from the gzip stream in memory, use your option C, raw inflate, after manually decoding the gzip header. Use crc32() to calculate the CRC-32 of the decompressed data as you inflate it. When the inflation completes, manually decode the gzip trailer, checking the CRC-32 and size of the decompressed data.
Manual decoding of the gzip header and trailer is simple to implement. See RFC 1952 for the description of the header and trailer.
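For reference, here is a minimal, untested sketch of the second approach against zlib 1.1.3, assuming the whole gzip member is already in memory and that the buffer sizes fit in a uInt (gz_inflate is just an illustrative name; plug in my_alloc/my_free instead of the default allocators if your platform needs them):
#include <string.h>
#include "zlib.h"

/* RFC 1952 FLG bits */
#define FTEXT    0x01
#define FHCRC    0x02
#define FEXTRA   0x04
#define FNAME    0x08
#define FCOMMENT 0x10

int gz_inflate(Bytef *dest, uLongf *destLen, const Bytef *source, uLong sourceLen)
{
    z_stream strm;
    const Bytef *p = source;
    uLong crc, isize;
    int flg, err;

    /* fixed 10-byte header: magic, CM (8 = deflate), FLG, MTIME, XFL, OS */
    if (sourceLen < 18 || p[0] != 0x1f || p[1] != 0x8b || p[2] != 8)
        return Z_DATA_ERROR;
    flg = p[3];
    p += 10;
    if (flg & FEXTRA) {                      /* 2-byte little-endian length + data */
        uInt xlen = p[0] + (p[1] << 8);
        p += 2 + xlen;
    }
    if (flg & FNAME)    { while (*p++) ; }   /* zero-terminated original file name */
    if (flg & FCOMMENT) { while (*p++) ; }   /* zero-terminated comment */
    if (flg & FHCRC)    { p += 2; }          /* CRC-16 of the header */

    memset(&strm, 0, sizeof(strm));          /* Z_NULL allocators -> zlib defaults */
    strm.next_in   = (Bytef *)p;
    strm.avail_in  = (uInt)(sourceLen - (uLong)(p - source) - 8);  /* minus trailer */
    strm.next_out  = dest;
    strm.avail_out = (uInt)*destLen;

    err = inflateInit2(&strm, -MAX_WBITS);   /* raw deflate, no zlib header */
    if (err != Z_OK) return err;
    err = inflate(&strm, Z_FINISH);
    if (err != Z_STREAM_END) {
        inflateEnd(&strm);
        return err == Z_OK ? Z_BUF_ERROR : err;
    }
    *destLen = strm.total_out;
    inflateEnd(&strm);

    /* trailer: CRC-32 and ISIZE of the uncompressed data, both little-endian */
    p = source + sourceLen - 8;
    crc   = (uLong)p[0] | ((uLong)p[1] << 8) | ((uLong)p[2] << 16) | ((uLong)p[3] << 24);
    isize = (uLong)p[4] | ((uLong)p[5] << 8) | ((uLong)p[6] << 16) | ((uLong)p[7] << 24);
    if (crc32(crc32(0L, Z_NULL, 0), dest, (uInt)*destLen) != crc ||
        (*destLen & 0xffffffffUL) != isize)
        return Z_DATA_ERROR;
    return Z_OK;
}
The header skipping follows RFC 1952: a fixed 10 bytes, then the optional extra field, file name, comment, and header CRC depending on the FLG bits; the trailer is the little-endian CRC-32 and ISIZE.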

Related

wolfTPM always receiving TPM_RC_BAD_TAG

I am trying to use an Infineon SLB9670 with wolfTPM. When porting the library to custom SPI functions and OS, I am receiving 0x1E (30 decimal), which means TPM_RC_BAD_TAG. Is my SPI connection correct if I have already received the caps?
(The same code works fine on an STM32F7 board with the STM HAL SPI implementation.)
Thanks
rc = TPM2_Init(&tpm2Ctx, TPM2_IoCb, userCtx);
if (rc != 0)
{
    tst_printf("\r\nTPM init failed! rc = %i;", rc);
    break;
}
else
{
    tst_printf("\r\nTPM init success!");
    tst_printf("\r\nTPM2: Caps 0x%08x, Did 0x%04x, Vid 0x%04x, Rid 0x%2x \n",
        tpm2Ctx.caps,
        tpm2Ctx.did_vid >> 16,
        tpm2Ctx.did_vid & 0xFFFF,
        tpm2Ctx.rid);
}

/* define the default session auth */
XMEMSET(tpm_session, 0, sizeof(tpm_session));
tpm_session[0].sessionHandle = TPM_RS_PW;
TPM2_SetSessionAuth(tpm_session);
if (rc != TPM_RC_SUCCESS &&
    rc != TPM_RC_INITIALIZE /* TPM_RC_INITIALIZE = Already started */ ) {
    tst_printf("TPM2_SetSessionAuth failed 0x%x: %s\n", rc, TPM2_GetRCString(rc));
    break;
}

Startup_In startup;
XMEMSET(&startup, 0, sizeof(Startup_In));
startup.startupType = TPM_SU_STATE;
rc = TPM2_Startup(&startup);
if (rc != TPM_RC_SUCCESS &&
    rc != TPM_RC_INITIALIZE /* TPM_RC_INITIALIZE = Already started */ ) {
    tst_printf("TPM2_Startup failed %i: %s\n", rc, TPM2_GetRCString(rc));
    //break;
}
tst_printf("\r\nTPM2_Startup pass!\f");
Output:
TPM init success!
TPM2: Caps 0x30000697, Did 0x001b, Vid 0x15d1, Rid 0x10
TPM2_Startup failed 30: Unknown
Edit:
Values of cmd in TPM2_TIS_SendCommand:
80 01 00 00 00 0c 00 00 01 44 00 00 (working example)
00 00 00 00 00 0c 00 00 01 44 00 00 (my case)
80 01 is TPM_ST_NO_SESSIONS, which has to be added by TPM2_Packet_Finalize!
The mistake was in the functions that prepare the packet. My version of the IAR compiler cannot handle __REV() for 16-bit values. I used a small macro to handle it, and now everything works fine.
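For reference, a portable 16-bit byte swap can stand in for the intrinsic when building the big-endian tag field; the exact macro used in the fix is not shown, so the BSWAP16 name below is only illustrative:
#include <stdint.h>

/* Illustrative replacement for a 16-bit __REV-style intrinsic:
 * swaps the two bytes of x, e.g. 0x8001 (TPM_ST_NO_SESSIONS) -> 0x0180. */
#define BSWAP16(x) ((uint16_t)((((uint16_t)(x) & 0x00FFU) << 8) | \
                               (((uint16_t)(x) & 0xFF00U) >> 8)))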

Efficient Go serialization of struct to disk

I've been tasked with replacing C++ code with Go, and I'm quite new to the Go APIs. I am using gob to encode hundreds of key/value entries to disk pages, but the gob encoding has too much bloat that's not needed.
package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
)

type Entry struct {
    Key string
    Val string
}

func main() {
    var buf bytes.Buffer
    enc := gob.NewEncoder(&buf)
    e := Entry{"k1", "v1"}
    enc.Encode(e)
    fmt.Println(buf.Bytes())
}
This produces a lot of bloat that I don't need:
[35 255 129 3 1 1 5 69 110 116 114 121 1 255 130 0 1 2 1 3 75 101 121 1 12 0 1 3 86 97 108 1 12 0 0 0 11 255 130 1 2 107 49 1 2 118 49 0]
I want to serialize each string's len followed by the raw bytes like:
[0 0 0 2 107 49 0 0 0 2 118 49]
I am saving millions of entries, so the additional bloat in the encoding increases the file size by roughly 10x.
How can I serialize it to the latter format without manual coding?
If you zip a file named a.txt containing the text "hello" (which is 5 characters), the resulting zip will be around 115 bytes. Does this mean the zip format is not efficient for compressing text files? Certainly not. There is an overhead. If the file contains "hello" a hundred times (500 bytes), zipping it will result in a file of 120 bytes! 1x"hello" => 115 bytes, 100x"hello" => 120 bytes! We added 495 bytes, and yet the compressed size only increased by 5 bytes.
Something similar is happening with the encoding/gob package:
The implementation compiles a custom codec for each data type in the stream and is most efficient when a single Encoder is used to transmit a stream of values, amortizing the cost of compilation.
When you "first" serialize a value of a type, the definition of the type also has to be included / transmitted, so the decoder can properly interpret and decode the stream:
A stream of gobs is self-describing. Each data item in the stream is preceded by a specification of its type, expressed in terms of a small set of predefined types.
Let's return to your example:
var buf bytes.Buffer
enc := gob.NewEncoder(&buf)
e := Entry{"k1", "v1"}
enc.Encode(e)
fmt.Println(buf.Len())
It prints:
48
Now let's encode a few more of the same type:
enc.Encode(e)
fmt.Println(buf.Len())
enc.Encode(e)
fmt.Println(buf.Len())
Now the output is:
60
72
Try it on the Go Playground.
Analyzing the results:
Additional values of the same Entry type only cost 12 bytes, while the first is 48 bytes because the type definition is also included (which is ~26 bytes), but that is a one-time overhead.
So basically you transmit 2 strings, "k1" and "v1", which are 4 bytes, and the lengths of the strings also have to be included; using 4 bytes for each (the size of int on 32-bit architectures) gives you the 12 bytes, which is the "minimum". (Yes, you could use a smaller type for the length, but that would have its limitations. A variable-length encoding would be a better choice for small numbers; see the encoding/binary package.)
All in all, encoding/gob does a pretty good job for your needs. Don't get fooled by initial impressions.
If this 12 bytes for one Entry is too "much" for you, you can always wrap the stream into a compress/flate or compress/gzip writer to further reduce the size (in exchange for slower encoding/decoding and slightly higher memory requirement for the process).
Demonstration:
Let's test the following 5 solutions:
Using a "naked" output (no compression)
Using compress/flate to compress the output of encoding/gob
Using compress/zlib to compress the output of encoding/gob
Using compress/gzip to compress the output of encoding/gob
Using github.com/dsnet/compress/bzip2 to compress the output of encoding/gob
We will write a thousand entries, changing the keys and values of each, being "k000", "v000", "k001", "v001" etc. This means the uncompressed size of an Entry is 4 bytes + 4 bytes + 4 bytes + 4 bytes = 16 bytes (2x4 bytes of text, 2x4 bytes of lengths).
The code looks like this:
for _, name := range []string{"Naked", "flate", "zlib", "gzip", "bzip2"} {
    buf := &bytes.Buffer{}
    var out io.Writer
    switch name {
    case "Naked":
        out = buf
    case "flate":
        out, _ = flate.NewWriter(buf, flate.DefaultCompression)
    case "zlib":
        out, _ = zlib.NewWriterLevel(buf, zlib.DefaultCompression)
    case "gzip":
        out = gzip.NewWriter(buf)
    case "bzip2":
        out, _ = bzip2.NewWriter(buf, nil)
    }
    enc := gob.NewEncoder(out)
    e := Entry{}
    for i := 0; i < 1000; i++ {
        e.Key = fmt.Sprintf("k%3d", i)
        e.Val = fmt.Sprintf("v%3d", i)
        enc.Encode(e)
    }
    if c, ok := out.(io.Closer); ok {
        c.Close()
    }
    fmt.Printf("[%5s] Length: %5d, average: %5.2f / Entry\n",
        name, buf.Len(), float64(buf.Len())/1000)
}
Output:
[Naked] Length: 16036, average: 16.04 / Entry
[flate] Length: 4120, average: 4.12 / Entry
[ zlib] Length: 4126, average: 4.13 / Entry
[ gzip] Length: 4138, average: 4.14 / Entry
[bzip2] Length: 2042, average: 2.04 / Entry
Try it on the Go Playground.
As you can see, the "naked" output is 16.04 bytes/Entry, just a little over the calculated size (due to the one-time tiny overhead discussed above).
When you use flate, zlib or gzip to compress the output, you can reduce the output size to about 4.13 bytes/Entry, which is about ~26% of the theoretical size; I'm sure that satisfies you. If not, you can reach for libraries providing higher-efficiency compression like bzip2, which in the above example resulted in 2.04 bytes/Entry, being 12.7% of the theoretical size!
(Note that with "real-life" data the compression ratio would probably be a lot higher, as the keys and values I used in the test are very similar and thus compress really well; still, the ratio should be around 50% with real-life data.)
Use protobuf to efficiently encode your data.
https://github.com/golang/protobuf
Your main would look like this:
package main

import (
    "fmt"
    "log"

    "github.com/golang/protobuf/proto"
)

func main() {
    e := &Entry{
        Key: proto.String("k1"),
        Val: proto.String("v1"),
    }
    data, err := proto.Marshal(e)
    if err != nil {
        log.Fatal("marshaling error: ", err)
    }
    fmt.Println(data)
}
You create a file, example.proto like this:
package main;

message Entry {
    required string Key = 1;
    required string Val = 2;
}
You generate the go code from the proto file by running:
$ protoc --go_out=. *.proto
You can examine the generated file, if you wish.
You can run and see the results output:
$ go run *.go
[10 2 107 49 18 2 118 49]
"Manual coding", you're so afraid of, is trivially done in Go using the standard encoding/binary package.
You appear to store string length values as 32-bit integers in big-endian format, so you can just go on and do just that in Go:
package main

import (
    "bytes"
    "encoding/binary"
    "fmt"
    "io"
)

func encode(w io.Writer, s string) (n int, err error) {
    var hdr [4]byte
    binary.BigEndian.PutUint32(hdr[:], uint32(len(s)))
    n, err = w.Write(hdr[:])
    if err != nil {
        return
    }
    n2, err := io.WriteString(w, s)
    n += n2
    return
}

func main() {
    var buf bytes.Buffer
    for _, s := range []string{
        "ab",
        "cd",
        "de",
    } {
        _, err := encode(&buf, s)
        if err != nil {
            panic(err)
        }
    }
    fmt.Printf("%v\n", buf.Bytes())
}
Playground link.
Note that in this example I'm writing to a byte buffer, but that's for demonstration purposes only—since encode() writes to an io.Writer, you can pass it an opened file, a network socket and anything else implementing that interface.

3DES authentication no response

I send the command 1A:00 to the Mifare Ultralight C tag by using an APDU command.
Here is the log:
inList passive target
write: 4A 1 0
read: 4B 1 1 0 44 0 7 4 C2 35 CA 2C 2C 80
write: 40 1 1A 0
I don't know why, when I send 1A 00, it does not respond with RndA.
My code is this:
bool success = nfc.inListPassiveTarget();
if (success) {
    uint8_t auth_apdu[] = {
        0x1A,
        0x00
    };
    uint8_t response[255];
    uint8_t responseLength = 255;
    success = nfc.inDataExchange(auth_apdu, sizeof(auth_apdu), response, &responseLength);
    if (success) {
        Serial.println("\n Successfully sent 1st auth_apdu \n");
        Serial.println("\n The response is: \n");
        nfc.PrintHexChar(response, responseLength);
    }
}
When I try to read pages with command 0x30, it works OK, but not the authentication command 1A:00.
I don't know what I am doing wrong here.
The answer is that I should use inCommunicateThru (0x42) instead of inDataExchange (0x40).
Thus the correct command should be: 0x42 1A 0

Formatting a string to hex and validating it

I have an application that accepts hex values from a C++/CLI richtextbox.
The string comes from a user input.
Sample input and expected output.
01 02 03 04 05 06 07 08 09 0A //good input
0102030405060708090A //bad input but can automatically be converted to good by adding spaces.
XX ZZ DD AA OO PP II UU HH SS //bad input this is not hex
01 000 00 00 00 00 00 01 001 0 //bad input hex is only 2 chars
How to write a function that:
1. Detects whether the input is good or bad.
2. If it's bad input, checks what kind of bad input it is: no spaces, not hex, or not split into 2-character groups.
3. If it's the no-spaces kind of bad input, just adds the spaces automatically.
So far I have made a space checker by searching for a space, like this:
for ( int i = 2; i < input.size(); i++ )
{
    if(inputpkt[i] == ' ')
    {
        cout << "good input" << endl;
        i = i+2;
    }
    else
    {
        cout << "bad input. I will format for you" << endl;
    }
}
But it doesn't really work as expected because it returns this:
01 000 //bad input
01 000 00 00 00 00 00 01 001 00 //good input
Update
1. Check if the input is actually hex:
bool ishex(std::string const& s)
{
    return s.find_first_not_of("0123456789abcdefABCDEF ", 0) == std::string::npos;
}
Are you operating in C++/CLI, or in plain C++? You've got it tagged C++/CLI, but you're using std::string, not .Net System::String.
I suggest this as a general plan: First, split your large string into smaller ones based on any whitespace. For each individual string, make sure it only contains [0-9a-fA-F], and is a multiple of two characters long.
The implementation could go something like this:
array<Byte>^ ConvertString(String^ input)
{
    List<System::Byte>^ output = gcnew List<System::Byte>();
    // Splitting on a null string array makes it split on all whitespace.
    array<String^>^ words = input->Split(
        (array<String^>^)nullptr,
        StringSplitOptions::RemoveEmptyEntries);
    for each(String^ word in words)
    {
        if(word->Length % 2 == 1) throw gcnew Exception("Invalid input string");
        for(int i = 0; i < word->Length; i += 2)
        {
            output->Add((Byte)((GetHexValue(word[i]) << 4) + GetHexValue(word[i+1])));
        }
    }
    return output->ToArray();
}
int GetHexValue(Char c) // Note: Upper case 'C' makes it System::Char
{
    // If not [0-9a-fA-F], throw gcnew Exception("Invalid input string");
    // If is [0-9a-fA-F], return integer between 0 and 15.
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    throw gcnew Exception("Invalid input string");
}

Using IOKit to return Mac's Serial number returns 4 extra characters

I'm playing with IOKit and have the following code. The general idea is to pass a platformExpert key to this small Core Foundation command-line application and have it print the decoded string. The test case is "serial-number". The code below, when run like:
./compiled serial-number
almost works, but it returns the last 4 characters of the serial number at the beginning of the string; i.e. for an example serial such as C12D2JMPDDQX it would return
DDQXC12D2JMPDDQX
Any ideas?
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>
int main (int argc, const char * argv[]) {
    CFStringRef parameter = CFSTR("serial-number");
    if (argv[1]) {
        parameter = CFStringCreateWithCString(
            NULL,
            argv[1],
            kCFStringEncodingUTF8);
    }
    CFDataRef data;
    io_service_t platformExpert = IOServiceGetMatchingService(kIOMasterPortDefault, IOServiceMatching("IOPlatformExpertDevice"));
    if (platformExpert)
    {
        data = IORegistryEntryCreateCFProperty(platformExpert,
            parameter,
            kCFAllocatorDefault, 0);
    }
    IOObjectRelease(platformExpert);
    CFIndex bufferLength = CFDataGetLength(data);
    UInt8 *buffer = malloc(bufferLength);
    CFDataGetBytes(data, CFRangeMake(0, bufferLength), (UInt8*) buffer);
    CFStringRef string = CFStringCreateWithBytes(kCFAllocatorDefault,
        buffer,
        bufferLength,
        kCFStringEncodingUTF8,
        TRUE);
    CFShow(string);
    return 0;
}
A simpler solution:
#include <CoreFoundation/CoreFoundation.h>
#include <IOKit/IOKitLib.h>
int main()
{
    CFMutableDictionaryRef matching = IOServiceMatching("IOPlatformExpertDevice");
    io_service_t service = IOServiceGetMatchingService(kIOMasterPortDefault, matching);
    CFStringRef serialNumber = IORegistryEntryCreateCFProperty(service,
        CFSTR("IOPlatformSerialNumber"), kCFAllocatorDefault, 0);
    const char* str = CFStringGetCStringPtr(serialNumber, kCFStringEncodingMacRoman);
    printf("%s\n", str); //->stdout
    //CFShow(serialNumber); //->stderr
    IOObjectRelease(service);
    return 0;
}
compile with:
clang -framework IOKit -framework ApplicationServices cpuid.c -o cpuid
Fork from github if you like ;)
https://github.com/0infinity/IOPlatformSerialNumber
You may be misinterpreting the value of the serial-number parameter. If I use ioreg -f -k serial-number, I get this:
| "serial-number" =
| 00000000: 55 51 32 00 00 00 00 00 00 00 00 00 00 XX XX XX XX UQ2..........XXXX
| 00000011: XX XX XX XX 55 51 32 00 00 00 00 00 00 00 00 00 00 XXXXUQ2..........
| 00000022: 00 00 00 00 00 00 00 00 00 .........
(I've X'd out my Mac's serial number except for the repeated part.)
You don't see the null characters when you show the string because, well, they're null characters. I don't know why it has what seems like multiple fields separated by null characters, but that's what it seems to be.
I recommend doing further investigation to make sure there isn't a specification for how this data is supposed to be interpreted; if you don't find anything, I'd skip through the first run of nulls and get everything after that up to the next run of nulls.
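A rough, untested sketch of that extraction, operating on the raw bytes already copied out with CFDataGetBytes in the question's code (extract_serial is a made-up helper name, not an IOKit API), could look like this:
#include <stddef.h>

/* Skip the leading field and the first run of NULs, then copy what follows
 * up to the next NUL. This mirrors the suggestion above and assumes the
 * property really is laid out as shown in the ioreg dump. */
static size_t extract_serial(const unsigned char *buf, size_t len,
                             char *out, size_t outLen)
{
    size_t i = 0, n = 0;
    if (outLen == 0) return 0;
    while (i < len && buf[i] != 0) i++;   /* leading repeated fragment, e.g. "UQ2" */
    while (i < len && buf[i] == 0) i++;   /* first run of NUL bytes */
    while (i < len && buf[i] != 0 && n + 1 < outLen)
        out[n++] = (char)buf[i++];        /* the full serial, up to the next NUL */
    out[n] = '\0';
    return n;
}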