How to decompress pbzip2 data in memory buffer by using libbz2 library in C++ - bzip2

I have a working version of decompressing bzip2 data where I call the BZ2_bzDecompress API. It goes something like this:
while (bytes_input < len) {
  isDone = false;

  // Initialize the input buffer and its length
  size_t in_buffer_size = len - bytes_input;
  the_bz2_stream.avail_in = in_buffer_size;
  the_bz2_stream.next_in = (char*)data + bytes_input;

  size_t out_buffer_size = output_size - bytes_uncompressed;  // size of output buffer
  if (out_buffer_size == 0) {  // out of space in the output buffer
    break;
  }

  the_bz2_stream.avail_out = out_buffer_size;
  the_bz2_stream.next_out = (char*)output + bytes_uncompressed;  // output buffer

  ret = BZ2_bzDecompress(&the_bz2_stream);
  if (ret != BZ_OK && ret != BZ_STREAM_END) {
    throw Bzip2Exception("Bzip2 failed. ", ret);
  }

  bytes_input += in_buffer_size - the_bz2_stream.avail_in;
  bytes_uncompressed += out_buffer_size - the_bz2_stream.avail_out;
  *data_consumed = bytes_input;

  if (ret == BZ_STREAM_END) {
    ret = BZ2_bzDecompressEnd(&the_bz2_stream);
    if (ret != BZ_OK) {
      throw Bzip2Exception("Bzip2 fail. ", ret);
    }
    isDone = true;
  }
}
This works great for native bzip2 compressed files, but for pbzip2 (Parallel Bzip2) and "Splittable" bzip2 data, it throws a "BZ_PARAM_ERROR".
I see that the pbzip2 documentation says this:
Data compressed with pbzip2 is broken into multiple streams and each
stream is bzip2 compressed looking like this:
[-----|-----|-----|-----|-----|-----|-----|-----|-----]
If you are writing software with libbzip2 to decompress data created
with pbzip2, you must take into account that the data contains
multiple bzip2 streams so you will encounter end-of-stream markers
from libbzip2 after each stream and must look-ahead to see if there
are any more streams to process before quitting. The bzip2 program
itself will automatically handle this condition.
Source: http://compression.ca/pbzip2/
Can someone please tell me how to handle this? Should I be using some other libbz2 API?
Also, pbzip2 files are compatible with the normal "bunzip2" command. How is it that bzip2 handles this gracefully while my code throws a BZ_PARAM_ERROR?
Thanks.

After your BZ2_bzDecompressEnd() you need to call BZ2_bzDecompressInit() again (you must have called it initially before that loop), if there is still data left to decompress, i.e. bytes_input < len.
To decompress each of the |-----| blocks, you need to do an init, some number of decompress calls, and an end. So if you still have input left, then you need to do another init, n*decompress, end.
Make sure that you do a final end, in order to avoid a big memory leak.
You're getting a BZ_PARAM_ERROR because you are trying to use an uninitialized bz_stream to decompress. Once you do BZ2_bzDecompressEnd(), you can't use that bz_stream any more, unless you do a BZ2_bzDecompressInit() on it.
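For reference, here's a minimal sketch of that pattern, assuming the input buffer holds one or more complete concatenated bzip2 streams (the helper name and chunk size are mine, not from the question):

#include <bzlib.h>
#include <cstring>
#include <stdexcept>
#include <vector>

// Sketch: decompress a buffer that may contain several concatenated bzip2
// streams (as produced by pbzip2). After each BZ_STREAM_END, end the stream
// and re-init before decompressing the next one.
std::vector<char> decompress_multi_stream(const char* data, size_t len) {
  std::vector<char> out;
  size_t bytes_input = 0;
  while (bytes_input < len) {
    bz_stream strm;
    std::memset(&strm, 0, sizeof(strm));
    if (BZ2_bzDecompressInit(&strm, 0, 0) != BZ_OK)
      throw std::runtime_error("BZ2_bzDecompressInit failed");

    strm.next_in = const_cast<char*>(data) + bytes_input;
    strm.avail_in = static_cast<unsigned int>(len - bytes_input);  // assumes < 4 GiB of input

    int ret = BZ_OK;
    char chunk[64 * 1024];
    while (ret != BZ_STREAM_END) {
      strm.next_out = chunk;
      strm.avail_out = sizeof(chunk);
      ret = BZ2_bzDecompress(&strm);
      if (ret != BZ_OK && ret != BZ_STREAM_END) {
        BZ2_bzDecompressEnd(&strm);
        throw std::runtime_error("BZ2_bzDecompress failed");
      }
      out.insert(out.end(), chunk, chunk + (sizeof(chunk) - strm.avail_out));
    }
    bytes_input = len - strm.avail_in;  // how far into the input this stream reached
    BZ2_bzDecompressEnd(&strm);         // end this stream; the loop re-inits if more data remains
  }
  return out;
}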

Related

How can I read \x1a from a file? [duplicate]

I am attempting to write a BitTorrent client. In order to parse the file, etc., I need to read a torrent file into memory. I have noticed that fread is not reading the entire file into my buffer. After further investigation, it appears that whenever the symbol shown below is encountered in the file, fread stops reading the file. Calling feof on the FILE* pointer returns 16, indicating that the end of file has been reached. This occurs no matter where the symbol is placed. Can somebody explain why this happens and suggest any solutions that may work?
The symbol in question is the \x1a character from the title (it renders as an arrow in my editor):
Here is the code that does the read operation:
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

char *read_file(const char *file, long long *len){
    struct stat st;
    char *ret = NULL;
    FILE *fp;

    //store the size/length of the file
    if(stat(file, &st)){
        return ret;
    }
    *len = st.st_size;

    //open a stream to the specified file
    fp = fopen(file, "r");
    if(!fp){
        return ret;
    }

    //allocate space in the buffer for the file
    ret = (char*)malloc(*len);
    if(!ret){
        return NULL;
    }

    //Break down the call to fread into smaller chunks
    //to account for a known bug which causes fread to
    //behave strangely with large files
    //Read the file into the buffer
    //fread(ret, 1, *len, fp);
    if(*len > 10000){
        char *retTemp = NULL;
        retTemp = ret;
        int remaining = *len;
        int read = 0, error = 0;
        while(remaining > 1000){
            read = fread(retTemp, 1, 1000, fp);
            if(read < 1000){
                error = feof(fp);
                if(error != 0){
                    printf("Error: %d\n", error);
                }
            }
            retTemp += 1000;
            remaining -= 1000;
        }
        fread(retTemp, 1, remaining, fp);
    } else {
        fread(ret, 1, *len, fp);
    }

    //cleanup by closing the file stream
    fclose(fp);
    return ret;
}
Thank you for your time :)
Your question is oddly relevant as I recently ran into this problem in an application here at work last week!
The ASCII value of this character is decimal 26 (0x1A, SUB, the SUBSTITUTE control character). It is used to represent the CTRL+Z key sequence or an end-of-file marker.
Change your fopen mode ("In [Text] mode, CTRL+Z is interpreted as an end-of-file character on input.") to get around this on Windows:
fp = fopen(file, "rb"); /* b for 'binary', disables Text-mode translations */
You should open the file in binary mode. Some platforms, in text (default) mode, interpret some bytes as being physical end of file markers.
You're opening the file in text rather than raw/binary mode - the arrow is ASCII for EOF. Specify "rb" rather than just "r" for your fopen call.
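For illustration, a minimal sketch of a read routine using binary mode (a hypothetical helper, not the poster's code); the key points are the "rb" mode and checking how much fread actually returned:

#include <stdio.h>
#include <stdlib.h>

/* Sketch: open in binary mode so 0x1A is not treated as end-of-file on
   Windows, and check fread's return value instead of assuming success. */
char *read_whole_file(const char *path, long *len)
{
    FILE *fp = fopen(path, "rb");   /* "b": no text-mode translation */
    if (!fp)
        return NULL;
    fseek(fp, 0L, SEEK_END);
    *len = ftell(fp);
    rewind(fp);
    char *buf = (char*)malloc(*len);
    if (buf && fread(buf, 1, (size_t)*len, fp) != (size_t)*len) {
        free(buf);                  /* short read: report failure */
        buf = NULL;
    }
    fclose(fp);
    return buf;
}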

How to detect termination character in SChannel-based HTTPS client

I've searched StackOverflow trying to find a similar problem, but haven't come across it, so I am posting this question.
I am trying to write a C++ HTTPS client using Microsoft's SChannel libraries, and I'm getting stochastic errors with chunked message transfer. The issue only seems to occur on very long downloads -- short ones generally work OK. Most of the time the code works properly, even for long downloads, but occasionally the recv() call times out gracefully and disconnects my TLS session, and other times I get an incomplete last packet. The stochastic errors appear to be the result of the differently sized chunks and encryption blocks the server uses to pass the data. I know I need to handle this variation, but while this would be easy to solve on an unencrypted HTTP connection, the encryption aspect is causing me problems.
First, the timeout problem, which occurs about 5% of the time I request large HTTP requests (about 10 MB of data from a single HTTP GET request).
The timeout occurs because, on the last chunk, I specify a bigger receive buffer than the data remaining on a blocking socket. The obvious fix is to request exactly the number of bytes I need for the next chunk, and that is what I did. But for some reason, the amount received from each request is less than what I request, yet no data appears to be missing after decryption. I'm guessing this must be due to some compression in the data stream, but I don't know. In any event, if it is using compression, I have no idea how to translate the size of the decrypted, uncompressed byte stream into the size of the compressed, encrypted byte stream (including the encryption headers and trailers) so that I can request exactly the right number of bytes. Can anyone help me do that?
The alternative approach is for me to just look for two CR+LFs in a row, which would also signal the end of the HTTPS response. But because the data is encrypted, I can't figure out how to look byte by byte. SChannel's DecryptMessage() seems to do its decryptions in blocks, not byte by byte. Can anyone in this forum provide any advice on how to do byte-by-byte decryption to enable me to look for the end of the chunked output?
The second problem is DecryptMessage sometimes erroneously thinks it is done decrypting before I reach the actual end of the message. The resultant behavior is I go on to the next HTTP request, and I get the rest of the previous response where I am expecting to see the header of the new request.
The obvious solution to this is to check the contents of the decrypted message to see if we actually reached the end, and if not, try to receive more data before sending the next HTTP request. But when I do this, and try to decrypt, I get a decryption error message.
Any advice or help anyone can provide on a strategy would be appreciated. I've attached the relevant code sections for the read/decrypt process of the HTTP body -- I'm not including the header read and parsing because that is working without any problems.
do
{
    // Note this receives large files OK, but I can't tell when I hit the end of the buffer, and this
    // hangs. Need to consider a non-blocking socket?
    // numBytesReceived = recv(windowsSocket, (char*)inputBuffer, inputBufSize, 0);
    m_ErrorLog << "Next read size expected " << nextReadSize << endl;
    numBytesReceived = recv(windowsSocket, (char*)inputBuffer, nextReadSize, 0);
    m_ErrorLog << "NumBytesReceived = " << numBytesReceived << endl;

    if (m_BinaryBufLen + numBytesReceived > m_BinaryBufAllocatedSize)
        ::EnlargeBinaryBuffer(m_BinaryBuffer, m_BinaryBufAllocatedSize, m_BinaryBufLen, numBytesReceived + 1);
    memcpy(m_BinaryBuffer + m_BinaryBufLen, inputBuffer, numBytesReceived);
    m_BinaryBufLen += numBytesReceived;
    lenStartDecryptedChunk = decryptedBodyLen;

    do
    {
        // Decrypt the received data.
        Buffers[0].pvBuffer = m_BinaryBuffer;
        Buffers[0].cbBuffer = m_BinaryBufLen;
        Buffers[0].BufferType = SECBUFFER_DATA;   // Initial type of buffer 1
        Buffers[1].BufferType = SECBUFFER_EMPTY;  // Initial type of buffer 2
        Buffers[2].BufferType = SECBUFFER_EMPTY;  // Initial type of buffer 3
        Buffers[3].BufferType = SECBUFFER_EMPTY;  // Initial type of buffer 4

        Message.ulVersion = SECBUFFER_VERSION;    // Version number
        Message.cBuffers = 4;                     // Number of buffers - must contain four SecBuffer structures.
        Message.pBuffers = Buffers;               // Pointer to array of buffers

        scRet = m_pSSPI->DecryptMessage(phContext, &Message, 0, NULL);
        if (scRet == SEC_E_INCOMPLETE_MESSAGE)
            break;
        if (scRet == SEC_I_CONTEXT_EXPIRED)
        {
            m_ErrorLog << "Server shut down connection before I finished reading" << endl;
            m_ErrorLog << "# of Bytes Requested = " << nextReadSize << endl;
            m_ErrorLog << "# of Bytes received = " << numBytesReceived << endl;
            m_ErrorLog << "Decrypted data to this point = " << endl;
            m_ErrorLog << decryptedBody << endl;
            m_ErrorLog << "BinaryData just decrypted: " << endl;
            m_ErrorLog << Buffers[0].pvBuffer << endl;
            break;  // Server signalled end of session
        }
        if (scRet != SEC_E_OK &&
            scRet != SEC_I_RENEGOTIATE &&
            scRet != SEC_I_CONTEXT_EXPIRED)
        {
            DisplaySECError((DWORD)scRet, errmsg);
            m_ErrorLog << "CSISPDoc::ReadDecrypt(): " << "Failed to decrypt message--Error=" << errmsg;
            if (decryptedBody)
                m_ErrorLog << decryptedBody << endl;
            return scRet;
        }

        // Locate data and (optional) extra buffers.
        pDataBuffer = NULL;
        pExtraBuffer = NULL;
        for (i = 1; i < 4; i++)
        {
            if (pDataBuffer == NULL && Buffers[i].BufferType == SECBUFFER_DATA)
                pDataBuffer = &Buffers[i];
            if (pExtraBuffer == NULL && Buffers[i].BufferType == SECBUFFER_EXTRA)
                pExtraBuffer = &Buffers[i];
        }

        // Display the decrypted data.
        if (pDataBuffer)
        {
            length = pDataBuffer->cbBuffer;
            if (length)  // check if last two chars are CR LF
            {
                buff = (PBYTE)pDataBuffer->pvBuffer;  // printf( "n-2= %d, n-1= %d \n", buff[length-2], buff[length-1] );
                if (decryptedBodyLen + length + 1 > decryptedBodyAllocatedSize)
                    ::EnlargeBuffer(decryptedBody, decryptedBodyAllocatedSize, decryptedBodyLen, length + 1);
                memcpy_s(decryptedBody + decryptedBodyLen, decryptedBodyAllocatedSize - decryptedBodyLen, buff, length);
                decryptedBodyLen += length;
                m_ErrorLog << buff << endl;
            }
        }

        // Move any "extra" data to the input buffer -- this has not yet been decrypted.
        if (pExtraBuffer)
        {
            MoveMemory(m_BinaryBuffer, pExtraBuffer->pvBuffer, pExtraBuffer->cbBuffer);
            m_BinaryBufLen = pExtraBuffer->cbBuffer;  // printf("inputStrLen= %d \n", inputStrLen);
        }
    }
    while (pExtraBuffer);

    if (decryptedBody)
    {
        if (incompletePacket)
            p1 = decryptedBody + lenStartFragmentedPacket;
        else
            p1 = decryptedBody + lenStartDecryptedChunk;
        p2 = p1;
        pEndDecryptedBody = decryptedBody + decryptedBodyLen;
        if (lastDecryptRes != SEC_E_INCOMPLETE_MESSAGE)
            chunkSizeBlock = true;

        do
        {
            while (p2 < pEndDecryptedBody && (*p2 != '\r' || *(p2 + 1) != '\n'))
                p2++;
            // If we're here, we probably found the end of the current line. The pattern we are
            // reading is chunk length, chunk, chunk length, chunk, ..., chunk length (== 0).
            if (*p2 == '\r' && *(p2 + 1) == '\n')  // newline character -- found chunk size
            {
                if (chunkSizeBlock)  // reading the size of the chunk
                {
                    pStartHexNum = SkipWhiteSpace(p1, p2);
                    pEndHexNum = SkipWhiteSpaceBackwards(p1, p2);
                    chunkSize = HexCharToInt(pStartHexNum, pEndHexNum);
                    p2 += 2;  // skip past the newline character
                    chunkSizeBlock = false;
                    if (!chunkSize)  // chunk size of 0 means we're done
                    {
                        bulkReadDone = true;
                        p2 += 2;  // skip past the final CR+LF
                    }
                    nextReadSize = chunkSize + 8;  // chunk + CR/LF + next chunk size (4 hex digits) + CR/LF + encryption header/trailer
                }
                else  // copy the actual chunk
                {
                    if (p2 - p1 != chunkSize)
                    {
                        m_ErrorLog << "Warning: Actual chunk size of " << p2 - p1 << " != stated chunk size = " << chunkSize << endl;
                    }
                    else
                    {
                        // copy over the actual chunk data //
                        if (m_HTTPBodyLen + chunkSize > m_HTTPBodyAllocatedSize)
                            ::EnlargeBuffer(m_HTTPBody, m_HTTPBodyAllocatedSize, m_HTTPBodyLen, chunkSize + 1);
                        memcpy_s(m_HTTPBody + m_HTTPBodyLen, m_HTTPBodyAllocatedSize, p1, chunkSize);
                        m_HTTPBodyLen += chunkSize;
                        m_HTTPBody[m_HTTPBodyLen] = 0;  // null-terminate
                        p2 += 2;  // skip over chunk and end-of-line characters
                        chunkSizeBlock = true;
                        chunkSize = 0;
                        incompletePacket = false;
                        lenStartFragmentedPacket = 0;
                    }
                }
                p1 = p2;  // move to start of next chunk field
            }
            else  // got to end of decrypted body with no CR+LF found --> fragmented chunk, so we need to read and decrypt at least one more chunk
            {
                incompletePacket = true;
                lenStartFragmentedPacket = p1 - decryptedBody;
            }
        }
        while (p2 < pEndDecryptedBody);

        lastDecryptRes = scRet;
    }
}
while (scRet == SEC_E_INCOMPLETE_MESSAGE && !bulkReadDone);
TLS does not support byte-by-byte decryption.
TLS 1.2 breaks its input into blocks of up to 16 kiB, then encrypts them into ciphertext blocks that are slightly larger due to the need for encryption IVs/nonces and integrity protection tags/MACs. It is not possible to decrypt a block until the entire block is available. You can find the full details at https://www.rfc-editor.org/rfc/rfc5246#section-6.2.
Since you're already able to decrypt the first few blocks (containing the headers), you should be able to read the HTTP length so that you at least know the plaintext length that you're expecting, which you can then compare to the number of bytes that you've decrypted from the stream. That won't tell you how many bytes of ciphertext you need, though -- you can get an upper bound on the size of a fragment by calling m_pSSPI->QueryContextAttributes() and then should read either at least that number of bytes or until end of stream before trying to decrypt.
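As a sketch of that suggestion (phContext and m_pSSPI are the existing context handle and SSPI function table from your code; the rest is illustrative):

// Sketch: ask SChannel for the maximum sizes of a TLS record so you know an
// upper bound on how much ciphertext one DecryptMessage() call can require.
SecPkgContext_StreamSizes sizes;
SECURITY_STATUS status = m_pSSPI->QueryContextAttributes(
    phContext, SECPKG_ATTR_STREAM_SIZES, &sizes);
if (status == SEC_E_OK)
{
    // One complete record is at most header + payload + trailer bytes.
    DWORD maxRecordSize = sizes.cbHeader + sizes.cbMaximumMessage + sizes.cbTrailer;
    // Size the receive buffer (and each recv() request) so a full record fits.
}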
Have you tried looking at other examples? http://www.coastrd.com/c-schannel-smtp appears to contain a detailed example of an SChannel-based TLS client.
I was finally able to figure this out. I fixed this by decrypting each TCP/IP packet as it came in to check for the CR+LF+CR+LF in the decrypted packet instead of what I had been doing -- trying to consolidate all of the encrypted packets into one buffer prior to decrypting it.
On the "hang" problem, what I thought was happening was that recv() wasn't returning because the amount of data actually received was smaller than my expected receive size. But what actually happened was I had actually received the entire transmission, but I didn't realize it. Thus, I was making additional recv() calls when there was actually no more data to receive. The fact that there was no more data to receive was what caused the connection to time out (causing a "hang").
The truncation problem was occurring because I couldn't detect the CR+LF+CR+LF sequence in the encrypted stream, and I erroneously thought SChannel returned SEC_E_OK on DecryptMessage() only when the entire response was processed.
Both problems were eliminated once I was able to detect the true end of the message by decrypting in piecemeal fashion vs. in bulk.
In order to figure this out, I had to completely restructure the sample SChannel code from www.coastRD.com. While the www.coastRD.com code was very helpful in general, it was written for SMTP transfers, not chunked HTTP encoding. In addition, the way it was written, it was hard to follow the logic for processing variations in how messages were received and processed. Lastly, I spent a lot of time "hacking" Schannel to understand how it behaves and which codes are returned under which conditions, because unfortunately none of that is discussed in any of the Microsoft documentation (that I've seen).
The first thing I needed to understand was how SChannel tries to decrypt a message. In Schannel, the 1st 13 bytes of an encrypted message are the encryption header, and the last 16 bytes are the encryption trailer. I still don't know what the trailer does, but I did realize that the encryption header is never actually encrypted/decrypted. The 1st 5 bytes are just the TLS record header for "application data" (hex code 0x17), followed by two bytes defining the TLS version used, followed by 2 bytes of the TLS record fragment size, followed by leading 0s and one byte which I still haven't figured out.
The reason this matters is that DecryptMessage() only works if the record type is "application data". For any other record type (such as a TLS handshake "finished" message), DecryptMessage() won't even try to decrypt it -- it will just return a SEC_E_DECRYPT_FAILURE code.
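In other words, you can sanity-check the record type before handing a buffer to DecryptMessage(). This is my own illustrative check, not part of the original code:

// Sketch: byte 0 of a TLS record is its content type; 0x17 means
// "application data", the only type DecryptMessage() will decrypt here.
const BYTE TLS_APPLICATION_DATA = 0x17;
if (m_BinaryBufLen > 0 && ((BYTE*)m_BinaryBuffer)[0] != TLS_APPLICATION_DATA)
{
    // Handshake/alert record, etc. -- DecryptMessage() would fail on this,
    // so handle it separately instead of treating it as body data.
}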
In addition, I needed to understand that DecryptMessage() often can't decrypt the entire contents of the receive buffer in one pass when using chunked transfer encoding. In order to successfully process the entire contents of the receive buffer and the remainder of the server HTTPS response, I needed to understand two key return codes from DecryptMessage() -- SEC_E_OK and SEC_E_INCOMPLETE_MESSAGE.
When I received SEC_E_OK, it meant DecryptMessage() was able to successfully decrypt at least part of the receive buffer. When this occurred, the 1st 13 bytes (the encryption header) remained unchanged. However, the bytes immediately following the header were decrypted in-place, followed by the encryption trailer (which is also unchanged). Often, there will be additional encrypted data still in the receive buffer after the end of the encryption trailer, which is also unchanged.
Since I was using the SecBufferDesc output buffer structures and 4 SecBuffer structures described in www.coastRD.com's code, I needed to understand that these are not actually 4 separate buffers -- they are just pointers to different locations within the receive buffer. The first buffer is a pointer to the encryption header. The second buffer is a pointer to the beginning of the decrypted data. The 3rd buffer is a pointer to the beginning of the encryption trailer. Lastly, the 4th buffer is a pointer to the "extra" encrypted data that DecryptMessage() was not able to process on the last call.
Once I figured that out, I realized that I needed to copy the decrypted data (the pointer in the second buffer) into a separate buffer, because the receive buffer would probably be overwritten later.
If there was no "extra" data in the 4th buffer, I was done for the moment -- but this was the exception rather than the rule.
If there was extra data (the usual case), I needed to move that data forward to the very beginning of the receive buffer, and I needed to call DecryptMessage() again. This decrypted the next chunk, and I appended that data to the data I already copied to the separate buffer, and repeated this process until there was either no more data left in the receive buffer to decrypt, or I received a SEC_E_INCOMPLETE_MESSAGE.
If I received a SEC_E_INCOMPLETE_MESSAGE, the data remaining in the receive buffer was unchanged. It wasn't decrypted because it was an incomplete encryption block. Thus, I needed to call recv() again to get more encrypted data from the server to complete the encryption block.
Once that occurred, I appended newly received data to the receive buffer. I appended it to the contents of the receive buffer vs. overwriting it because the latter approach would have overwritten the beginning of the encryption block, producing a SEC_E_DECRYPT_FAILURE message the next time I called DecryptMessage().
Once I appended this new block of data to the receive buffer, I repeated the steps above to decrypt the contents of the receive buffer, and continued to repeat this whole process until I got a SEC_E_OK message on the last chunk of data left in the receive buffer.
But I wasn't necessarily done yet -- there may still be data being sent by the server. Stopping at this point is what caused the truncation issue I had occasionally encountered.
So I now checked the last 4 bytes of the decrypted data to look for CR+LF+CR+LF. If I found that sequence, I knew I had received and decrypted a complete HTTPS response.
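For example, the end-of-response check can be as simple as this (using decryptedBody/decryptedBodyLen from the code above; a sketch, not the exact code I used):

// Sketch: a chunked response ends with the zero-length chunk, so the
// decrypted plaintext finishes with CR LF CR LF.
bool responseComplete =
    decryptedBodyLen >= 4 &&
    memcmp(decryptedBody + decryptedBodyLen - 4, "\r\n\r\n", 4) == 0;
if (!responseComplete)
{
    // Not done yet: recv() more ciphertext and decrypt again.
}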
But if I hadn't, I needed to call recv() again and repeat the process above until I saw the CR+LF+CR+LF sequence at the end of the data.
Once I implemented this process, I was able to definitively identify the end of the encrypted HTTPS response, which prevented me from making an unnecessary recv() call when no data was remaining, preventing a "hang", as well as prematurely truncating the response.
I apologize for the long answer, but given the lack of documentation on SChannel and its functions like DecryptMessage(), I thought this description of what I learned might be helpful to others who may have also been struggling to use SChannel to process TLS HTTP responses.
Thank you again to user3553031 for trying to help me with this over 7 months ago -- those attempts helped me narrow down the problem.

Linux-Xenomai Serial Communication using xeno_16550A module

I'm new to RTOS and I'm using Xenomai v2.6.3.
I'm trying to receive some data over serial communication.
I did my best on the task, following Xenomai's guide and open-source examples, but it doesn't work well.
The link to the guide: https://xenomai.org//serial-16550a-driver/
I followed the sequence to use the xeno_16550A module (with port io = 0x2f8 and irq = 3).
I also followed this open-source example: http://www.acadis.org/pages/captain.at/serial-port-example
The write task works well, but the read task doesn't.
It gives me the error message "error while RTSER_RTIOC_WAIT_EVENT, code -110" (which means the connection timed out).
Moreover, I checked IRQ number 3 with the command 'cat /proc/xenomai/irq', but the interrupt count doesn't increase.
In my case I don't need to write data, so I removed the write task code.
The read task procedure follows:
void read_task_proc(void *arg) {
    int ret;
    ssize_t red = 0;
    struct rtser_event rx_event;

    while (1) {
        /* waiting for event */
        ret = rt_dev_ioctl(my_fd, RTSER_RTIOC_WAIT_EVENT, &rx_event);
        if (ret) {
            printf(RTASK_PREFIX "error while RTSER_RTIOC_WAIT_EVENT, code %d\n", ret);
            if (ret == -ETIMEDOUT)
                continue;
            break;
        }

        unsigned char buf[1];
        red = rt_dev_read(my_fd, &buf, 1);
        if (red < 0) {
            printf(RTASK_PREFIX "error while rt_dev_read, code %d\n", red);
        } else {
            printf(RTASK_PREFIX "only %d byte received , char : %c\n", red, buf[0]);
        }
    }

exit_read_task:
    if (my_state & STATE_FILE_OPENED) {
        if (!close_file(my_fd, READ_FILE " (rtser)")) {
            my_state &= ~STATE_FILE_OPENED;
        }
    }
    printf(RTASK_PREFIX "exit\n");
}
I can guess at two possible causes of the problem:
the buffer size is wrong, or the buffer is already full when new data is received;
the rx interrupt doesn't work.
I want to check whether either of these is the case, but how can I check?
Furthermore, does anybody know the cause of the problem? Please give me comments.

redis bitset -- how to upload an existing bitset array

I have a huge bitset stored in a DB. I want to upload it to a Redis bitset, so I can perform bit operations on it. Is there a way to upload this data from either redis-cli or JavaScript code? I am using the bitset.js npm module to load the bitset in my program from the DB.
One obvious way is to iterate over my bitset array within my JavaScript code and keep calling redis.setbit(...) multiple times. Is there a way to upload all of it at once? If so, how?
A bitset in Redis is actually just a string, so you can assign to it directly all at once. The bits in the string are the bits of the bitfield, set in left-to-right order. I.e. setting bit number 0 to 1 yields the binary number 10000000, or a single byte with the value 128. This looks like "\x80" when Redis prints it, which you can see for yourself by running setbit foo 0 1 and then get foo in Redis.
So to construct the right string to send to Redis, we just need to read the bits out of your BitSet and construct a buffer, one byte at a time, with the appropriate bits set.
Below is code that uses bitset.js and the redis npm module to transfer a BitSet in JavaScript into a Redis key. Note that this code assumes that the bitfield fits comfortably in memory.
let redis = require('redis'),
    BitSet = require('./bitset');

let client = redis.createClient();

// create some data
let bs = new BitSet;
bs.set(0, 1);
bs.set(31, 1);

// calculate how many bytes we'll need
// (msb() is the index of the highest set bit, so add 1 byte to hold it)
var numBytes = Math.floor(bs.msb() / 8) + 1;

// construct a buffer with that much space
var buffer = new Buffer(numBytes);

// for each byte
for (var i = 0; i < numBytes; i++) {
    var byte = 0;
    // iterate over each bit
    for (var j = 0; j < 8; j++) {
        // slide previous bits to the left
        byte <<= 1;
        // and set the rightmost bit
        byte |= bs.get(i*8 + j);
    }
    // put this byte in the buffer
    buffer[i] = byte;
}

// now we have a complete buffer to use as our value in Redis
client.set('bitset', buffer, function (err, result) {
    client.getbit('bitset', 31, function (err, result) {
        console.log('Bit 31 = ' + result);
        client.del('bitset', function () {
            client.quit();
        });
    });
});

/proc/[pid]/cmdline file size

I'm trying to get the file size of the cmdline file in /proc/[pid], for example /proc/1/cmdline. The file is not empty; it contains "/sbin/init". But I get file_size = 0.
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int file_size;
    FILE *file_cmd;

    file_cmd = fopen("/proc/1/cmdline", "r");
    if (file_cmd == NULL) {
        perror("proc/1/cmdline");
        exit(1);
    } else {
        if (fseek(file_cmd, 0L, SEEK_END) != 0) {
            perror("proc/1/cmdline");
            exit(1);
        }
        file_size = ftell(file_cmd);
    }
    printf("fs: %d\n", file_size);
    fclose(file_cmd);
    return 0;
}
Regards
That's normal. /proc files (most of them, there are a few exceptions) are generated by the kernel at the moment you read from them. That means it's impossible to know the size before reading from the file. Think of it as Quantum Mechanics on files. You won't get a state unless you read the information, but there's no guarantee that reading again will give you the same information twice ;-)
In other words, the EOF is only generated when you try to read it. It's not there before that, so there's no way a file size can be determined.
This is really just communication with the kernel disguised as file I/O.
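As a sketch of what that means in practice (my own example, not the poster's code): the size of /proc/1/cmdline can only be learned by reading until EOF. Note also that the arguments inside are separated by NUL bytes, not spaces.

#include <stdio.h>

/* Sketch: read /proc/1/cmdline in chunks until EOF and count the bytes;
   that count is the "file size" you cannot get from fseek/ftell. */
int main(void)
{
    FILE *fp = fopen("/proc/1/cmdline", "r");
    if (!fp) {
        perror("/proc/1/cmdline");
        return 1;
    }
    char buf[4096];
    size_t total = 0, n;
    while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)
        total += n;   /* arguments are separated by '\0' bytes */
    fclose(fp);
    printf("fs: %zu\n", total);
    return 0;
}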