/proc/[pid]/cmdline file size

I'm trying to get the file size of the cmdline file in /proc/[pid], for example /proc/1/cmdline. The file is not empty; it contains "/sbin/init". But I get file_size = 0.
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int file_size;
    FILE *file_cmd;

    file_cmd = fopen("/proc/1/cmdline", "r");
    if (file_cmd == NULL) {
        perror("/proc/1/cmdline");
        exit(1);
    } else {
        /* Seek to the end, then ask for the current offset. */
        if (fseek(file_cmd, 0L, SEEK_END) != 0) {
            perror("/proc/1/cmdline");
            exit(1);
        }
        file_size = ftell(file_cmd);
    }
    printf("fs: %d\n", file_size);
    fclose(file_cmd);
    return 0;
}
Regards

That's normal. /proc files (most of them; there are a few exceptions) are generated by the kernel at the moment you read from them. That means it's impossible to know the size before reading from the file. Think of it as quantum mechanics on files: you won't get a state unless you read the information, but there's no guarantee that reading again will give you the same information twice ;-)
In other words, the EOF is only generated when you try to read it. It isn't there before that, so there's no way the file size can be determined in advance.
This is really just communication with the kernel disguised as file I/O.
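So in practice you read until the kernel reports EOF and grow the buffer as you go; only then do you know the size. A minimal sketch along those lines (the fixed chunk size is an arbitrary choice, error handling kept short):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("/proc/1/cmdline", "r");
    if (f == NULL) {
        perror("/proc/1/cmdline");
        return 1;
    }

    char *buf = NULL;
    size_t size = 0;
    char chunk[256];
    size_t n;

    /* Keep reading until the kernel signals EOF; only then is the size known. */
    while ((n = fread(chunk, 1, sizeof(chunk), f)) > 0) {
        char *tmp = realloc(buf, size + n);
        if (tmp == NULL) {
            free(buf);
            fclose(f);
            return 1;
        }
        buf = tmp;
        memcpy(buf + size, chunk, n);
        size += n;
    }
    fclose(f);

    /* The arguments in cmdline are separated by NUL bytes. */
    printf("fs: %zu\n", size);
    free(buf);
    return 0;
}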


How can I read \x1a from a file? [duplicate]

I am attempting to write a BitTorrent client. In order to parse the file etc., I need to read a torrent file into memory. I have noticed that fread is not reading the entire file into my buffer. After further investigation, it appears that whenever the symbol shown below is encountered in the file, fread stops reading. Calling the feof function on the FILE* pointer returns 16, indicating that the end of file has been reached. This occurs no matter where the symbol is placed. Can somebody explain why this happens and suggest any solutions that may work?
The symbol in question is \x1a (it was shown highlighted in a screenshot in the original post).
Here is the code that does the read operation:
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>

char *read_file(const char *file, long long *len) {
    struct stat st;
    char *ret = NULL;
    FILE *fp;

    // store the size/length of the file
    if (stat(file, &st)) {
        return ret;
    }
    *len = st.st_size;

    // open a stream to the specified file
    fp = fopen(file, "r");
    if (!fp) {
        return ret;
    }

    // allocate space in the buffer for the file
    ret = (char*)malloc(*len);
    if (!ret) {
        fclose(fp);
        return NULL;
    }

    // Break down the call to fread into smaller chunks
    // to account for a known bug which causes fread to
    // behave strangely with large files
    // Read the file into the buffer
    // fread(ret, 1, *len, fp);
    if (*len > 10000) {
        char *retTemp = ret;
        long long remaining = *len;
        size_t read = 0;
        int error = 0;
        while (remaining > 1000) {
            read = fread(retTemp, 1, 1000, fp);
            if (read < 1000) {
                error = feof(fp);
                if (error != 0) {
                    printf("Error: %d\n", error);
                }
            }
            retTemp += 1000;
            remaining -= 1000;
        }
        fread(retTemp, 1, remaining, fp);
    } else {
        fread(ret, 1, *len, fp);
    }

    // cleanup by closing the file stream
    fclose(fp);
    return ret;
}
Thank you for your time :)
Your question is oddly relevant as I recently ran into this problem in an application here at work last week!
The ASCII value of this character is decimal 26 (0x1A, SUB, SUBSTITUTE). It is used to represent the CTRL+Z key sequence or an end-of-file marker.
Change your fopen mode to get around this on Windows ("In text mode, CTRL+Z is interpreted as an end-of-file character on input"):
fp = fopen(file, "rb"); /* b for 'binary', disables Text-mode translations */
You should open the file in binary mode. Some platforms, in text (the default) mode, interpret certain bytes as physical end-of-file markers.
You're opening the file in text rather than raw/binary mode; the highlighted character is ASCII SUB (0x1A), the DOS end-of-file marker. Specify "rb" rather than just "r" in your fopen call.
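To make the fix concrete, here is a minimal sketch (the torrent file name is just a placeholder) that opens the file in binary mode and checks that the full length reported by stat was actually read:

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    const char *name = "example.torrent"; /* placeholder file name */
    struct stat st;
    if (stat(name, &st) != 0) {
        perror(name);
        return 1;
    }

    FILE *fp = fopen(name, "rb"); /* "rb": no text-mode translation, 0x1A is just a byte */
    if (!fp) {
        perror(name);
        return 1;
    }

    char buf[65536];
    size_t total = 0, n;
    while ((n = fread(buf, 1, sizeof(buf), fp)) > 0)
        total += n;
    fclose(fp);

    /* With "rb" the totals match even if the file contains 0x1A bytes. */
    printf("read %zu of %lld bytes\n", total, (long long)st.st_size);
    return 0;
}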

How to decompress pbzip2 data in memory buffer by using libbz2 library in C++

I have a working version of decompressing bzip2 data where I call the BZ2_bzDecompress API. It goes something like this:
while (bytes_input < len) {
    isDone = false;

    // Initialize the input buffer and its length
    size_t in_buffer_size = len - bytes_input;
    the_bz2_stream.avail_in = in_buffer_size;
    the_bz2_stream.next_in = (char*)data + bytes_input;

    size_t out_buffer_size = output_size - bytes_uncompressed;  // size of output buffer
    if (out_buffer_size == 0) {  // out of space in the output buffer
        break;
    }
    the_bz2_stream.avail_out = out_buffer_size;
    the_bz2_stream.next_out = (char*)output + bytes_uncompressed;  // output buffer

    ret = BZ2_bzDecompress(&the_bz2_stream);
    if (ret != BZ_OK && ret != BZ_STREAM_END) {
        throw Bzip2Exception("Bzip2 failed. ", ret);
    }

    bytes_input += in_buffer_size - the_bz2_stream.avail_in;
    bytes_uncompressed += out_buffer_size - the_bz2_stream.avail_out;
    *data_consumed = bytes_input;

    if (ret == BZ_STREAM_END) {
        ret = BZ2_bzDecompressEnd(&the_bz2_stream);
        if (ret != BZ_OK) {
            throw Bzip2Exception("Bzip2 fail. ", ret);
        }
        isDone = true;
    }
}
This works great for native bzip2 compressed files, but for pbzip2 (Parallel Bzip2) and "Splittable" bzip2 data, it throws a "BZ_PARAM_ERROR".
I see that the pbzip2 documentation says this:
Data compressed with pbzip2 is broken into multiple streams and each
stream is bzip2 compressed looking like this:
[-----|-----|-----|-----|-----|-----|-----|-----|-----]
If you are writing software with libbzip2 to decompress data created
with pbzip2, you must take into account that the data contains
multiple bzip2 streams so you will encounter end-of-stream markers
from libbzip2 after each stream and must look-ahead to see if there
are any more streams to process before quitting. The bzip2 program
itself will automatically handle this condition.
Source: http://compression.ca/pbzip2/
Can someone please tell me how to handle this? Should I be using some other libbz2 API?
Also, pbzip2 files are compatible with the normal "bunzip2" command. How is it that bzip2 handles this gracefully while my code throws BZ_PARAM_ERROR?
Thanks.
After your BZ2_bzDecompressEnd() you need to call BZ2_bzDecompressInit() again (you must have called it initially before that loop), if there is still data left to decompress, i.e. bytes_input < len.
To decompress each of the |-----| blocks, you need to do an init, some number of decompress calls, and an end. So if you still have input left, then you need to do another init, n*decompress, end.
Make sure that you do a final end, in order to avoid a big memory leak.
You're getting a BZ_PARAM_ERROR because you are trying to use an uninitialized bz_stream to decompress. Once you do BZ2_bzDecompressEnd(), you can't use that bz_stream any more, unless you do a BZ2_bzDecompressInit() on it.
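Sketched out with the plain libbz2 C API, that per-stream init/decompress/end cycle might look roughly like this (function and buffer names are placeholders, and it assumes the whole input and output fit in memory, as in the question):

#include <bzlib.h>
#include <string.h>

/* Decompress a buffer that may contain several concatenated bzip2 streams
 * (as produced by pbzip2). Returns the number of output bytes, or -1 on error. */
long long decompress_all(const char *in, size_t in_len, char *out, size_t out_len) {
    size_t in_pos = 0, out_pos = 0;

    while (in_pos < in_len && out_pos < out_len) {
        bz_stream strm;
        memset(&strm, 0, sizeof(strm));

        /* One init per bzip2 stream. */
        if (BZ2_bzDecompressInit(&strm, 0, 0) != BZ_OK)
            return -1;

        strm.next_in = (char *)in + in_pos;
        strm.avail_in = in_len - in_pos;
        strm.next_out = out + out_pos;
        strm.avail_out = out_len - out_pos;

        int ret;
        do {
            ret = BZ2_bzDecompress(&strm);
        } while (ret == BZ_OK && strm.avail_in > 0 && strm.avail_out > 0);

        in_pos = in_len - strm.avail_in;
        out_pos = out_len - strm.avail_out;

        /* One end per stream; after this the bz_stream must not be reused
         * without another init. */
        BZ2_bzDecompressEnd(&strm);

        if (ret != BZ_STREAM_END)
            return -1;  /* truncated or corrupt stream, or output buffer full */
    }
    return (long long)out_pos;
}

When the outer loop finds more input after a BZ_STREAM_END, it simply starts the next init/decompress/end cycle, which is exactly what bunzip2 does internally for multi-stream files.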

How to write a Linux driver that only forwards file operations?

I need to implement a Linux kernel driver that (in the first step) only forwards all file operations to another file (in later steps, this will be managed and manipulated, but I don't want to discuss that here).
My idea is the following, but when reading, the kernel crashes:
static struct {
    struct file *file;
    char *file_name;
    int open;
} file_out_data = {
    .file_name = "/any_file",
    .open = 0,
};

int memory_open(struct inode *inode, struct file *filp) {
    PRINTK("<1>open memory module\n");
    /*
     * We don't want to talk to two processes at the same time
     */
    if (file_out_data.open)
        return -EBUSY;
    /*
     * Initialize the message
     */
    Message_Ptr = Message;
    try_module_get(THIS_MODULE);
    file_out_data.file = filp_open(file_out_data.file_name, filp->f_flags, filp->f_mode); // here should be another return handling in case of fail
    file_out_data.open++;
    /* Success */
    return 0;
}

int memory_release(struct inode *inode, struct file *filp) {
    PRINTK("<1>release memory module\n");
    /*
     * We're now ready for our next caller
     */
    file_out_data.open--;
    filp_close(file_out_data.file, NULL);
    module_put(THIS_MODULE);
    /* Success */
    return 0;
}

ssize_t memory_read(struct file *filp, char *buf,
                    size_t count, loff_t *f_pos) {
    ssize_t ret;
    PRINTK("<1>read memory module\n");
    ret = file_out_data.file->f_op->read(file_out_data.file, buf, count, f_pos); // corrected one, the wrong one is in the edit history
    return ret;
}
So, can anyone please tell me why?
Don't use set_fs() as there is no reason to do it.
Use file->f_op->read() instead of vfs_read(). Take a look at the file and file_operations structures.
Why are you incrementing file_out_data.open twice and decrementing it once? This could cause you to use file_out_data.file after it has been closed.
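For the missing filp_open error handling that the question's own comment already hints at, a rough sketch of the open handler (names are taken from the question; checking the result with IS_ERR/PTR_ERR is the usual pattern for filp_open, and the open count is bumped exactly once, only after the backing file is actually open):

int memory_open(struct inode *inode, struct file *filp) {
    PRINTK("<1>open memory module\n");

    /* Only one process may talk to us at a time. */
    if (file_out_data.open)
        return -EBUSY;

    try_module_get(THIS_MODULE);

    file_out_data.file = filp_open(file_out_data.file_name, filp->f_flags, filp->f_mode);
    if (IS_ERR(file_out_data.file)) {
        /* Don't mark the device open if the backing file could not be opened;
         * otherwise memory_read() will dereference an ERR_PTR and crash. */
        module_put(THIS_MODULE);
        return PTR_ERR(file_out_data.file);
    }

    /* Increment exactly once, and only after the backing file is open. */
    file_out_data.open = 1;
    return 0;
}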
Do you want to write to memory in your file operation, or read? Because you are reading, not writing...
Possibly I'm wrong.

What is a FILE * type in Cocoa, and how do I properly use it?

I'm trying to run Bash commands from my Cocoa app and receive the output. I'm executing all of these commands with admin privileges.
How to get output from Admin Priveleges bash script, called from Cocoa?
I guess I need the FILE * type to store the output, but I don't know how to use it.
What is the FILE * type, and how should I use it?
FILE * is a C type and it hasn't got anything to do with Cocoa. It is a handle for an opened file. Here is an example:
#include <stdio.h>

int main () {
    FILE *file;

    file = fopen("myfile.txt", "w"); // open file
    if (!file) {                     // file couldn't be opened
        return 1;
    }
    fputs("fopen example", file);    // write to file
    fclose(file);
    return 0;
}
In Cocoa, you should normally use NSString's and NSData's writeToURL:atomically:encoding:error: and writeToURL:atomically: methods, respectively.
FILE is an ANSI C structure used for file handling. The fopen function returns a file pointer. This pointer points to a structure that contains information about the file, such as the location of a buffer, the current character position in the buffer, whether the file is being read or written, and whether errors or end of file have occurred. Users don't need to know the details, because the definitions obtained from stdio.h include a structure declaration called FILE. The only declaration needed for a file pointer is exemplified by
FILE *fp;
FILE *fopen(char *name, char *mode);
This says that fp is a pointer to a FILE, and fopen returns a pointer to a FILE. Notice that
FILE is a type name, like int, not a structure tag; it is defined with a typedef.
#include <stdio.h>

int main()
{
    FILE *pFile;
    char buffer[100];

    pFile = fopen("myfile.txt", "r");
    if (pFile == NULL) perror("Error opening file");
    else
    {
        while (!feof(pFile))
        {
            if (fgets(buffer, 100, pFile) != NULL)
                fputs(buffer, stdout);
        }
        fclose(pFile);
    }
    return 0;
}
This example reads the content of a text file called myfile.txt and sends it to the standard output stream.

GNU Radio File Format for the recorded samples

Do you know the format in which GNU Radio (the File Sink in GNU Radio Companion) stores the samples in the binary file?
I need to read these samples in MATLAB, but the problem is that the file is too big to be read into MATLAB.
I am writing a program in C++ to read this binary file.
The file sink is just a dump of the data stream. If the data stream contained simple bytes, the file contains exactly those bytes. If the data stream contained complex numbers, the file will contain a list of complex numbers, where each complex number is given by two floats and each float by (usually) 4 bytes.
See the files gnuradio/gnuradio-core/src/lib/io/gr_file_sink.cc and gr_file_source.cc for the implementations of the gnuradio file reading and writing blocks.
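If you only need the raw samples in your own C or C++ program, you don't even need to link against GNU Radio: the file is just the interleaved 32-bit floats described above. A minimal sketch, assuming the stream was complex float and the file was written on a machine with the same endianness (the file name is a placeholder):

#include <stdio.h>

int main(void) {
    const char *name = "the_file_name";  /* placeholder, as in the python example below */
    FILE *fp = fopen(name, "rb");
    if (!fp) {
        perror(name);
        return 1;
    }

    /* Each complex sample is two IEEE-754 floats: I then Q. */
    float iq[2];
    unsigned long long count = 0;
    while (fread(iq, sizeof(float), 2, fp) == 2) {
        /* do something with iq[0] (real part) and iq[1] (imaginary part) */
        count++;
    }
    fclose(fp);

    printf("read %llu complex samples\n", count);
    return 0;
}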
You could also use python and gnuradio to convert the files into some other format.
from gnuradio import gr
# Assuming the data stream was complex numbers.
src = gr.file_source(gr.sizeof_gr_complex, "the_file_name")
snk = gr.vector_sink_c()
tb = gr.top_block()
tb.connect(src, snk)
tb.run()
# The complex numbers are then accessible as a python list.
data = snk.data()
Ben's answer still stands – but it's from a time long past (the module organization points at GNU Radio 3.6, I think). Organizationally, things are different now; data-wise, the File Sink remained the same.
GNU Radio now has fairly extensive block documentation in its wiki. In particular, the File Sink documentation page has a section on Handling File Sink data; not to over-quote it:
// This is C++17
#include <algorithm>
#include <cmath>
#include <complex>
#include <cstddef>
#include <filesystem>
#include <fstream>
#include <string_view>
#include <vector>

#include <fmt/format.h>
#include <fmt/ranges.h>

using sample_t = std::complex<float>;
using power_t = float;

constexpr std::size_t read_block_size = 1 << 16;

int main(int argc, char *argv[]) {
  // expect exactly one argument, a file name
  if (argc != 2) {
    fmt::print(stderr, "Usage: {} FILE_NAME", argv[0]);
    return -1;
  }

  // just for convenience; we could as well just use `argv[1]` throughout the
  // code
  std::string_view filename(argv[1]);

  // check whether file exists
  if (!std::filesystem::exists(filename.data())) {
    fmt::print(stderr, "file '{:s}' not found\n", filename);
    return -2;
  }

  // calculate how many samples to read
  auto file_size = std::filesystem::file_size(std::filesystem::path(filename));
  auto samples_to_read = file_size / sizeof(sample_t);

  // construct and reserve container for resulting powers
  std::vector<power_t> powers;
  powers.reserve(samples_to_read);

  std::ifstream input_file(filename.data(), std::ios_base::binary);
  if (!input_file) {
    fmt::print(stderr, "error opening '{:s}'\n", filename);
    return -3;
  }

  // construct and reserve container for read samples
  // if read_block_size == 0, then read the whole file at once
  std::vector<sample_t> samples;
  if (read_block_size)
    samples.resize(read_block_size);
  else
    samples.resize(samples_to_read);

  fmt::print(stderr, "Reading {:d} samples…\n", samples_to_read);
  while (samples_to_read) {
    auto read_now = std::min(samples_to_read, samples.size());
    input_file.read(reinterpret_cast<char *>(samples.data()),
                    read_now * sizeof(sample_t));
    for (size_t idx = 0; idx < read_now; ++idx) {
      auto magnitude = std::abs(samples[idx]);
      powers.push_back(magnitude * magnitude);
    }
    samples_to_read -= read_now;
  }

  // we're not actually doing anything with the data. Let's print it!
  fmt::print("Power\n{}\n", fmt::join(powers, "\n"));
}