Limit input size in chars from stdin

I want to write an application in Rust that deals with input from the terminal, and I want to prevent it from crashing or being killed by running out of memory. It displays a prompt, processes a command, and displays the prompt again.
Basically I am looking for a read_line_max(n) or read_until(delimiter, max_chars) API that reads at most n bytes, or until the delimiter is reached.
Possibilities I considered:
io::stdin().lock().take(n).lines() takes at most n bytes in total; I want unlimited input overall, just a limit on line size.
io::stdin().lock().lines().take(n) limits the number of lines, not their length.
for line in io::stdin().lock().lines() {
    let line = line?.chars().take(n);
    println!("{}", respond_to(line));
}
too late: lines() has already buffered the whole line by the time take(n) runs, hogging 50GB of memory, kill imminent.
I found out that BufReader used to have a .chars() iterator that could be used this way, but it was removed.
With io::stdin().lock().read(buff) there is the issue of bytes vs. chars, but it may be my best bet. I could then try to throw the bytes into a String to check UTF-8 validity, but that seems like something I would do in C, and very unidiomatic.
Actually, while writing this I quickly put together the following:
use std::io::{self, Read};

let inp = io::stdin();
let mut bufinp = inp.lock();
let mut linebytes = [0_u8; 10];
loop {
    match bufinp.read(&mut linebytes) {
        Ok(0) => break, // EOF
        Ok(bytes_read) => {
            match String::from_utf8(linebytes[..bytes_read].to_vec()) {
                Ok(line) => println!("processed line: {}", &line),
                Err(err) => eprintln!("utf8 err: {:?}", err),
            }
        },
        Err(err) => {
            eprintln!("line read err: {:?}", err);
            break;
        }
    }
}
So... that kind of does what I want but I have some issues with it:
1) I need to trim the '\n' if the input is smaller than the buffer.
2) It doesn't clear the rest of stdin if the input is larger than the buffer. I'm guessing I need to put a skip_while() at the end so that it doesn't spill over into the next read. Is there a nicer way to clear it?
3) It may split graphemes, even though I could in fact handle those additional 3 bytes. I don't really care about reading up to a specific hard limit; I just want to prevent the input/memory usage from being "too much".
4) It's just too low-level and complicated, and not in line with "make good and safe choices easy to code and unsafe ones less available", which makes me think I'm not doing it right. But at least cat /dev/zero | ./target/debug/test doesn't result in SIGQUIT anymore.
I find it strange that a language that prides itself on safety wouldn't provide a fool-proof way to deal with potentially large input. Am I missing something or thinking too much about it? Every article I found just closes its eyes and fires read_to_end() or read_line() without much thought.
How should I read user input safely and idiomatically?
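For reference, the closest I've gotten to the API I'm asking for is to re-apply take per iteration instead of once for the whole stream: &mut StdinLock still implements BufRead, so take can cap a single read_until call rather than the whole input. A rough sketch of that idea (the 4096-byte cap and the drain_rest_of_line helper are placeholders of mine, not anything from std):

use std::io::{self, BufRead, Read, Write};

const MAX_LINE_BYTES: u64 = 4096; // arbitrary per-line cap

fn main() -> io::Result<()> {
    let stdin = io::stdin();
    let mut reader = stdin.lock();
    let mut buf: Vec<u8> = Vec::new();

    loop {
        print!("> ");
        io::stdout().flush()?;

        buf.clear();
        // `&mut reader` is still BufRead, so `take` caps just this read_until
        // call at MAX_LINE_BYTES instead of limiting the whole stream.
        let n = (&mut reader).take(MAX_LINE_BYTES).read_until(b'\n', &mut buf)?;
        if n == 0 {
            break; // EOF
        }
        if !buf.ends_with(b"\n") {
            // The line hit the cap (or ended at EOF): throw away the rest of it
            // so it doesn't spill over into the next prompt.
            drain_rest_of_line(&mut reader)?;
        }
        // from_utf8_lossy also papers over a multi-byte char split by the cap.
        let line = String::from_utf8_lossy(&buf);
        let line = line.trim_end_matches('\n');
        println!("processed line: {}", line);
    }
    Ok(())
}

// Discard input up to and including the next '\n' without storing it.
fn drain_rest_of_line(reader: &mut impl BufRead) -> io::Result<()> {
    loop {
        let (found_newline, used) = {
            let available = reader.fill_buf()?;
            if available.is_empty() {
                return Ok(()); // EOF before a newline; nothing left to drain
            }
            match available.iter().position(|&b| b == b'\n') {
                Some(pos) => (true, pos + 1),
                None => (false, available.len()),
            }
        };
        reader.consume(used);
        if found_newline {
            return Ok(());
        }
    }
}

That keeps memory bounded per line and roughly covers issues 1-3 above, but it is still a fair amount of ceremony, which is why I'm asking whether there is an intended idiom for this.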

Related

How can I improve this code? Prompt user input [Rust]

I am learning Rust by just coding right after the first 4 chapters of The Book. Getting started, I am still getting used to how borrowing and sharing work and how to take advantage of them in code.
This snippet of code is supposed to prompt the user for an IP address and, if Enter is pressed, return the loopback address. It works fine, but I am curious to know how it could be improved, because I definitely know it can. Thank you!
use std::io;

fn prompt_host() -> String {
    let mut input_text = String::new();
    println!(" input host IP, press enter for loopback:");
    io::stdin()
        .read_line(&mut input_text)
        .expect(" ERROR: failed to read from stdin");
    let len = input_text.len();
    input_text.truncate(len - 1);
    if input_text == "" {
        return String::from("127.0.0.1");
    }
    return input_text;
}
Just some hints:
I think I would return something related to std::net::IpAddr instead of String (if there's a type for your needs, I would use it).
std::net::IpAddr implements FromStr, so you can use input_text.parse() and obtain a Result<IpAddr, AddrParseError> (since the conversion from a string may fail).
I would use trim to get rid of the trailing newline and any surrounding spaces.
I would use is_empty to test whether the string is empty, or even cover this case by just using parse.
There are (at least) two places that may fail: read_line and parse, so I would think about returning an Option<IpAddr> or even Result<IpAddr, ErrorType> for an appropriate ErrorType.
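Putting those hints together, one possible shape for the function is sketched below (the Box<dyn Error> return type is just one convenient choice here; a custom error type or Option<IpAddr>, as mentioned above, would work too):

use std::io;
use std::net::{IpAddr, Ipv4Addr};

// Sketch: returns the parsed address, or an error if reading or parsing fails.
fn prompt_host() -> Result<IpAddr, Box<dyn std::error::Error>> {
    println!(" input host IP, press enter for loopback:");

    let mut input_text = String::new();
    io::stdin().read_line(&mut input_text)?;

    let trimmed = input_text.trim(); // drops the trailing newline and any spaces
    if trimmed.is_empty() {
        // Plain Enter selects the loopback default.
        return Ok(IpAddr::V4(Ipv4Addr::LOCALHOST));
    }
    Ok(trimmed.parse()?) // IpAddr implements FromStr, so parsing may fail
}

The caller can then match on the Result and decide whether to re-prompt or give up.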

Is it safe to share an array between threads?

Is it safe to share an array between promises, like I did in the following code?
#!/usr/bin/env perl6
use v6;
sub my_sub ( $string, $len ) {
    my ( $s, $l );
    if $string.chars > $len {
        $s = $string.substr( 0, $len );
        $l = $len;
    }
    else {
        $s = $string;
        $l = $s.chars;
    }
    return $s, $l;
}
my @orig = <length substring character subroutine control elements now promise>;
my $len = 7;
my @copy;
my @length;
my $cores = 4;
my $p = @orig.elems div $cores;
my @vb = ( 0..^$cores ).map: { [ $p * $_, $p * ( $_ + 1 ) ] };
@vb[@vb.end][1] = @orig.elems;
my @promise;
for @vb -> $r {
    @promise.push: start {
        for $r[0]..^$r[1] -> $i {
            ( @copy[$i], @length[$i] ) = my_sub( @orig[$i], $len );
        }
    };
}
await @promise;
It depends on how you define "array" and "share". So far as arrays go, there are two cases that need to be considered separately:
Fixed size arrays (declared my @a[$size]); this includes multi-dimensional arrays with fixed dimensions (such as my @a[$xs, $ys]). These have the interesting property that the memory backing them never has to be resized.
Dynamic arrays (declared my @a), which grow on demand. These are, under the hood, actually using a number of chunks of memory over time as they grow.
So far as sharing goes, there are also three cases:
The case where multiple threads touch the array over its lifetime, but only one can ever be touching it at a time, due to some concurrency control mechanism or the overall program structure. In this case the arrays are never shared in the sense of "concurrent operations using the arrays", so there's no possibility to have a data race.
The read-only, non-lazy case. This is where multiple concurrent operations access a non-lazy array, but only to read it.
The read/write case (including when reads actually cause a write because the array has been assigned something that demands lazy evaluation; note this can never happen for fixed size arrays, as they are never lazy).
Then we can summarize the safety as follows:
                     | Fixed size | Variable size |
---------------------+------------+---------------+
 Read-only, non-lazy | Safe       | Safe          |
 Read/write or lazy  | Safe *     | Not safe      |
The * indicates the caveat that, while it's safe from Perl 6's point of view, you of course have to make sure you're not doing conflicting things with the same indices.
So in summary, you can safely share fixed size arrays and assign to their elements from different threads "no problem" (but beware false sharing, which might make you pay a heavy performance penalty for doing so). For dynamic arrays, it is only safe if they will only be read from during the period they are being shared, and even then only if they're not lazy (though given that array assignment is mostly eager, you're not likely to hit that situation by accident). Writing, even to different elements, risks data loss, crashes, or other bad behavior due to the growing operation.
So, considering the original example, we see that my @copy; and my @length; are dynamic arrays, so we must not write to them in concurrent operations. However, that happens, so the code can be determined to be not safe.
The other posts already here do a decent job of pointing in better directions, but none nailed the gory details.
Just have the code that is marked with the start statement prefix return the values, so that Perl 6 can handle the synchronization for you; that is the whole point of that feature.
Then you can wait for all of the Promises, and get all of the results using an await statement.
my @promise = do for @vb -> $r {
    start
        do  # to have the 「for」 block return its values
            for $r[0]..^$r[1] -> $i {
                $i, my_sub( @orig[$i], $len )
            }
}
my @results = await @promise;
for @results -> ( $i, $copy, $len ) {
    @copy[$i] = $copy;
    @length[$i] = $len;
}
The start statement prefix is only sort-of tangentially related to parallelism.
When you use it you are saying, “I don't need these results right now, but probably will later”.
That is the reason it returns a Promise (asynchrony), and not a Thread (concurrency).
The runtime is allowed to delay actually running that code until you finally ask for the results, and even then it could just do all of them sequentially in the same thread.
If the implementation actually did that, it could result in something like a deadlock if you instead polled the Promise by continually calling its .status method, waiting for it to change from Planned to Kept or Broken, and only then asked for its result.
This is part of the reason the default scheduler will start working on the code of any Promise if it has spare threads.
I recommend watching jnthn's talk "Parallelism, Concurrency, and Asynchrony in Perl 6"; slides are also available.
This answer applies to my understanding of the situation on MoarVM; not sure what the state of the art is on the JVM backend (or the JavaScript backend, fwiw).
Reading a scalar from several threads can be done safely.
Modifying a scalar from several threads can be done without having to fear a segfault, but you may miss updates:
$ perl6 -e 'my $i = 0; await do for ^10 { start { $i++ for ^10000 } }; say $i'
46785
The same applies to more complex data structures like arrays (e.g. missing values being pushed) and hashes (missing keys being added).
So, if you don't mind missing updates, changing shared data structures from several threads should work. If you do mind missing updates, which I think is what you generally want, you should look at setting up your algorithm in a different way, as suggested by @Zoffix Znet and @raiph.
No.
Seriously. Other answers seem to make too many assumptions about the implementation, none of which are tested by the spec.

Byte InputRange from file

How can I easily construct a raw byte-by-byte InputRange/ForwardRange/RandomAccessRange from a file?
file.byChunk(4096).joiner
This reads a file in 4096-byte chunks and lazily joins the chunks together into a single ubyte input range.
joiner is from std.algorithm, so you'll have to import it first.
The easiest way to make a raw byte range from a file is to just read it all right into memory:
import std.file;
auto data = cast(ubyte[]) read("filename");
// data is a full-featured random access range of the contents
If the file is too large for that to be reasonable, you could try a memory-mapped file (http://dlang.org/phobos/std_mmfile.html) and use opSlice to get an array off it. Since it is an array, you get full range features, but since it is memory mapped by the operating system, you get lazy reading as you touch the file.
For a simple InputRange, there's LockingTextReader (undocumented) in Phobos, or you could construct one yourself over byChunk or even fgetc, the C function. fgetc would be the easiest to write:
import core.stdc.stdio;

struct FileByByte {
    ubyte front;
    void popFront() { front = cast(ubyte) fgetc(fp); }
    bool empty() { return feof(fp) != 0; }
    FILE* fp;
    this(FILE* fp) { this.fp = fp; popFront(); /* prime it */ }
}
I haven't actually tested that, but I'm pretty sure it'd work. (BTW the file open and close is separate from this because ranges are supposed to be just views into data, not managed containers. You wouldn't want the file closed just because you passed this range into a function.)
This is not a forward nor random access range though. Those are trickier to do on streams without a lot of buffering code and I think that'd be a mistake to try to write - generally, ranges should be cheap, not emulating features the underlying container doesn't natively support.
EDIT: The other answer has a non-buffering way! https://stackoverflow.com/a/30278933/1457000 That's awesome.

How to handle GSM buffer on the Microcontroller?

I have a GSM module hooked up to a PIC18F87J11 and they communicate just fine. I can send an AT command from the microcontroller and read the response back. However, I have to know how many characters are in the response so I can have the PIC wait for that many characters. But if an error occurs, the response length might change. What is the best way to handle such a scenario?
For Example:
AT+CMGF=1
Will result in the following response.
\r\nOK\r\n
So I have to tell the PIC to wait for 6 characters. However, if the response was an error message, it would be something like this:
\r\nERROR\r\n
And if I already told the PIC to wait for only 6 characters, it will miss the rest of the characters; as a result, they might appear the next time I tell the PIC to read the response to a new AT command.
What is the best way to find the end of the line automatically and handle any error messages?
Thanks!
In a single line
There is no single best way, only trade-offs.
In detail
The problem can be divided into two related subproblems.
1. Receiving messages of arbitrary finite length
The trade-offs:
available memory vs implementation complexity;
bandwidth overhead vs implementation complexity.
In the simplest case, the amount of available RAM is not restricted. We just use a buffer wide enough to hold the longest possible message and keep receiving the messages bytewise. Then, we have to determine somehow that a complete message has been received and can be passed to further processing. That essentially means analyzing the received data.
2. Parsing the received messages
Analyzing the data in search of its syntactic structure is parsing by definition. And that is where the subtasks are related. Parsing in general is a very complex topic; dealing with it is expensive, both computationally and in terms of effort. It's often possible to reduce the costs if we limit the genericity of the data: the simpler the data structure, the easier it is to parse. And that limitation is called a "transport layer protocol".
Thus, we have to read the data to parse it, and parse the data to read it. This kind of interlocked problem is generally solved with coroutines.
In your case we have to deal with the AT protocol. It is old and it is human-oriented by design. That's bad news, because parsing it correctly can be challenging despite how simple it can look sometimes. It has some terribly inconvenient features, such as '+++' escape timing!
Things become worse when you're short of memory. In such a situation we can't defer parsing until the end of the message, because it very well might not even fit in the available RAM -- we have to parse it chunkwise.
...And we are not even close to opening the TCP connections or making calls! And you'll meet some unexpected troubles there as well, such as these dreaded "unsolicited result codes". The matter is wide enough for a whole book. Please have a look at least here:
http://en.wikibooks.org/wiki/Serial_Programming/Modems_and_AT_Commands. The wikibook discloses many more problems with the Hayes protocol, and describes some approaches to solve them.
Let's break the problem down into some layers of abstraction.
At the top layer is your application. The application layer deals with the response message as a whole and understands the meaning of a message. It shouldn't be mired down with details such as how many characters it should expect to receive.
The next layer is responsible for framing a message from a stream of characters. Framing is extracting the message from a stream by identifying the beginning and end of a message.
The bottom layer is responsible for reading individual characters from the port.
Your application could call a function such as GetResponse(), which implements the framing layer. And GetResponse() could call GetChar(), which implements the bottom layer. It sounds like you've got the bottom layer under control and your question is about the framing layer.
A good pattern for framing a stream of characters into a message is to use a state machine. In your case the state machine includes states such as BEGIN_DELIM, MESSAGE_BODY, and END_DELIM. For more complex serial protocols other states might include MESSAGE_HEADER and MESSAGE_CHECKSUM, for example.
Here is some very basic code to give you an idea of how to implement the state machine in GetResponse(). You should add various types of error checking to prevent a buffer overflow and to handle dropped characters and such.
#include <stdbool.h>

/* States for the framing state machine: a response is framed as "\r\n...\r\n". */
enum { BEGIN_DELIM1, BEGIN_DELIM2, MESSAGE_BODY, END_DELIM };

char GetChar(void);  /* bottom layer: blocks until one character arrives from the port */

void GetResponse(char *message_buffer)
{
    unsigned int state = BEGIN_DELIM1;
    bool is_message_complete = false;

    while (!is_message_complete)
    {
        char c = GetChar();
        switch (state)
        {
        case BEGIN_DELIM1:
            if (c == '\r')
                state = BEGIN_DELIM2;
            break;
        case BEGIN_DELIM2:
            if (c == '\n')
                state = MESSAGE_BODY;
            break;
        case MESSAGE_BODY:
            if (c == '\r')
                state = END_DELIM;
            else
                *message_buffer++ = c;
            break;
        case END_DELIM:
            if (c == '\n')
                is_message_complete = true;
            break;
        }
    }
    *message_buffer = '\0';  /* terminate the framed message body */
}

File reading and checksums in go. Difference between methods

Recently I've been creating checksums for files in Go. My code works with small and big files. I tried two methods: the first uses ioutil.ReadFile("filename") and the second works with os.Open("filename").
Examples:
The first function works with io/ioutil and is fine for small files. When I try to copy a big file my RAM gets blasted, and for a 1.5GB iso it uses 3GB of RAM.
func byteCopy(fileToCopy string) {
    file, err := ioutil.ReadFile(fileToCopy) // 1.5GB file
    omg(err)                                 // error handling function
    ioutil.WriteFile("2.iso", file, 0777)
    os.Remove("2.iso")
}
It's even worse when I want to create a checksum with crypto/sha512 and io/ioutil: it never finishes and aborts because it runs out of memory.
func ioutilHash() {
    file, _ := ioutil.ReadFile(iso)
    h := sha512.New()
    fmt.Printf("%x", h.Sum(file))
}
When using the function below everything works fine.
func ioHash() {
    f, err := os.Open(iso) // iso is a big ~1.5GB file
    omg(err)               // error handling function
    defer f.Close()
    h := sha512.New()
    io.Copy(h, f)
    fmt.Printf("%x", h.Sum(nil))
}
My Question:
Why is the ioutil.ReadFile() function not working right? The 1.5GB file should not fill my 16GB of RAM. I don't know where to look right now.
Could somebody explain the differences between the methods? I don't get it from reading the godoc and examples.
Having usable code is nice, but understanding why it works is way above that.
Thanks in advance!
The following code doesn't do what you think it does.
func ioutilHash() {
    file, _ := ioutil.ReadFile(iso)
    h := sha512.New()
    fmt.Printf("%x", h.Sum(file))
}
This first reads your 1.5GB iso. As jnml pointed out, it continuously makes bigger and bigger buffers to fill it. In the end, the total buffer size is no less than 1.5GB and no greater than 1.875GB (by the current implementation).
However, after that you then make another buffer! h.Sum(file) doesn't hash file. It appends the current hash to file! This may or may not cause yet another allocation.
The real problem is that you are taking that file, now with the hash appended, and printing it with %x. fmt actually pre-computes, using the same sort of growing method jnml pointed out that ioutil.ReadAll uses, so it keeps allocating bigger and bigger buffers to store the hex of your file. Since each hex character encodes 4 bits, the hex output is twice the input size, which means no less than a 3GB buffer for that and no greater than 3.75GB.
This means your active buffers may be as big as 5.625GB. Combine that with the GC not being perfect and not removing all the intermediate buffers, and it could very easily fill your space.
The correct way to write that code would have been:
func ioutilHash() {
    file, _ := ioutil.ReadFile(iso)
    h := sha512.New()
    h.Write(file)
    fmt.Printf("%x", h.Sum(nil))
}
This doesn't do nearly the same number of allocations.
The bottom line is that ReadFile is rarely what you want to use. IO streaming (using readers and writers) is always the best way when it is an option. Not only do you allocate much less when you use io.Copy, you also hash and read the disk concurrently. In your ReadFile example, the two resources are used synchronously when they don't depend on each other.
ioutil.ReadFile is working right. It's your fault for abusing system resources by using that function for things you know are huge.
ioutil.ReadFile is a handy helper for files you're pretty sure in advance are going to be small, like configuration files, most source code files, etc. (Actually it's optimized for files <= 1e9 bytes, but that's an implementation detail and not part of the API contract. Your 1.5GB file forces it to use slice growing and thus allocate more than one big buffer for your data in the process of reading the file.)
Even your other approach using os.File is not okay. You definitely should be using the "bufio" package for sequential processing of large files; see bufio.NewReader.