Is boost asio appropriate for this use case? [closed]

I am decoding an image file: the file has tagged header info mixed with 4K pixel code blocks.
The platform is primarily Windows, but it could be OS X or Linux.
Once I read in a code block, I can launch (asynchronously) my decode routine on this block,
while continuing to read the file for header info and code blocks.
Currently, I do synchronous reads using fread(...).
Is it worthwhile to switch to Boost.Asio to asynchronously read in the code blocks?
The read callback could trigger my decode routine, and I wouldn't have to wait for the read
before carrying on to the next code block.
If so, can anyone point me to a reference/tutorial covering asynchronous reads from disk with boost::asio?

There is nothing specific to Asio about reading from disk; you can keep your fread code. At the end of each read, you post two new "jobs": one to decode the block just read, and one to read the next block.
If you want "real" multithreading, you need a pool of two threads calling io_service::run(). A sketch of this pattern follows below.
Or you can create a pool of x threads and split the read of your entire file into x sections, one per thread.
In any case, Asio is powerful and easy to use, but it does have a learning curve; maybe the documentation is missing a tutorial.
IMHO that post is a very good introduction to the way Asio should be used. You should read it, and read it again...
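To make the "post two jobs" pattern concrete, here is a minimal sketch using the (older) io_service API. decode_block(), the 4096-byte block size, and "image.bin" are hypothetical stand-ins for the asker's actual decoder and file format:

    #include <boost/asio.hpp>
    #include <cstdio>
    #include <memory>
    #include <thread>
    #include <vector>

    boost::asio::io_service io;
    // Keeps run() from returning while the read/decode chain is still going.
    std::unique_ptr<boost::asio::io_service::work> keep_alive;

    void decode_block(std::vector<char> block)
    {
        // ... CPU-bound decode of one code block goes here ...
        (void)block;
    }

    void read_next_block(std::FILE* f)
    {
        std::vector<char> block(4096);               // hypothetical block size
        std::size_t n = std::fread(block.data(), 1, block.size(), f);
        if (n == 0) {                                // EOF: let run() return once
            keep_alive.reset();                      // the remaining jobs drain
            return;
        }
        block.resize(n);
        // Post two new jobs: decode what was just read, and read the next block.
        // Each read posts exactly one successor read, so reads never overlap.
        io.post([b = std::move(block)]() mutable { decode_block(std::move(b)); });
        io.post([f] { read_next_block(f); });
    }

    int main()
    {
        std::FILE* f = std::fopen("image.bin", "rb"); // hypothetical file name
        if (!f) return 1;
        keep_alive.reset(new boost::asio::io_service::work(io));
        io.post([f] { read_next_block(f); });

        // A pool of two run() threads: one can decode while the other reads.
        std::thread t1([] { io.run(); });
        std::thread t2([] { io.run(); });
        t1.join();
        t2.join();
        std::fclose(f);
    }

The work object keeps run() from returning while the queue is momentarily empty; resetting it at EOF lets both threads exit once the remaining decode jobs drain.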

Related

Binary serialisation of Rust data structures [closed]

What is the current state of serialisation-to-binary in Rust?
I have some large (1-10MB) data structures to be sent across a network, and I don't want to encode them as JSON or hex (the two serialisers I have found).
I have found #[repr(packed)]. Is this what I should use, or is there something more portable?
#[repr(packed)] only makes your data small. It does not offer any format guarantees or serialization help.
You have a few choices here (ordered by my opinion from best to worst solution):
1. Use the Cap'n Proto implementation for Rust: https://github.com/dwrensha/capnproto-rust
   - It's not really serialization, more of an enforced format for structs that are then sent over the network without any conversion.
   - Fast.
2. Write your own Serializer and Deserializer.
   - You have full control over the format.
   - Runtime overhead for every single datum.
   - You need to implement lots of stuff.
3. Transmute your structs to a [u8] and send that (see the sketch below).
   - Probably the fastest solution.
   - You need to make sure that the compiler used to build the programs on both sides is exactly the same, otherwise the formats don't match up.
   - Someone evil may send you bad data; when you transmute it back, you get buffer overflows and the like.
   - References in your data structure will become wild pointers and cause undefined behaviour. Don't use references.
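To illustrate option 3's caveats, here is a minimal sketch in C++ terms; the hazards are the same for a Rust transmute. Pixel and the helper names are hypothetical:

    #include <cstdint>
    #include <cstring>
    #include <type_traits>
    #include <vector>

    // A struct whose raw bytes are its wire format. Only sane for trivially
    // copyable types with no references/pointers; the layout (padding,
    // endianness, field order) is whatever this particular compiler chose.
    struct Pixel {                                   // hypothetical payload
        std::uint16_t x, y;
        std::uint32_t rgba;
    };
    static_assert(std::is_trivially_copyable<Pixel>::value,
                  "raw-byte (de)serialization needs a trivially copyable type");

    std::vector<std::uint8_t> to_bytes(const Pixel& p)
    {
        std::vector<std::uint8_t> buf(sizeof p);
        std::memcpy(buf.data(), &p, sizeof p);       // "transmute" to bytes
        return buf;
    }

    bool from_bytes(const std::vector<std::uint8_t>& buf, Pixel& out)
    {
        // The sender may be evil: validate the length before copying, or a
        // short message turns into a buffer over-read.
        if (buf.size() != sizeof out)
            return false;
        std::memcpy(&out, buf.data(), sizeof out);
        return true;
    }

Because the bytes are just the compiler's in-memory layout, this is only a viable wire format between identically built binaries.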

Why would a Scripting language be made 'purposefully Turing non-complete'? [closed]

So, I was reading about Bitcoin Script in the official documentation and found this line: "Script is simple, stack-based, and processed from left to right. It is purposefully not Turing-complete, with no loops." I tried hard to reason about it but couldn't understand why someone would make a language "purposefully non Turing-complete". What is the reason for this? What happens if a language becomes Turing-complete?
And, extending further, does "with no loops" have anything to do with the script being non-Turing-complete?
Possible reasons:
Security: if there are no loops, the program will always terminate, so a user can't hang the interpreter. If, in addition, there is a limit on the size of the script, you can enforce pretty restrictive time constraints (see the sketch after this list). Another example of a language without loops is Google queries: if Google allowed loops in queries, users would be able to kill their servers.
Simplicity: having no loops makes a language much easier to read and write for non-programmers.
No need: if there is no business need for it, why bother?
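Here is a minimal sketch of that security argument, using a made-up three-opcode language (not Bitcoin's actual opcode set): a single left-to-right pass with no jump opcode executes each opcode at most once, so a size cap on the script directly caps execution time:

    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    enum class Op : std::uint8_t { Push1, Add, Equal };  // hypothetical opcodes

    bool run(const std::vector<Op>& script)
    {
        if (script.size() > 200)                 // size limit => time limit
            throw std::length_error("script too large");

        std::vector<std::int64_t> stack;
        for (Op op : script) {                   // one left-to-right pass; with
            switch (op) {                        // no jump opcode, no loops exist
            case Op::Push1:
                stack.push_back(1);
                break;
            case Op::Add: {
                if (stack.size() < 2) return false;
                std::int64_t a = stack.back(); stack.pop_back();
                stack.back() += a;
                break;
            }
            case Op::Equal: {
                if (stack.size() < 2) return false;
                std::int64_t a = stack.back(); stack.pop_back();
                stack.back() = (stack.back() == a) ? 1 : 0;
                break;
            }
            }
        }
        // Script "succeeds" iff it leaves a truthy value on top of the stack.
        return !stack.empty() && stack.back() != 0;
    }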
The main reason is that Bitcoin scripts are executed by all miners when processing/validating transactions, and we don't want them to get stuck in an infinite loop.
Another reason, according to this message from Mike Hearn, is that Bitcoin Script was an afterthought of Satoshi's to incorporate a few types of transactions he had had in mind. This might explain why it is not so well designed and has little expressiveness.
Ethereum has a different approach by allowing arbitrary loops but making the user pay for execution steps.

Best practice with Notifications on iphone [closed]

I'm implementing a system to view the progress of a download in Objective-C, using NSURLConnection.
Each time I receive a part of the file I will post a notification through NSNotificationCenter, but with a file of 500-600 KB, how many messages will I get? One for each byte, or fewer? Is this a good approach, or is it too heavy?
The size of the chunks that NSURLConnection delivers in the connection:didReceiveData: method varies based on the speed of your connection. I've used NSURLConnection for downloading files up to 1.5GB and have always had good results updating a progress bar whenever connection:didReceiveData: is called.
The NSData* that you'll receive ranges from about 2 KB to 40 KB. For small files you're likely to get only one or two connection:didReceiveData: calls before connectionDidFinishLoading: is called.
You will definitely have far fewer notifications than bytes. I'd say it sounds like a solution that would work.

How could I represent an interrupt (for microcontrollers) in a flowchart? [closed]

Does anyone have any visual examples?
You would have to have a separate flow chart for the interrupt processing. Flowcharts are meant for showing flow of control, and interrupts, by their very nature, are a break in control flow.
Typically interrupts communicate with your "main" function (or other interrupts for that matter) through the use of "shared" global variables in C-based embedded systems. I think a sensible way to represent this in a flow chart is to use a dashed line between processing blocks where such "communications" impact program flow.
I would set up a finite state diagram that represents the normal states of control and the interrupt states; each state would be a block-level element that contained a flowcharty kind of diagram.
Depending on flowchart structure, it would probably make most sense to have the interrupt originate from a node/box that doesn't derive from another, since, by definition, an interrupt doesn't spring from normal software flow (unless it's a software-triggered interrupt). It might make sense to have it on a separate flow chart, or to show it with the rest of the flowchart depending on whether it might trigger behavior in the main flow of the chart.
Usually, without a tasking OS or library, the interrupts just set a flag variable that then affects the flow (see the sketch below). I think #JustJeff has it right.
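A minimal sketch of that "shared flag" pattern, in generic C-style code for a hypothetical MCU (register access and handler are stubbed): the ISR only sets a flag, and the main loop polls it, which is exactly the spot a dashed line would connect the two flowcharts:

    #include <stdint.h>

    // Shared flags: written by the ISR, read by main(), hence volatile so
    // the compiler doesn't cache them in registers.
    static volatile uint8_t uart_byte_ready = 0;
    static volatile uint8_t uart_byte = 0;

    static uint8_t read_uart_data_register(void) { return 0; } // stub for the sketch
    static void handle_byte(uint8_t b) { (void)b; }            // stub for the sketch

    // Interrupt service routine: entered by hardware, not by the normal flow,
    // so it gets its own flowchart. (The vector/attribute syntax that attaches
    // this to a real interrupt varies by toolchain and is omitted here.)
    void uart_rx_isr(void)
    {
        uart_byte = read_uart_data_register();
        uart_byte_ready = 1;                 // flag the main loop
    }

    int main(void)
    {
        for (;;) {
            if (uart_byte_ready) {           // this test is where a dashed
                uart_byte_ready = 0;         // line from the ISR's chart
                handle_byte(uart_byte);      // would join the main chart
            }
            /* ... rest of the main loop ... */
        }
    }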

How do you decide which API function documentations to read and how seriously? [closed]

Suppose that you are writing or maintaining a piece of code that uses some API that you are not 100% familiar with.
How do you decide whether to read the documentation of a certain call target, and how much time to spend reading it? How do you decide not to read it?
(Let's assume you can read it by opening the HTML documentation, inspecting the source code, or using the hover mechanism in the IDE).
Ideally you should read all of it, but we know that's a pain in the... you know. What I normally do in those cases (and I did it a lot while working as a freelancer) is weigh a few factors and, depending on the result, read the docs or not.
Factors that tell me I shouldn't read the docs:
What the function does is easy to guess from the name.
It isn't relevant to the code I'm maintaining: for example, you are checking how some code deletes files, and you have some function that obviously does some UI update. You don't care about that for now.
If debugging: the function didn't change the program state in a way meaningful to the task at hand. As before, you don't want to learn what SetOverlayIcon does, if you are debugging the deletion code because it's dying with a file system error.
The API is just a special case of an API you already know, and you can guess what the special case is and what the special arguments (if any) do. For example, if you have WriteToFile(string filename) and WriteToFile(string filename, boolean overwrite), knowing the first is enough to guess what the second does.
Of course, everything depends on the context, so even those rules have exceptions.