I'm writing a directory-tree scanning function in D that combines tools such as grep and file: it greps for patterns in a file only if the file doesn't match a set of magic bytes indicating file types such as ELF, images, etc.
What is the best approach to making such exclusion logic run as fast as possible while minimizing file I/O? I typically don't want to read the whole file if I only need a few magic bytes at the beginning. However, to make the code more general for the future (some magics may lie at the end, or somewhere other than the beginning), it would be nice if I could use an mmap-like interface that lazily fetches data from disk only when it is actually read. The array interface also simplifies my algorithms.
Is D's std.mmfile the best option in this case?
Update: According to this post, I guess mmap is advised: http://forum.dlang.org/thread/dlrwzrydzjusjlowavuc#forum.dlang.org
If I only need read-access as an array (opIndex) are there any cons to using std.mmfile over std.stdio.File or std.file?
If you want to lazily read a file with Phobos, you pretty much have three options:
1. Use std.stdio.File's byLine and read a line at a time.
2. Use std.stdio.File's byChunk and read a particular number of bytes at a time.
3. Use std.mmfile.MmFile and operate on the file as an array, taking advantage of mmap underneath the hood to avoid reading in the whole file.
I fully expect that #3 is going to be the fastest (profiling could prove differently, but I'd be very surprised given how fantastic mmap is). It's also probably the easiest to use, because you get an array to operate on. The only problem with MmFile that I'm aware of is that it's a class when it should arguably be a ref-counted struct so that it would clean itself up when you were done. Right now, if you don't want to wait for the GC to clean it up, you'd have to manually call unmap on it or use destroy to destroy it without freeing its memory (though destroy should be used with caution). There may be some sort of downside to using mmap (which would then naturally mean that there was a downside to using MmFile), but I'm not aware of any.
In the future, we're going to end up with some range-based streaming I/O stuff, which might be closer to what you need without actually using mmap, but that hasn't been completed yet, and mmap is so incredibly cool that there's a good chance that it would still be better to use MmFile.
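For illustration, here's a minimal sketch of the MmFile approach applied to your magic-byte check; the function name and the ELF-only check are placeholders for whatever signature table you actually use:

import std.mmfile;

bool startsWithElfMagic(string path)
{
    auto mmf = new MmFile(path);           // read-only mapping of the whole file
    scope(exit) destroy(mmf);              // unmap eagerly instead of waiting for the GC
    if (mmf.length < 4)
        return false;
    immutable ubyte[4] elfMagic = [0x7f, 'E', 'L', 'F'];
    auto head = cast(ubyte[]) mmf[0 .. 4]; // only the pages you touch get faulted in
    return head == elfMagic[];
}

Because the mapping is lazy, checking a magic at the end of the file would just be another slice (e.g. mmf[mmf.length - 4 .. mmf.length]) without reading anything in between.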
You can combine seek and rawRead on std.stdio.File to do what you want: seek to the offset you care about, then rawRead only the few bytes you need.
auto file = File(path, "rb");               // open the file being scanned
ubyte[1024] buff;
ubyte[] magic = file.rawRead(buff[0 .. 4]); // only the first 4 bytes are read
// check magic against your signature table
Then, depending on the OS's caching/read-ahead strategy, this can be nearly as fast as MmFile; however, multiple seeks will ruin the read-ahead behavior.
So I'm in the process of writing some code that needs to be both memory-efficient and fast. I have a working reference in Java already, but am rewriting it in Kotlin.
I basically need to parse a lot of CSV files, load them into a tree once, and then traverse them repeatedly once they're loaded.
I originally wrote the whole thing using sequences, but found it caused the GC to spike repeatedly.
I can't really share this code, but was wondering if y'all know what would cause this to happen.
I'll be happy to add details as you need them, but here's my basic pattern.
step1: inputStream -> csvLines: List<String>
step2: csvLines.drop(x).fold(emptySequence()) -> callOtherFunctionWithFold -> callOtherFunctionWithFold -> Sequence<OutputObjects>
I keep the csvLines as a separate list because I access specific rows based on the rules I need.
step3: Sequence<OutputObjects> -> nodes
The result is functional, but this code is much less memory-efficient and less performant than the Java equivalent, which just uses ArrayLists and modifies them in place.
Looking at the VisualVM output, I'm creating a ton of kotlin.*.ArrayIterators; it looks like I create one every time I use a lambda.
So what can I do to make this more efficient? I thought sequences were supposed to reduce object creation by being lazy, but it looks like I'm doing things that break their ability to do so.
Do sequences re-evaluate after every GC run, or on every run in general? If so, that would make them unsuitable for objects that are loaded at startup, right?
To use Kotlin sequences, you need to start with asSequence()
csvLines.asSequence()
.drop(x)
.fold(...)
...
If you leave that out, it uses the Collection functions instead, which create a new (intermediate) collection after every call.
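For instance, here's a minimal sketch of the difference; the lambdas are made-up placeholders for your real per-line logic:

// Eager: each step materializes a whole intermediate List before the next step runs.
val eager = csvLines.drop(1).map { it.split(",") }.filter { it.size > 3 }

// Lazy: asSequence() defers the work; each element flows through all the steps,
// and nothing is materialized until a terminal operation such as toList() or fold().
val lazy = csvLines.asSequence()
    .drop(1)
    .map { it.split(",") }
    .filter { it.size > 3 }
    .toList()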
I'm very new to Go and am currently porting a PHP program.
I understand that Go is not a dynamically-typed language and I like that about it. It seems very structured and easy to keep track of everything.
But I've been coming across situations that seem to be a little ... ugly. Is there a better way of performing this sort of process:
plyr := builder.matchDetails.plyr[i]
plyrDetails := strings.Split(plyr, ",")
details := map[string]interface{}{
    "position": plyrDetails[0], "id": plyrDetails[1],
    "xStart": plyrDetails[2], "zStart": plyrDetails[3],
}
EDIT:
Is there a better way to achieve a map containing the strings from plyr than to create two additional variables, to be destroyed straight afterwards? Or is this the correct way?
tl;dr:
If possible, choose a different format and let a library do the string parsing/generation for you
Use structs rather than maps for anything you use a few times, for more compiler checks
The common way of using encoding/json accomplishes both of those.
Meanwhile, don't sweat perf too much because you'll probably vastly improve the old app's speed regardless; there's no indication speed of parsing or GC is a problem yet; and the syntactical differences you mentioned in the first rev. of the post don't necessarily actually relate to GC.
So, I understand you may be porting piece-for-piece, and that may limit what you can change now.
But if/when you can change things, a really clean solution would be to use the encoding/json package and a struct: the json package will parse input/generate output from structs without any manual string manipulation on your part, and using a struct gives you compile-time checking rather than only the runtime checking you get with a map. Lots of Go apps (and others) use JSON to expose their services.
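As a rough sketch of that approach; the Player type and field names below are just guesses from your snippet, not your real schema:

package main

import (
	"encoding/json"
	"fmt"
)

// Player mirrors one record; the json tags drive both parsing and generation.
type Player struct {
	Position string `json:"position"`
	ID       string `json:"id"`
	XStart   string `json:"xStart"`
	ZStart   string `json:"zStart"`
}

func main() {
	raw := `{"position":"GK","id":"7","xStart":"0.5","zStart":"0.1"}`

	var p Player
	if err := json.Unmarshal([]byte(raw), &p); err != nil { // no manual string splitting
		panic(err)
	}
	fmt.Println(p.Position, p.ID) // field access is checked at compile time, unlike map keys

	out, _ := json.Marshal(p) // generating output is just as direct
	fmt.Println(string(out))
}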
An intermediate step could be to introduce struct types for any internal structure you use at least a few times, rather than maps, so even without updating the parsing, at least the internals of the app get the benefits of compile-time checking. structs are also what things like the gorm object/relational mapper expect to deal with. They happen to use less memory than maps, and be quicker (and more concise syntactically) to access, but those aren't even necessarily the most important considerations here.
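For example, a sketch of that intermediate step applied to your snippet, keeping your existing strings.Split parsing and only swapping the map for a struct (Player is the hypothetical type sketched above):

// toPlayer converts one comma-separated record into the struct form.
func toPlayer(record string) Player {
	f := strings.Split(record, ",")
	return Player{Position: f[0], ID: f[1], XStart: f[2], ZStart: f[3]}
}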
On the performance of what you have now, and particularly whether different syntax would make it faster: don't sweat that, for a bunch of reasons: the port's likely to be faster than the PHP was whatever you do; we don't yet have any indication that parsing or GC is actually slow or your bottleneck; and the syntactical differences you talked about in the first revision of your question may not relate to GC much or at all. More/fewer var names in your code may not correspond to more/fewer heap allocations, 'cause often Go can allocate on the stack, briefly discussed under 'escape analysis' in Dave Cheney's Gocon Tokyo slides. And as peterSO said, we seem to be looking at allocations of smallish references, not, say, copying all of the string bytes from the request each time.
Go is NOT PHP. Write Go programs in Go. Write PHP programs in PHP.
Interface values are represented as a two-word pair giving a pointer to information about the type stored in the interface and a pointer to the associated data. (Go Data Structures: Interfaces)
Reusing Go interface variables to "increase performance" makes no sense.
I'm currently writing a web crawler (using the Python framework Scrapy).
Recently I had to implement a pause/resume system.
The solution I implemented is of the simplest kind and, basically, stores links when they get scheduled, and marks them as 'processed' once they actually are.
Thus, I'm able to fetch those links (obviously there is a little more stored than just a URL: the depth value, the domain the link belongs to, etc.) when resuming the spider, and so far everything works well.
Right now, I've just been using a MySQL table to handle those storage actions, mostly for fast prototyping.
Now I'd like to know how I could optimize this, since I believe a database shouldn't be the only option available here. By optimize, I mean using a very simple and light system that can still handle a great amount of data written in a short time.
For now, it should be able to handle the crawling of a few dozen domains, which means storing a few thousand links a second ...
Thanks in advance for suggestions
The fastest way of persisting things is typically to just append them to a log -- such a totally sequential access pattern minimizes disk seeks, which are typically the largest part of the time costs for storage. Upon restarting, you re-read the log and rebuild the memory structures that you were also building on the fly as you were appending to the log in the first place.
Your specific application could be further optimized since it doesn't necessarily require 100% reliability -- if you miss writing a few entries due to a sudden crash, ah well, you'll just crawl them again. So, your log file can be buffered and doesn't need to be obsessively fsync'ed.
I imagine the search structure would also fit comfortably in memory (if it's only for a few dozen sites you could probably just keep a set with all their URLs, no need for bloom filters or anything fancy) -- if it didn't, you might have to keep in memory only a set of recent entries, and periodically dump that set to disk (e.g., merging all entries into a Berkeley DB file); but I'm not going into excruciating details about these options since it does not appear you will require them.
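A minimal sketch of that append-and-replay pattern in Python; the file name, record format, and class name are made up for illustration:

import json

class CrawlLog:
    """Append-only journal of scheduled/processed links, replayed on startup."""

    def __init__(self, path="crawl.log"):
        self.scheduled = {}          # url -> metadata (depth, domain, ...)
        self.processed = set()
        self._replay(path)
        # buffered append; we deliberately don't fsync on every write
        self._log = open(path, "a", buffering=1024 * 1024)

    def _replay(self, path):
        try:
            with open(path) as f:
                for line in f:
                    event = json.loads(line)
                    if event["type"] == "scheduled":
                        self.scheduled[event["url"]] = event
                    else:
                        self.processed.add(event["url"])
        except FileNotFoundError:
            pass                     # first run, nothing to replay

    def schedule(self, url, depth, domain):
        event = {"type": "scheduled", "url": url, "depth": depth, "domain": domain}
        self.scheduled[url] = event
        self._log.write(json.dumps(event) + "\n")   # purely sequential append, no seeks

    def mark_processed(self, url):
        self.processed.add(url)
        self._log.write(json.dumps({"type": "processed", "url": url}) + "\n")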
There was a talk at PyCon 2009 that you may find interesting, Precise state recovery and restart for data-analysis applications by Bill Gribble.
Another quick option may be to use pickle to serialize your application state to disk.
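For example, a sketch (assuming state is any picklable object holding your pending and processed links):

import pickle

def save_state(state, path="crawler_state.pkl"):
    with open(path, "wb") as f:
        pickle.dump(state, f)

def load_state(path="crawler_state.pkl"):
    with open(path, "rb") as f:
        return pickle.load(f)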
I'm using VB.Net, and I have a set of data which I have to be able to filter through fairly quickly. Basically, the program is like Google Suggest, but instead of a drop-down menu I'm using a listbox. When a user enters a word, I compare the word using LINQ and filter those that contain the user's input. The data are all strings of variable length (from 0 to 200 characters, most around the 150-character mark), and I have 240,000+ of these strings and counting, all stored in an XML file.
A colleague of mine told me that loading all of that to memory (using VB.Net's XML serializer plus collections of string/objects) is not practical, and would slow the 'startup' time of the program. I haven't finished building the program yet and I'm having second thoughts about continuing this path.
So, my question is: Should I continue with my current approach on the problem (which is load everything to memory on startup), or is there a better way of solving my dilemma?
If you want to avoid a slow startup and keeping the data in memory isn't a performance issue, then load it asynchronously. Although loading 240,000+ strings from an XML file and keeping them in memory doesn't sound like the greatest idea. Probably a database would be the better approach, or at least some format like JSON that's faster to parse.
Depends on a number of things:
If
((you know the strings will not hugely increase in number) &&
(you know the spec of the machines that will run your app) &&
(you are able to test that the load time is *good enough* on the above spec))
{
**don't bother changing approach.**
}
else
{
**change approach.**
}
The alternative approach is obviously some kind of async lazy load.
You're talking about loading roughly 36 MB of strings. While this isn't a daunting amount by any means (though you could probably load it faster reading the XML yourself; I wouldn't go with the serialization engine if I were worried about performance), it's also a non-trivial amount. You're looking at adding a couple of seconds to your startup time, assuming you don't do it asynchronously as Mircea suggests.
If you do do it asynchronously, you'll have to ensure that any UI process that relies on the data doesn't occur until after it has loaded. That may be a difficult thing to ensure.
The question seems to imply an online application. A few suggestions if that is the case:
The data could / should be zipped. I suspect it would compress very nicely.
Maybe the data could be cached across multiple sessions, possibly delivered as HTML content with an appropriate cache-expiry date. This would save systematic loading, and may be feasible if the data isn't updated frequently.
The suggestion feature could be initially disabled (i.e. showing a "loading..." message while the application initializes the cache asynchronously). In this fashion the application would be quickly available upon startup, even though the suggest feature may lag by up to, say, 30 seconds or so.
Edit: Independently of how the data gets downloaded and cached, I second the opinion of Mircea Grelus that an XML file of this size is a poor substitute for a database.
It may not be a bad idea to load the XML into memory when the app starts up. But if you go this route I'd look into using the BackgroundWorker thread. The idea would be to load the XML into memory asynchronously so the UI is still responsive as this is going on. As far as the user is concerned the app shouldn't appear to start any slower, and yet once done the Google-suggest-like feature should be significantly faster.
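A minimal sketch of that idea; the file name, the LoadSuggestions helper, and the suggestions field are placeholders for however you actually parse and store the data:

Private WithEvents worker As New System.ComponentModel.BackgroundWorker()

Private Sub StartLoad()
    worker.RunWorkerAsync()   ' returns immediately, so the UI stays responsive
End Sub

Private Sub worker_DoWork(ByVal sender As Object, ByVal e As System.ComponentModel.DoWorkEventArgs) _
        Handles worker.DoWork
    ' Runs on a background thread; parse the XML into a list of strings here.
    e.Result = LoadSuggestions("suggestions.xml")
End Sub

Private Sub worker_RunWorkerCompleted(ByVal sender As Object, ByVal e As System.ComponentModel.RunWorkerCompletedEventArgs) _
        Handles worker.RunWorkerCompleted
    ' Back on the UI thread; safe to hand the result to the ListBox / LINQ filter.
    suggestions = CType(e.Result, List(Of String))
End Sub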
I must say that even in memory this is an inherently inefficient operation since you have no advantage of using an index when querying an XML file in this way. This is something that would be 10X faster in SQL with full-text searching.
Of course XML has the advantage of being self-contained and requiring no additional components. And that makes it a decent choice for small desktop apps that query small amounts of data. Otherwise I would consider using a database for better performance.
You might be better served by using binary serialization rather than XML serialization to persist the data that your app reads on startup, particularly if you end up implementing a data structure that's faster to search than a StringCollection. You'd still maintain the XML version of the data somewhere, of course.
And by all means, use a BackgroundWorker to load the data asynchronously if that'll make your application feel more responsive.
I created this simple textpad program in WPF/VB.NET 2008 that automatically saves the content of the forms to an XML file on every keystroke.
Now, I'm trying to make the program see changes to the XML file in real time. For example, if I open two of my textpads and write in the first one, the change should automatically be reflected in the other textpad.
How can I do this?
One of my colleagues told me to read about INotifyPropertyChanged (which I did), but how can I apply it to my application?
:( help~
btw, I got the idea from a Google Wave demo, and I'm actually trying to do something bigger..
Note: this approach will be really, really expensive in terms of disk I/O, memory usage, and CPU time. Why are you using XML? Is that the native format of the data you are editing? You may want to look at a more compact format, one that will use less memory, generate fewer I/Os, and use less CPU.
Also note that your writer may need to flush the file for the watcher to notice any changes. This is expensive as well, especially if you're doing it on every keystroke.
Be sure to use the correct file open attributes (sharing, reading and writing).
You may want to consider using shared memory to communicate between your processes. This will be less expensive. You can avoid large amounts of disk I/O by only writing changes to disk when the user asks to commit them, or when there is a hint to do so. I suggest avoiding doing this on every keystroke.
Remember, your app needs to be a good system citizen and consume a reasonable amount of system resources. This is especially true running on netbooks and other 'low spec' systems.
You will probably need to use the FileSystemWatcher to watch the file on the disk rather than a property in the running instance of the application.
Or you could use some custom message passing between different instances of your application.
INotifyPropertyChanged isn't going to work for your application. That interface is used when data-binding some element to a UI object.
Your best bet is going to be to attach a FileSystemWatcher to the file when you open it for editing. You can then use the change events to reload the file as needed in each instance of your application.
This will also load changes made from external editors.
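A minimal sketch of that, assuming WPF; the WatchDocument and ReloadDocument routines are placeholders for your own open/reload logic:

Private watcher As System.IO.FileSystemWatcher

Private Sub WatchDocument(ByVal xmlPath As String)
    Dim dir As String = System.IO.Path.GetDirectoryName(xmlPath)
    Dim name As String = System.IO.Path.GetFileName(xmlPath)
    watcher = New System.IO.FileSystemWatcher(dir, name)
    watcher.NotifyFilter = System.IO.NotifyFilters.LastWrite
    AddHandler watcher.Changed, AddressOf OnDocumentChanged
    watcher.EnableRaisingEvents = True
End Sub

Private Sub OnDocumentChanged(ByVal sender As Object, ByVal e As System.IO.FileSystemEventArgs)
    ' Raised on a worker thread; marshal back to the UI thread before touching controls.
    Dispatcher.Invoke(New Action(Of String)(AddressOf ReloadDocument), e.FullPath)
End Sub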
It sounds like you are using file I/O as a form of interprocess communication. If so, IMO you need to rethink your design, especially if you are doing something "bigger" than Google Wave (whatever bigger means in this context), as what you are proposing is terribly inefficient.
Do some searching on interprocess communication and you will get a whole bunch of ideas; Foredecker's idea (+1) of shared memory is a good possibility, for example.