Dictionary/Client VS Application Variables - optimization

Hi, I have a question about my server performance ... I have a Classic ASP CMS hosting ~250 websites, and for each website we build a Classic ASP dictionary using
set dict = CreateObject("Scripting.Dictionary")
dict.add "test1","Value Test1"
dict.add "test2","Value Test2"
dict.add "test3","Value Test3"
That dictionary is then loaded on every page for every user ...
Let's say we have about ~150,000 users visiting those websites monthly, with each page load rebuilding a dictionary of about ~100k ...
Should I use an Application variable as the dictionary instead of loading my dictionary every time?
And is it really going to improve my server performance?

Loading a dictionary on every ASP request is definitely a bad idea; it will not only hurt your performance but will also fragment your virtual memory.
Using an array instead has much the same problem: each request would still need to allocate all the memory to hold it, and it still needs populating on every request.
The simple answer would be yes, use the Application object as the dictionary. This will cost you much less in memory and CPU. The downside is the risk of colliding with existing Application object usage; you may need to prefix your keys to avoid this problem (see the sketch below).
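A rough sketch of that approach, reusing the data from the question and an arbitrary "dict_" prefix (both just illustrative): the values are flattened into prefixed Application entries once, e.g. from Application_OnStart in global.asa, instead of rebuilding a Scripting.Dictionary per request. Note that you generally cannot store a Scripting.Dictionary instance itself at Application scope (ASP disallows apartment-threaded objects there), which is another reason to flatten the data into plain Application keys.

' Run once per application start, e.g. in Application_OnStart.
' The "dict_" prefix is illustrative -- pick one that cannot collide
' with keys the application already uses.
Sub LoadSharedDictionary()
    Application.Lock
    If IsEmpty(Application("dict_loaded")) Then
        Application("dict_test1") = "Value Test1"
        Application("dict_test2") = "Value Test2"
        Application("dict_test3") = "Value Test3"
        Application("dict_loaded") = True
    End If
    Application.Unlock
End Sub

' Lookup on any page: no per-request object creation at all.
Function GetDictValue(key)
    GetDictValue = Application("dict_" & key)
End Function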

I'd absolutely suggest loading the dictionary only once. The Dictionary object is heavy in terms of memory, slow in terms of lookup and, the big one, isn't always destroyed when you think it should be. Even after a user has left the page, the object can linger in memory waiting to be disposed of (even if you explicitly "destroy" it). Now multiply that by the number of page hits per visit per user...
An alternative and more memory-light method would be to use an array: one-dimensional if you can keep track of the index somewhere (best), or two-dimensional with a lookup function if you need to (certainly if others are maintaining the code now or in the future). A sketch of the latter follows below.
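A minimal sketch of the two-dimensional variant (the array contents and the LookupValue name are just illustrative): build the array once, then use a small helper for lookups instead of creating a COM object per request.

' Built once; a 2-D array of key/value pairs (it could also be stored
' at Application scope, since plain arrays are allowed there).
Dim siteData(2, 1)
siteData(0, 0) = "test1" : siteData(0, 1) = "Value Test1"
siteData(1, 0) = "test2" : siteData(1, 1) = "Value Test2"
siteData(2, 0) = "test3" : siteData(2, 1) = "Value Test3"

' Linear scan by key; fine for a small, read-only lookup table.
Function LookupValue(arr, key)
    Dim i
    LookupValue = ""
    For i = 0 To UBound(arr, 1)
        If arr(i, 0) = key Then
            LookupValue = arr(i, 1)
            Exit Function
        End If
    Next
End Function

Response.Write LookupValue(siteData, "test2")   ' -> "Value Test2"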

I'm pretty sure that instantiating a single Scripting.Dictionary on each page shouldn't be a problem on any website. If performance is an issue, I suggest profiling your pages first to see where the problem is; there's a good chance an unoptimised query somewhere is taking 100+ ms to finish.
We run a Classic ASP site that handles 200k pageviews a day and use Scripting.Dictionary extensively on every page (25+ instances). We use it as a base for all kinds of things. Do you have any example script showing that the dictionaries aren't always destroyed by the garbage collector? Or that their lookups are slow compared to any alternative? The only inconvenience we have encountered is the lack of a 'clone' method.

Related

Use an SQL database as a word dictionary

I am creating a mobile game that takes words from users and then validates them to see if they are valid words in the English dictionary. I have created a similar game in the past using a dictionary that I loaded into the game's local memory.
The problem with that approach was that I would often need to update the dictionary with new words. Since the dictionary was in memory, adding new words required me to completely update the app. If I were to use an SQL database as the dictionary, I could add words very easily without having to update the app or rely on users to go and download the new update.
My question is: is there anything wrong with this approach (design- or performance-wise)? I have not seen something like this being done before. Also, I don't need definitions; I just need to make sure that the word is a valid English word.
If this is bad design, are there any better alternatives? Or am I better off just dealing with the in-memory dictionary?
An SQL database seems overkill. Have you looked at a key-value store like Berkeley DB?
The answer depends to a large extent on the overhead of the database for your application. It may take a lot of processing power and memory for adding a small amount of functionality.
If you are already using a file based approach, perhaps the simplest solution is to periodically poll the file to check for updates (size or modify time). When one is found, load it into memory.
The database would be valuable in an environment where the data is too big to fit in memory, because databases do a good job managing memory and disk space.

Can Parallel.ForEach be used safely with CloudTableQuery

I have a reasonable number of records in an Azure Table that I'm attempting to do some one time data encryption on. I thought that I could speed things up by using a Parallel.ForEach. Also because there are more than 1K records and I don't want to mess around with continuation tokens myself I'm using a CloudTableQuery to get my enumerator.
My problem is that some of my records have been double encrypted and I realised that I'm not sure how thread safe the enumerator returned by CloudTableQuery.Execute() is. Has anyone else out there had any experience with this combination?
I would be willing to bet that the enumerator returned by Execute being thread-safe is highly unlikely. That said, this sounds like yet another case for the producer-consumer pattern.
In your specific scenario I would have the original thread that called Execute read the results off sequentially and stuff them into a BlockingCollection<T>. Before you start doing that, though, you want to start a separate Task that will control the consumption of those items using Parallel.ForEach. Now, you will probably also want to look into using the GetConsumingPartitioner method of the ParallelExtensions library in order to be most efficient, since the default partitioner will create more overhead than you want in this case. You can read more about this in this blog post.
An added bonus of using BlockingCollection<T> over a raw ConcurrentQueue<T> is that it offers the ability to set bounds, which can help block the producer from adding more items to the collection than the consumers can keep up with. You will of course need to do some performance testing to find the sweet spot for your application.
Despite my best efforts I've been unable to replicate my original problem. My conclusion is therefore that it is perfectly OK to use Parallel.ForEach loops with CloudTableQuery.Execute().

how to create a system-wide independent universal counter object primarily for Database keys?

I would like to create/use a system-wide independent universal 'counter object' that can be called via COM in a thread-safe manner.
The counter object will be passed an ID to identify which counter to return, handle the counting, 'persist' the count (occasionally), have reasonable performance (as fast as possible), perhaps capable of 1000 counts per second or better (1 ms), and be accessible cross-process/out-of-process. The current count must be persisted between object restarts/shutdowns.
The counter object is likely to be a 'singleton' type object implemented in some form of free-threaded dictionary, containing maybe 10 counters (perhaps 50 max). The count needs to be monotonic and consistent (i.e. guaranteed unique, sequential values).
Each counter should have a few methods, like reset, inc, dec, set, clear, remove. As a luxury, I would like to have a variable increment (i.e. a 'step by' value). To support thread safety, perhaps some form of critical-section or mutex call. It just needs to return a long/4-byte signed integer.
I really want something that can be called from anywhere, including VBScript, so I figure COM is my preferred solution.
The primary use of this is for database keys. I am unable to use autoinc or guid type keys and have ruled out database-generated counting systems at this point.
I've spent days researching this and I have really struggled to find a solution. The best I can find is a free-threaded dictionary object that can be instantiated using COM+ from Motobit - it seems to offer all the 'basics' and I guess I could create some form of wrapper for this.
So, here are my questions:
1. Does such a 'general purpose' counter object already exist? Can you direct me to it? (MS did do an IIS/ASP object called 'MSWC.Counter', but this isn't a cross-process/out-of-process component and isn't thread-safe. If it were, it would do!)
2. What is the best way of creating such a component? (I'd prefer VB6 right now [don't ask!], but can do it in VB.NET 2005 if I had to.) I don't have the skills/knowledge/tools to use anything else.
I am desperate for a workable solution. I need specific guidance! If anybody can code something up for me, I am prepared to pay for it.
Update:
What's wrong with GUIDs? a) They're 16 bytes if I'm lucky (binary storage), 32+ bytes if I'm not (ANSI without formatting), or even worse (64 bytes as Unicode). b) I have a high-volume replicated app where the GUID is just too big (compared to the actual row data). c) There's the overhead of indexing and inserts. d) I want a readable number! I only need a 4-byte integer, so why not try and get that? I know you will say that disk space is cheap, but for my application the cost is in slow inserts, and GUIDs don't help (I have tried/tested this), so I would prefer not to use them if I have a choice.
Autonumbers/autoincs are evil: a) you don't get the value until after the insert, b) they're session-specific, c) they're easy to lose/screw up on a table alter, d) they're no good for multi-table inserts (it's not MS SQL Server); plus I have a need for counters outside my DB...
By the sound of it, what you're looking to create is an ActiveX EXE. It runs in its own process but can be accessed from any other process by instantiating an object from it as though it were just another COM object. It handles all the marshalling necessary to sync its internal thread with the threads of any process calling it. Since all you're planning on using is integers, there's no need to worry about the thread safety of objects passed between the threads.
More than likely you can use the MSWC.Counter object within that ActiveX EXE and let it do the counter work.
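From ASP/VBScript such a component would then be consumed like any other COM object. A hypothetical sketch (the MyCounters.Counter ProgID and the Inc method are placeholders for whatever you actually expose from the ActiveX EXE):

' Hypothetical usage from an ASP page once the ActiveX EXE is registered.
Dim ctr, nextId
Set ctr = Server.CreateObject("MyCounters.Counter")   ' ProgID is illustrative
nextId = ctr.Inc("InvoiceID")                         ' next value for that counter
Set ctr = Nothing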
A database engine is already very good at generating unique primary key values for a database table, either by marking the column auto-increment or by using a GUID. Trying to create your own is a grave mistake. System-wide is just not wide enough; it fails miserably when your app grows and more than one machine starts using the database.
Nevertheless, you can get what you want in VB6 by creating a COM server. It's been too long, I've forgotten the exact names of the project options, something resembling "single use".
I have implemented a similar solution as a REST web service, accessible from any technology that supports HTTP.
It's a simple C# backend implementation using a singleton pattern, and it scales nicely under IIS.
The whole thing sounds like a twisted idea, so why shouldn't I add another twisted one? :P
Host an old-skool ASP page.
You can use Application.Lock with a counter then, just like in the sample.
Added benefit: use it from any platform/language (e.g. other HTML pages with XMLHttpRequest :).
If you save the value to a file at, say, every 100th request, you do not even have to worry about IIS resets.
Just set the starting value to the last saved value + 100 in Application_OnStart. :P
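A minimal sketch of such a page, assuming the approach above (the file name, the "counter_" key prefix and the persist-every-100 threshold are all just placeholders):

<%
' counter.asp -- returns the next value for the counter named in ?id=...
' Application.Lock serializes access, so the increment is atomic
' within this IIS application.
Dim counterId, nextValue
counterId = Request.QueryString("id")

Application.Lock
nextValue = Application("counter_" & counterId) + 1
Application("counter_" & counterId) = nextValue

' Persist roughly every 100th value so an IIS reset loses at most one
' "window"; Application_OnStart would read this file back and add 100.
If nextValue Mod 100 = 0 Then
    Dim fso, f
    Set fso = Server.CreateObject("Scripting.FileSystemObject")
    Set f = fso.CreateTextFile(Server.MapPath("counter_" & counterId & ".txt"), True)
    f.WriteLine nextValue
    f.Close
End If
Application.Unlock

Response.Write nextValue
%>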

Most optimized way to store crawler states?

I'm currently writing a web crawler (using the python framework scrapy).
Recently I had to implement a pause/resume system.
The solution I implemented is of the simplest kind and, basically, stores links when they get scheduled, and marks them as 'processed' once they actually are.
Thus, I'm able to fetch those links when resuming the spider (obviously there is a little bit more stored than just a URL: the depth value, the domain the link belongs to, etc.), and so far everything works well.
Right now, I've just been using a MySQL table to handle those storage actions, mostly for fast prototyping.
Now I'd like to know how I could optimize this, since I believe a database shouldn't be the only option available here. By optimize, I mean using a very simple and lightweight system that can still handle a large amount of data written in a short time.
For now, it should be able to handle the crawling of a few dozen domains, which means storing a few thousand links a second...
Thanks in advance for suggestions
The fastest way of persisting things is typically to just append them to a log -- such a totally sequential access pattern minimizes disk seeks, which are typically the largest part of the time costs for storage. Upon restarting, you re-read the log and rebuild the memory structures that you were also building on the fly as you were appending to the log in the first place.
Your specific application could be further optimized since it doesn't necessarily require 100% reliability -- if you miss writing a few entries due to a sudden crash, ah well, you'll just crawl them again. So, your log file can be buffered and doesn't need to be obsessively fsync'ed.
I imagine the search structure would also fit comfortably in memory (if it's only for a few dozen sites you could probably just keep a set with all their URLs, no need for bloom filters or anything fancy) -- if it didn't, you might have to keep in memory only a set of recent entries, and periodically dump that set to disk (e.g., merging all entries into a Berkeley DB file); but I'm not going into excruciating details about these options since it does not appear you will require them.
There was a talk at PyCon 2009 that you may find interesting, Precise state recovery and restart for data-analysis applications by Bill Gribble.
Another quick way to save your application state may be to use pickle to serialize your application state to disk.

Should I load everything in memory upon application start?

I'm using VB.Net, and I have a set of data which I have to be able to filter through fairly quickly. Basically, the program is like Google Suggest, but instead of a drop-down menu I'm using a listbox. When a user enters a word, I compare it using LINQ and filter the entries that contain the user's input. The data are all strings of variable length (from 0 to 200 characters, most around the 150-character mark), and I have 240,000+ of these strings and counting, all stored in an XML file.
A colleague of mine told me that loading all of that into memory (using VB.Net's XML serializer plus collections of strings/objects) is not practical and would slow the 'startup' time of the program. I haven't finished building the program yet and I'm having second thoughts about continuing down this path.
So, my question is: Should I continue with my current approach on the problem (which is load everything to memory on startup), or is there a better way of solving my dilemma?
If you want to reduce startup time and keeping the data in memory isn't a performance issue, then load it asynchronously. Although loading 240,000+ strings from an XML file and keeping them in memory doesn't sound like the greatest idea; a database would probably be the better approach, or at least some format like JSON that's faster to parse.
Depends on a number of things:
If
((you know the strings will not hugely increase in number) &&
 (you know the spec of the machines that will run your app) &&
 (you are able to test that the load time is *good enough* on the above spec))
{
    don't bother changing approach.
}
else
{
    change approach.
}
The alternative approach is obviously some kind of asynch lazy-load.
You're talking about loading roughly 36 MB of strings. While this isn't a daunting amount by any means (though you could probably load it faster by reading the XML yourself; I wouldn't go with the serialization engine if I were worried about performance), it's also non-trivial. You're looking at adding a couple of seconds to your startup time, assuming you don't do it asynchronously as Mircea suggests.
If you do do it asynchronously, you'll have to ensure that any UI process that relies on the data doesn't run until after it has loaded. That may be a difficult thing to ensure.
The question seems to imply an online application. A few suggestions if that is the case:
The data could/should be zipped. I suspect it would compress very nicely.
Maybe the data could be cached across multiple sessions, possibly delivered as HTML content with an appropriate cache expiry date. This would save reloading it systematically, and may be feasible if the data isn't updated frequently.
The suggestion feature could initially be disabled (i.e. show a "loading..." message while the application initializes the cache asynchronously). That way the application would be available quickly upon startup, even though the suggest feature might lag by up to, say, 30 seconds.
Edit: Independently of how the data gets downloaded and cached, I second Mircea Grelus's opinion that an XML file of this size is a poor substitute for a database.
It may not be a bad idea to load the XML into memory when the app starts up. But if you go this route I'd look into using the BackgroundWorker thread. The idea would be to load the XML into memory asynchronously so the UI is still responsive as this is going on. As far as the user is concerned the app shouldn't appear to start any slower, and yet once done the Google-suggest-like feature should be significantly faster.
I must say that even in memory this is an inherently inefficient operation since you have no advantage of using an index when querying an XML file in this way. This is something that would be 10X faster in SQL with full-text searching.
Of course XML has the advantage of being self-contained and requiring no additional components. And that makes it a decent choice for small desktop apps that query small amounts of data. Otherwise I would consider using a database for better performance.
You might be better served by using binary serialization rather than XML serialization to persist the data that your app reads on startup, particularly if you end up implementing a data structure that's faster to search than a StringCollection. You'd still maintain the XML version of the data somewhere, of course.
And by all means, use a BackgroundWorker to load the data asynchronously if that'll make your application feel more responsive.