Storage limit for IndexedDB on IE10

We are building a web app that stores lots of files as blobs with IndexedDB. If a user pushes our app to its maximum, we could store as much as 15 GB of files in IndexedDB.
We ran into a problem with IE10 that I strongly suspect is a quota issue.
After some files have been saved successfully, a new call to store.put(data, key); never completes.
Basically, the function is called, but neither a success event nor an error event ever fires.
If I look into IE10's IndexedDB folder, I see a handful of what look like temporary files (of 512 kB each) being created and removed indefinitely.
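For reference, here is roughly how the write is wired up (a minimal sketch; the store, handler and variable names are illustrative, not our exact code):

    // Minimal sketch of the failing write path (names are illustrative).
    var tx = db.transaction(['files'], 'readwrite');
    var store = tx.objectStore('files');

    var request = store.put(data, key);

    // Once the database has grown to ~250 MB, neither of these fires any more:
    request.onsuccess = function () { console.log('stored', key); };
    request.onerror = function (e) { console.error('put failed', e.target.error); };

    // The transaction-level events stay silent as well.
    tx.oncomplete = function () { console.log('transaction committed'); };
    tx.onabort = function (e) { console.error('transaction aborted', e.target.error); };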
When looking at the "Cache and Database" paramaters window, I see that my site's database has reached 250 MB.
Looking further, I found this blog entry http://msdnrss.thecoderblogs.com/2012/12/using-html5javascript-in-windows-store-apps-data-access-and-storage-mechanism-ii/ which incidently says that the storage limit for Windows Store apps is 250 MB.
I am not using any Windows Store mechanism, but I figured I could be victim of the same arbitrary limit.
So, my question is :
Is there any way to bypass this limit ? User is asked for permission to exceed a 10 MB limit, but I saw no question popping to the user when the 250 MB was reached.
Is there any other way to store more than 250 MB of data with IE10.
Thanks, I'll take any clues.

I'm afraid you can't. Setting the storage limit and asking the user to allow more space is the responsibility of the browser vendor, so I don't think the first option is applicable.
I know the user can allow a website to exceed a given limit (Internet Options > General > Browsing history > Settings > Caches and databases), but I don't know whether that overrides the 250 MB cap. It may be a hard-coded limit you simply can't exceed.
This limit is bound to the domain, meaning you can't get around it by creating multiple databases. The only workaround would be to store data on multiple domains, but then you can't access it across them. Also, as far as I can tell, the 250 MB limit applies to the IndexedDB API and the File API combined.
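Since IE10 exposes no API for querying the remaining quota, about the only mitigation is to do the bookkeeping yourself and stop (or warn the user) before you approach the cap. A rough sketch, with the store names and the threshold chosen purely for illustration:

    // Hypothetical bookkeeping: keep a running total of stored bytes in a 'meta' store
    // and refuse writes that would push the origin past a self-imposed ceiling.
    var SOFT_LIMIT = 240 * 1024 * 1024; // stay under IE10's apparent 250 MB cap

    function putWithQuotaCheck(db, key, blob, onDone, onRefused) {
      var tx = db.transaction(['files', 'meta'], 'readwrite');
      var meta = tx.objectStore('meta');

      meta.get('totalBytes').onsuccess = function (e) {
        var total = e.target.result || 0;
        if (total + blob.size > SOFT_LIMIT) {
          onRefused(total);               // caller can purge old files or warn the user
          return;
        }
        tx.objectStore('files').put(blob, key);
        meta.put(total + blob.size, 'totalBytes');
      };

      tx.oncomplete = function () { onDone(); };
      tx.onerror = function (e) { onRefused(e.target.error); };
    }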

Related

Static files as API GET targets

I'm creating a RESTful backend API for eventual use by a phone app, and am toying with the idea of making some of the API read functions nothing more than static files, created and periodically updated by my server-side code, that the app will simply GET directly.
Is this a good idea?
My hope is to significantly reduce the CPU and memory load on the server by not requiring any code to run at all for many of the API calls. However, there could be a huge number of these files (at least one per user of the phone app, which will be a public app listed in the app stores and which I naturally hope will get lots of downloads), and I'm wondering whether that alone will lead to the latency issues I'm trying to avoid.
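To make the idea concrete, the periodic export step could be as small as this Node.js sketch (the paths and field names are placeholders, not my actual code):

    // Hypothetical daily export: render each user's API response once and write it
    // where Apache can serve it as a plain static file.
    const fs = require('fs/promises');
    const path = require('path');

    const STATIC_ROOT = '/var/www/api-static/users'; // placeholder, served directly by Apache

    async function exportUser(user) {
      const payload = JSON.stringify({
        id: user.id,
        summary: user.summary,                  // whatever the app needs to GET
        updatedAt: new Date().toISOString()
      });
      await fs.writeFile(path.join(STATIC_ROOT, user.id + '.json'), payload);
    }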
Here are more details:
It's an Apache server
The hardware is a hosting provider's VPS with about 1 GB of memory and 20 GB of free disk space
The average file size (in terms of content, not disk footprint) will probably be < 1 KB
I imagine my server-side code might update a given user's data once a day or so at most.
The app will probably do GETs on these files just a few times a day. (There's no real-time interaction going on.)
I might password-protect the directory the files will be in at the .htaccess level. There's no personal or proprietary information in any of the files, so maybe I don't need to; but if I do, will that make a difference to the main question of feasibility and performance?
Thanks for any help you can give me.
This is generally a good thing to do: anything that can be static rather than dynamic is a win for performance and cost (it's why we do caching!). The main issue will be with authorization, which you'll still need to handle for each incoming request.
You might also want to consider using a cloud service for storage of the static data (e.g., Amazon S3 or Google Cloud Storage). There are neat ways to provide temporary authorized URLs that you can pass to users so that they can read the data for a short time and then must re-authorize to continue having access.
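For example, with the AWS SDK for JavaScript (v2) you can mint a short-lived signed URL on the server and hand it to the app; the bucket and key below are placeholders:

    // Sketch: generate a time-limited read URL for one user's static file.
    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    const url = s3.getSignedUrl('getObject', {
      Bucket: 'my-api-static-bucket',   // placeholder bucket name
      Key: 'users/123.json',            // placeholder object key
      Expires: 15 * 60                  // URL stays valid for 15 minutes
    });
    // Hand `url` to the phone app; once it expires the app must re-authorize.

Google Cloud Storage has an equivalent signed-URL mechanism.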

Avoiding Safari's repeated requests for IndexedDB permission beyond 50 MB

I'm trying to get Safari to run a series of tests (web-platform-tests) served from my local machine. The tests create a large amount of data in IndexedDB, for which Safari requires permission (the requests are for more than 50 MB), but approving the permission each time becomes far too cumbersome when cycling through hundreds of tests.
In Preferences->Privacy->Cookies and website data->Manage Website Data..., there is an entry, "Local documents on your computer (Databases)", apparently indicating the presence of this data, but it does not provide any configuration options; nor does Preferences->Privacy->Cookies and website data->Always allow avoid the prompting.
Is there any other configuration that would let me get around the need for manual permission? (I'm asking here instead of Super User because I don't know whether there might also be an API that can persistently overcome the limit.)
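For completeness, the only API-level candidate I'm aware of is the Storage API's persist() method, though I don't know whether any Safari version honours it for this prompt:

    // Guess: ask the browser to treat this origin's storage as persistent, which in
    // some browsers relaxes eviction and quota prompting. Support varies by browser.
    if (navigator.storage && navigator.storage.persist) {
      navigator.storage.persist().then(function (granted) {
        console.log('persistent storage granted:', granted);
      });
    }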

Microsoft Azure Blob Storage Upload Performance

I am running an Azure web role, which is storing very small blobs into Azure storage. (Blob upload is being done from the server, not from the browser.) I have searched stack overflow and the rest of the internet for tips on optimizing blob storage performance, and I believe I've checked and implemented all of the usual suspects: uploading async, allowing unlimited outgoing web connections (which now seems to be the default setting on web roles and no longer needs to be explicitly set in web.config or in code).
Tweaking the number of concurrent uploads I allow makes some difference, but regardless of what I've tried, I seem to max out at around 1,000 blob uploads per second. This is when running in the Azure web role, in the same region as the storage account (East US). My rate when running this from home over a good internet connection isn't much lower, ~700 blobs/sec, which suggests that it's not network latency limiting the rate but the actual processing time of the storage service.
I wouldn't normally consider these rates horrible for this kind of a service, but I've read that Microsoft boasts a rate of ~20,000 storage transactions per second, so I've been a little disappointed with these results.
I'd like to get some feedback from those who have really tried to push the limits of blob storage. Does ~1,000 small uploads per second sound about right, or is there something else I should be doing to improve this? I'll post the code if I need to, but I'd rather not receive speculative answers; I'd like to hear from developers who can either confirm that my results are reasonable or report that they've seen much higher throughput.
I should add that I'm currently running this in a small web role. I've tried it also in a medium web role, and didn't see any significant difference.
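For reference, by "tweaking the number of concurrent uploads" I mean the usual bounded-concurrency pattern. My actual code runs on the web role and isn't shown here, so treat this Node.js sketch (using the @azure/storage-blob package, with placeholder names) purely as an illustration of the shape:

    // Illustration only: upload many small blobs with a fixed number of concurrent workers.
    const { BlobServiceClient } = require('@azure/storage-blob');

    async function uploadAll(connectionString, containerName, items, concurrency) {
      const container = BlobServiceClient
        .fromConnectionString(connectionString)
        .getContainerClient(containerName);

      let next = 0;
      async function worker() {
        while (next < items.length) {
          const item = items[next++];   // single-threaded event loop, so no race here
          const blob = container.getBlockBlobClient(item.name);
          await blob.upload(item.content, Buffer.byteLength(item.content));
        }
      }

      // e.g. concurrency = 64 keeps 64 uploads in flight at once
      await Promise.all(Array.from({ length: concurrency }, worker));
    }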
EDIT:
After a few days of development and testing, my upload rate seemed to increase suddenly. Not by a lot, but maybe by another ~200 per second. Looking around the web, I noticed a comment in the Azure documentation stating that "A storage account scales automatically as usage increases." So I'm wondering whether it really is capable of much higher rates but won't automatically scale up until it sees a sustained period of high volume. Some confirmation of that would also be greatly appreciated.
Depending on how small your requests are, the problem might be the one described in "Nagle's Algorithm is Not Friendly towards Small Requests", although I usually see it with queue/table operations. Try disabling Nagle's algorithm and let me know whether that makes any difference. As an FYI, you have to disable it prior to establishing the connection, otherwise the change will not take effect.
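In the .NET storage client that typically means setting ServicePointManager.UseNagleAlgorithm = false at application startup, before the first request goes out. Purely to illustrate the timing requirement at the socket level, here is a Node.js sketch (the host name and path are placeholders):

    // The no-delay flag belongs on the socket before any requests flow over it, not after.
    const https = require('https');

    const req = https.request({
      host: 'myaccount.blob.core.windows.net',   // placeholder storage endpoint
      path: '/container/small-blob',             // placeholder path
      method: 'PUT'
    });
    req.setNoDelay(true);    // disable Nagle for the socket this request will use
    req.end('tiny payload'); // small writes now go out immediately instead of being batched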
Jason

Restrictions on saving data to local storage in Windows 8 Metro-style apps

I was wondering if there are storage limits for Metro-style apps. I am not talking about RAM; I already found out that has a limit of 150 MB, right?
I want to know if there is a restriction on using local storage (the hard disk). I am creating a database to save a lot of data. Can I do so until the device runs out of storage? (I am not actually planning to do that, but occupying something like 80 MB would be nice.)
Thanks in advance.
There is no limit on local data.
Local application data should be used for any information that needs to be preserved between application sessions and is not suitable type or size wise, for roaming application data. Data that is not applicable on other devices should be stored here as well. There are no general size restriction on local data stored. Location is available via the localFolder property. Use the local app data store for data that it does not make sense to roam and for large data sets.
From here. There is a limit on roaming data; the same document covers that.
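If the app is written in JavaScript, writing into the local folder looks roughly like this (a minimal sketch; the file name and payload are placeholders):

    // WinJS sketch: persist a payload into local app data, which has no fixed size cap.
    var localFolder = Windows.Storage.ApplicationData.current.localFolder;

    localFolder.createFileAsync('appdata.db', Windows.Storage.CreationCollisionOption.replaceExisting)
        .then(function (file) {
            var payload = JSON.stringify({ /* ...your data set... */ });
            return Windows.Storage.FileIO.writeTextAsync(file, payload);
        })
        .then(function () {
            console.log('saved to ' + localFolder.path);
        });

For C#/XAML apps the same folder is reachable via ApplicationData.Current.LocalFolder.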

How do I stop spammers from using my site's FTP, bandwidth and MySQL?

THE PROBLEM
My hosting provider gave me an ultimatum (3 business days):
"We regret to say that your database is currently consuming excessive resources on our servers, which degrades performance for the other database-driven sites hosted on the same server. The database/tables/queries statistics are provided below:
Avg queries / logged / killed
79500 / 0 / 0
There are several reasons why the number of queries can increase. Unused plugins will increase the number of queries. If the plugins are not causing the issue, you can block the IP addresses of the spammers, which will reduce the queries. You can also look for any spam content in the database and clean it up.
You need to check the top hitters in the Stats page. Depending on the bandwidth used, the top hits and the IPs, you need to take specific action on them to reduce the database queries. You should block the unknown robots (identified by 'bot *'), since these bots are scraping content from your website, spamming your blog comments, harvesting email addresses, sniffing for security holes in your scripts, and trying to use your mail form scripts as relays to send spam email. The .htaccess Editor tool is available to block IP addresses."
THE BACKGROUND
The site was built 100% by us in VB.NET and MySQL on a Windows platform (except for the Snitz Forum). The only place we received spam was a comment form, which now has a captcha. We're talking about more than 4,000 files across tools, articles, forums, etc., for a total of 19 GB of space. Just uploading it takes me two weeks.
STATISTICS OF ROBOTS
Awstats tells us for the month of February 2012:
ROBOT AND SPIDER
Googlebot: 2,572,945 accesses (+303), 5.35 GB
Unknown robot (identified by 'bot *'): 772,520 accesses (+2,740), 259.55 MB
BaiDuSpider: 96,639 accesses (+95), 320.02 MB
Google AdSense: 35,907 accesses, 486.16 MB
MJ12bot: 33,567 accesses (+1,208), 844.52 MB
Yandex bot: 18,876 accesses (+104), 433.84 MB
[...]
STATISTICS OF IP
41.82.76.159: 11,681 pages, 12,078 accesses, 581.68 MB
87.1.153.254: 9,807 pages, 10,734 accesses, 788.55 MB
[...]
Other: 249,561 pages, 4,055,612 accesses, 59.29 GB
THE SITUATION
Help! I don't know how to block IPs with .htaccess, and I don't even know which IPs to block! I'm not sure! Awstats stops short of the past 4 days!
I have already tried changing the FTP and account passwords in the past; nothing changed! I don't think we are being targeted specifically; I think these are generic attacks aimed at obtaining backlinks and redirects (which often don't even work)!
This isn't really an .htaccess issue. Look at your own stats: you've had ~4M hits generating some 12 KB per hit in the last 4 days. I ran the OpenOffice.org user forums for 5 years, and this sort of access rate can be typical for a busy forum. I used to run on a dedicated quad-core box, but I migrated it to a modern single-core VM and, once tuned, it handled this sort of load.
The relative bot volumetrics are also not surprising as a percentage of these volumes, nor are the 75K D/B queries.
I think what your hosting provider is pointing out is that you are using an unacceptable amount of system (D/B) resources for your type of account. You either need to upgrade your hosting plan or examine how you can optimise your database use, e.g. are your tables properly indexed, and do you routinely run a Check/Analyze/Optimize pass over all tables? If not, then you should!
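To make that maintenance pass concrete (the site itself is VB.NET, so treat this purely as an illustration of the routine, not code to drop in), a small Node.js script using the mysql2 package could run it nightly; the credentials and database name are placeholders:

    // Illustrative maintenance pass: CHECK, ANALYZE and OPTIMIZE every table in the schema.
    const mysql = require('mysql2/promise');

    async function maintain() {
      const conn = await mysql.createConnection({
        host: 'localhost', user: 'maint', password: 'placeholder', database: 'mysite'
      });

      const [tables] = await conn.query('SHOW TABLES');
      for (const row of tables) {
        const name = Object.values(row)[0];       // e.g. { Tables_in_mysite: 'forum_posts' }
        await conn.query('CHECK TABLE ??', [name]);
        await conn.query('ANALYZE TABLE ??', [name]);
        await conn.query('OPTIMIZE TABLE ??', [name]);
      }
      await conn.end();
    }

    maintain();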
It may well be that spammers are exploiting your forum for SPAM link posts, but you need to look at the content in the first instance to see if this is the case.