I am building a web app with offline functionality. I am using a combination of webcache and PouchDB to achieve it.
Currently I am testing recovery mechanisms against DB corruption. My premise is that since PouchDB runs on the client, it is exposed to anyone who, by mistake or on purpose, could corrupt the DB; a bug or something similar could also corrupt it. If the DB gets corrupted and the web app does not detect and clean it up, the app will never work correctly.
The test is quite simple:
- Create PouchDB:
var dbOptions = {
    auto_compaction: false,
    cache: false
};
var db = new PouchDB('myDB', dbOptions);
- With Developer Tools, delete part of the database.
- On loading, the application tries to read all documents:
db.allDocs({include_docs: true}, function (_err, _response) {
    // (certain code here)
});
It is at this point that "Uncaught TypeError: Cannot set property '_rev' of undefined" is thrown. I tried catching the exception and using the promise provided by PouchDB, but neither worked.
Has anyone run into a similar problem? How did you solve it?
EDIT:
When PouchDB returns a 500 Internal Server Error, how is the application supposed to recover from it? I tried to destroy the database:
db.destroy(function (err, info) { console.log(err || info); });
but it does not work either; it returns a 500 Internal Server Error as well.
It indeed sounds like your database got corrupted. Sorry about that; we try to write bulletproof code, but since we're working against the WebSQL/IndexedDB APIs, there's always the possibility that something goes wrong at that interface, the browser crashes, lightning strikes your computer, etc.
500 errors indicate an internal PouchDB error, so you're not supposed to recover from them. Probably the best way to protect against corruption like that is just to set up continual sync with a CouchDB server (kind of the point of PouchDB anyway). CouchDB is a full database implemented from top to bottom and is very robust – since it uses append-only database files, your database can never get corrupted. So if you use continuous sync, you can always delete the PouchDB database and recover from CouchDB.
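For illustration, a minimal sketch of that recovery path, assuming a recent PouchDB where destroy() returns a promise; the remote CouchDB URL and database names are placeholders:

var remoteUrl = 'http://localhost:5984/mydb'; // placeholder CouchDB URL
var db = new PouchDB('myDB');
var sync = PouchDB.sync(db, remoteUrl, {live: true, retry: true});

function recoverFromCorruption() {
    sync.cancel(); // stop replication before dropping the local database
    // Note: if destroy() itself fails with a 500, deleting the underlying
    // IndexedDB/WebSQL store via the browser APIs may be the only option.
    return db.destroy().then(function () {
        db = new PouchDB('myDB');
        // rebuild the local copy entirely from CouchDB
        sync = PouchDB.sync(db, remoteUrl, {live: true, retry: true});
    });
}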
That being said, if you could let us know which version of PouchDB you're running, which browser you saw this on, or even a code snippet to reproduce, that would be really helpful. If you're using Firefox, you can also send us the storage files themselves for IDB by following the instructions here to find the Profile folder and then sending us the contents of the storage/persistent/<my_site>/idb folder. Thanks!
I got this error while adding a new schema to my RxDB database. It turned out I had included the primary key, and some wrong property names, in the encrypted fields. I removed the primary key and used the proper names, and it worked fine after that.
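For reference, a rough sketch of the corrected shape, assuming the older RxDB schema style where encrypted fields are listed by name (the field names here are made up):

const mySchema = {
    title: 'example schema',
    version: 0,
    type: 'object',
    properties: {
        id: {type: 'string', primary: true}, // primary key: must not appear in 'encrypted'
        secret: {type: 'string'}
    },
    encrypted: ['secret'] // only real, non-primary property names belong here
};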
Related
I am having a very hard time making RavenFS behave properly and was hoping that I could get some help.
I'm running into two separate issues: one where uploading files to RavenFS while using an embedded DB inside a service causes RavenDB to fall over, and another where synchronizing two instances set up in the same way makes the destination server fall over.
I have done my best to document this... Code and steps to reproduce these issues are located here (https://github.com/punkcoder/RavenFSFileUploadAndSyncIssue), and a video is located here (https://youtu.be/fZEvJo_UVpc). I looked for these issues in the issue tracker and didn't find anything that looked directly related, but I may have missed something.
The solution to this problem was to remove Raven from the project and replace it with MongoDB. Binary storage in Mongo can be done on the record without issue.
I'm using MigratorDotNet to manage Rails-style migrations for my web app. I have a workflow where, if I delete all the tables in the database, I can access an installation view that will run MigratorDotNet and create all the necessary tables.
This works locally. For some reason, when I upload my code to my Arvixe hosting, the migrations just never run. I get this odd error:
There is already an object named 'SchemaInfo' in the database.
This is odd because, prior to running migrations, I manually deleted all the tables in the database (to make sure it wasn't left over from a previous install).
My code essentially boils down to:
new Migrator.Migrator("SqlServer", connectionString.ToString(), migrationsAssembly).MigrateToLastVersion();
I've already verified by logging that the connection string is correct (production/hosting settings), and the assembly is correctly loaded (name and version).
Works locally, but not on Arvixe. How do I troubleshoot this?
This is a dark day.
It turns out (oddly) that the root cause was that my hosting company used a schema other than dbo for my database. Because of this, the error message I saw (SchemaInfo already exists) was talking about their table.
My solution, unfortunately, was to rip out MigratorDotNet and go with FluentMigrator instead. Not only did this solve the problem, but it also gave me a more intelligible error message (one referring to the schema names).
While it doesn't seem possible to auto-set the schema, and while I need to switch the schema on my dev vs. production machine, it's still a solvable problem (and a better API, IMO). I googled, but did not find any way to change the default schema in MigratorDotNet.
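For illustration, a minimal FluentMigrator migration that names the schema explicitly might look like the sketch below; the schema constant and the table/column names are made up, and in practice the schema value would be switched between dev and production via configuration:

using FluentMigrator;

[Migration(1)]
public class CreateUsersTable : Migration
{
    // Hypothetical: "dbo" locally, the hosting company's schema in production.
    private const string Schema = "dbo";

    public override void Up()
    {
        Create.Table("Users").InSchema(Schema)
            .WithColumn("Id").AsInt32().PrimaryKey().Identity()
            .WithColumn("Name").AsString(100).NotNullable();
    }

    public override void Down()
    {
        Delete.Table("Users").InSchema(Schema);
    }
}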
I'm sorry for the issues you were having. On shared hosting, unfortunately, the only way we may be able to change the schema is manually. If you are still looking for a solution that requires our assistance, please forward your ticket ID to qa .at. arvixe.com as well as arvand .at. arvixe.com and we can look into the best way to resolve this.
I have set up RavenDB for evaluation. I wrote some code which pushed some documents into it. I then have a website which renders those documents.
Throughout the day, I used the Raven Studio to modify some text in those documents, so that I could see the changes come through in my web site.
Problem: It seems that after going home for the night, when I come in the next day my database has changed - my documents have reverted to the 'pre-changed' versions... what's going on??
I've looked through the Raven console output, and there were no update commands issued on my developer machine overnight (nor would I expect there to be!!)
Note: this is just running on my development machine.
As far as I know, RavenDB has no code in it that would automatically undo committed write operations, and honestly, that would really scare me. Altogether this sounds really weird and I can't think of a scenario where it could actually happen. I suggest you send the log files to RavenDB support if it happens again, because this would be a really serious issue.
My colleague had this very problem with updates being reverted. The update we made was to add a property, and also a document-specific value for this property, to all the documents. We called SaveConfiguration() and saw the change being made in Raven Studio. A while later some of the documents had lost their new property.
I decided to turn on logging and therefore added an NLog.config file; to get the logging started I touched the web.config. This of course restarted the application, and voilà, the updates appeared in Raven Studio again.
After a while they disappeared from Raven Studio, so I assumed that this was a Studio problem. I therefore tried to retrieve the objects from the database in a test controller; unfortunately, the objects were lacking the property value there too, so it wasn't just a Studio problem.
With logging turned on, we updated the documents of the specific type again, and according to the logs and also the Studio we actually did update the documents. Not long thereafter the documents reverted, losing their added property yet again (my colleague started crying at this point - true story).
Later I came to realize that this was all because our live web application still had the old version of the object. When a document was read in the web application, the data came back without the extra property. Because of this, our DocumentSession thought that the object had changed (in all fairness), so when we called SaveChanges even these objects were written to the database - without their extra property.
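To illustrate that conclusion, here is a rough sketch of how the old class on the live site could cause it; the class, property, and document id are made up:

using Raven.Client.Document;

// Old version of the class, still deployed on the live site;
// the newly added property does not exist here.
public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public static class Repro
{
    public static void TouchProduct(DocumentStore store)
    {
        using (var session = store.OpenSession())
        {
            // Deserialized into the old class, so the extra property is lost.
            var product = session.Load<Product>("products/1");

            // The session sees the entity as changed and writes it back
            // on SaveChanges - without the extra property.
            session.SaveChanges();
        }
    }
}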
Is my conclusion correct? What is the solution to this problem? I'm thinking CQRS, because then we will never call "SaveChanges()" on the DocumentSession for reads.
Adam,
Just making sure, did you call SaveChanges() after you made your modifications?
There is absolutely nothing in RavenDB that would cause this behavior.
We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when the admin site updates something, the customer-facing site is updated.
It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable local cache to get better performance; however, it doesn't seem to be working.
We have enabled it like this...
bool UseLocalCache = true; // enable the local cache
int LocalCacheObjectCount = int.MaxValue;
TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3);
DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased;

if (UseLocalCache)
{
    configuration.LocalCacheProperties =
        new DataCacheLocalCacheProperties(
            LocalCacheObjectCount,
            LocalCacheDefaultTimeout,
            LocalCacheInvalidationPolicy
        );
    // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300));
}
Initially we tried using a timeout invalidation policy (3 minutes) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but is always missing).
The cache.GetType().Name returns "LocalCache" - so the factory has made a local cache.
Running "Get-Cache-Statistics MyCache" in PS on my dev environment (asp.net app running local from vs2008, cache cluster running on a seperate w2k8 machine) show a handful of Request Counts. However, on the Production environment, the Request Count increases dramaticaly.
We tried following the method here to se the cache cliebt-server traffic... http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it - i.e no loggin either.
I cant find anything in SO or Google.
Have we done something wrong? Have we got a screwy install of AppFabric - we installed it via WebPlatform Installer - I think?
(note: the IIS box running ASp.net isnt in yhe cluster - it is just the client).
Any insights greatfully received!
Which DataCache methods are you using to read from the cache? Several of the DataCache methods will always make a hit against the server regardless of local cache being configured. You pretty much have to make sure you only use Get if you want the local cache to be leveraged.
This is one of my biggest nits with AppFabric Caching. They don't explain any of this to you, so when you begin to rely on local caching you fall into these little pitfalls: you don't think you're paying a penalty for talking to the service, transferring data over the wire, and deserializing objects, but you are.
The worst thing is, I could understand having to talk to the service to make sure the local cache represents the latest data. I can even understand transferring the data back so that multiple calls are not made. What I can not understand for the life of me though is that even if the instance in the local cache is verified to still be the current version that came back from the cache, they still deserialize the object from the wire rather than just returning the instance that's in memory already. If your objects are large/complex this can hurt a lot.
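For illustration, a minimal sketch of the Get-only pattern described above; the cache name and key are placeholders:

using Microsoft.ApplicationServer.Caching;

class LocalCacheRead
{
    static void Main()
    {
        // Assumes local cache is enabled on the client configuration
        // (either in the dataCacheClient config section or programmatically,
        // as in the question above).
        var factory = new DataCacheFactory();
        DataCache cache = factory.GetCache("MyCache"); // placeholder cache name

        // Get() can be served from the local cache once the item has been
        // fetched; version-checking reads such as GetIfNewer go back to the
        // cluster by design.
        var value = cache.Get("some-key") as string;   // placeholder key
        System.Console.WriteLine(value);
    }
}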
After days and days of looking into why we got so many local cache misses, we finally solved it:
There is a bug with local cache in AppFabric v1.1 that is fixed in CU4; see http://support2.microsoft.com/kb/2800726/en-us
Make sure that the Microsoft.ApplicationServer.Caching.Client.dll used by your application is also updated. We had CU4 installed on the machine but got the Client.dll without CU4 from a NuGet package in our application. In our case a simple NuGet package update made everything work.
After installing CU4 and making sure that the Client.dll was also updated, we reduced our reads against the AppFabric host by a lot, due to local cache hits increasing. Yay!
Have you tried using an NHibernate profiler? http://nhprof.com/
There is also this:
http://mdcadmintool.codeplex.com/
It's a nice way to manage and view the cache.
Both of these may help in debugging the issue.
I'm trying to track down a problem in our test environment. Previously it was set to use the InProc session state type, but I've added the SQLServer type for one specific web app. I did this because we use the SQLServer type in our production environment and I want our test environment to match as closely as possible.
However, after changing it to SQLServer I do not get any errors when trying to store non-serializable data in session, as I would expect. It works just fine, even though I would think it shouldn't. I'm a relative newbie when it comes to configuring this, but from the various tutorials I googled, I thought I had covered all the bases.
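For reference, the change amounts to something like the following web.config sketch; the connection string is a placeholder:

<system.web>
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=myServer;Integrated Security=SSPI;"
                timeout="20" />
</system.web>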
I was wondering if there are any code snippets to verify which session state type an application is actively using.
Thanks
Ok, found it by accessing:
System.Web.SessionState.HttpSessionState.Mode
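For example, a quick way to dump it from a page (a sketch; the page class name is arbitrary):

using System;
using System.Web.SessionState;

public partial class SessionModeCheck : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        SessionStateMode mode = Session.Mode; // InProc, StateServer, SQLServer or Custom
        Response.Write("Session state mode: " + mode);
    }
}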
Was also able to look at the tables in the ASPState database to see sessions being added/removed.
Apparently it was just the test code we were using - the code we expected to break - that was not behaving as we expected.