RavenDB programmatically empty database

Is there a command in RavenDB that empties everything in a store? Something like Store.Empty?

No, and there won't be one.
That would be a sure-fire way to cause accidental data loss.
You probably want to run in memory for tests, right? We have explicit support for that.
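For illustration, a minimal sketch of that, assuming the embedded client (Raven.Client.Embedded); the exact type names vary between client versions:

    // everything this store writes lives only in RAM and disappears when it is disposed,
    // so every test run starts from a genuinely empty database
    using (var store = new EmbeddableDocumentStore { RunInMemory = true })
    {
        store.Initialize();

        // ... run the code under test against `store` ...
    }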

Related

What kind of objects get serialized and why? When would I use this?

I understand what serialization is. I simply do not know when I would use it. I have seen the (discouraged) practice of storing session data in a database, and things like that, but other than that I do not know.
What kind of object state would I save in a database, the file system, or anything else that needs persistence? And why would I use it for a non-"permanent" reason?
I do not have a context per se. All I really do is client-server web apps. I may get to use a Java stack for it, but I'd really like to understand this part of things, should I need it.
I have asked similar questions. I'm just not understanding.
In a sentence: using a generic serialiser is a reasonable way to save stuff to disk or move it over a network without having to design a data format, write code that emits data in that format, and write a parser for that format (all error-prone) by hand.
Any time you want to persist an object (or object hierarchy) beyond its existence inside a single execution on a single machine, you are going to want to serialise and deserialise.
Some scenarios that come to mind:
Caching: when you want to offload in-memory objects to disk (the caching framework can serialise the object to disk)
For thick clients (either a desktop application or an app using RMI) you'll need to transfer objects from one JVM to another, and this is done by serialising them
I can't think of any other scenarios off the top of my head.
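To make the first point concrete, here is a hedged C# sketch using Json.NET (on a Java stack, java.io.Serializable or a library such as Jackson plays the same role); the Session class and file name are purely illustrative:

    using System;
    using System.IO;
    using Newtonsoft.Json;

    // serialise: the library decides the on-disk format, so there is no hand-written
    // format, emitter or parser to get wrong
    var json = JsonConvert.SerializeObject(new Session { UserId = "u42", LastSeen = DateTime.UtcNow });
    File.WriteAllText("session.json", json);

    // later, possibly in another process or on another machine: rebuild the object
    Session restored = JsonConvert.DeserializeObject<Session>(File.ReadAllText("session.json"));
    Console.WriteLine(restored.UserId);

    // an illustrative object whose state should outlive a single execution
    public class Session
    {
        public string UserId { get; set; }
        public DateTime LastSeen { get; set; }
    }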

Get my database under Version Control using a DVCS [Mercurial]

What would be the best approach for versioning my whole database?
Creating a file for each database object (table, view, procedure...), or rather having one file for all DDL scripts, with any new change put in a separate file?
What about handling changes made in a database manager tool?
I'd like to have a generic solution for any kind of RDBMS.
Are there any other options ?
I'm a huge VCS fan in general and a big Mercurial booster, but I really think you're going down the wrong path.
VCSs aren't just about iterative changes, the "what"; they're also about answering the "who", "when", and "why". For a database those answers are either a lot less interesting or hard to provide to the VCS. If you're doing nightly exports and commits, the "who" will always be "cron" and the "why" will always be "midnight".
The other thing modern VCSs do really well is helping you merge changes from multiple branches. That's less applicable in the database world. Very seldom do you say "I want this table structure, but this data", and if you do the text/diff merge isn't going to help you much.
The thing that does do "what" and "when" very well is an incremental backup system, and that's probably the better fit.
At work we use Tivoli and at home I use rdiff-backup and duplicity, but there are plenty of great options.
I guess my general rule of thumb is "if it was typed by hand by a human then it goes into source control, and if it was generated/exported then it goes in the incremental backups".
Certainly you can make this work, but I don't think it will buy you much over the more traditional backup solutions.
Have a look at this post
If you need a generic solution, put everything into scripts (simple text files) and keep them under a version control system (any VCS will do).
How you group similar database objects into scripts depends on your requirements.
For example, you might:
Store tables/indexes in one or several scripts.
Store each procedure in an individual script, or combine small procedures into one script.
However, remember one important thing with this approach: don't forget to update the scripts if you change a table/view/procedure directly in the database, and don't forget to create/recreate/compile your database objects in the database after changing the scripts.
SQL Source Control currently supports SVN and TFS, but Mercurial requests are increasing rapidly and we're hoping to have a story for this very soon.
We use UserVoice to measure demand, so please vote accordingly if you're interested in this: http://redgate.uservoice.com/forums/39019-sql-source-control

Programmatically check for Access database corruption?

Is there a way to programmatically check for database object corruption in Access 2003?
My development project has gotten complex enough that it's hard to manually check all the objects after a day of programming to see if some small control, form, report, query, or code object has been corrupted somehow. I already have the data split off into a separate SQL Database stored on another machine, and this project is merely a front-end application to work with the data.
Mostly an academic musing, as I just don't want to get this far and then have corruption put me back several weeks because some seldom-used object got corrupted way back when.
Any ideas out there? Thanks in advance for any pointers!
EDITED 12/03/2009 @ 11:51
Sadly, I can only accept one answer - though I got a few very good ones, thank you for all the pointers!
You might like to look at: Is it possible to programmatically detect corrupt Access 2007 database tables?
I am inclined to keep a copy of important databases at each compact & repair and to compare the new database against the previous one. You can also check for non-standard characters.
Neither Compact/Repair nor Decompile/Recompile catches all corruption problems, although you should be doing this anyway.
I use a function to export all Container Docs (and QueryDefs) using SaveAsText into a date/time stamped folder, and use it regularly throughout the day. If I suspect any corruption, I create a new mdb, and use LoadFromText to recreate the objects.
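The exporter itself is normally a few lines of VBA inside Access, but purely as a hedged sketch, the same SaveAsText/LoadFromText calls can also be driven from outside via COM interop; the paths and file naming here are illustrative:

    using System;
    using System.IO;
    using Access = Microsoft.Office.Interop.Access;

    var exportDir = Path.Combine(@"C:\backups", DateTime.Now.ToString("yyyyMMdd_HHmm"));
    Directory.CreateDirectory(exportDir);

    var app = new Access.Application();
    app.OpenCurrentDatabase(@"C:\dev\frontend.mdb");   // illustrative path to the front-end

    // dump every form as plain text; the same loop works for reports, macros and modules
    foreach (Access.AccessObject form in app.CurrentProject.AllForms)
        app.SaveAsText(Access.AcObjectType.acForm, form.Name,
            Path.Combine(exportDir, form.Name + ".frm.txt"));

    app.CloseCurrentDatabase();
    app.Quit();

LoadFromText with the same arguments is the mirror call for rebuilding an object in a fresh mdb.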
Proper compilation practices will prevent corruption of the VBA project (which is what you're talking about here).
That entails:
use OPTION EXPLICIT in all modules.
turn off COMPILE ON DEMAND in the VBE options.
compile your code regularly, while working.
periodically (e.g., once a day after a full day of coding) decompile and recompile the code.
If you do this, you'll never encounter this kind of corruption in the first place, so you won't need to test for it (which is impossible anyway).

LINQ doesn't recognize changes in the database

I did not find something similar so I have to ask:
I use LINQ to SQL and all was fine until I started to use stored procedures to update the database entries. (My stored procedure is something like: update all entries which have group id x.)
The update runs fine and the values in the database change. But the DataContext ignores these changes.
I have to say that the DataContext is a singleton, which I know is not common practice, but I have my reasons for doing it this way.
so
db.Refresh(System.Data.Linq.RefreshMode.OverwriteCurrentValues);
doesn't help.
Why doesn't it see the changes in the database?
What you are trying to do goes very much against how LinqToSql works.
Using a long lived DataContext is very difficult to do correctly, especially if you need to call stored procedures, where LinqToSql can't easily track the data changes.
Changes you make through the DataContext are generally tracked automatically, so the DataContext can properly manage its cache and keep track of changes being made to the database from that DataContext. That isn't always the case however. The DataContext doesn't (and can't easily) understand what your stored procedure is doing, so it has no idea how to keep its cache correct. At that point, after calling the stored procedure, your very best option is to get rid of that DataContext and create a new one. That effectively blows away your cache, which may or may not be a significant performance hit, but data integrity should be your primary concern.
If your Singleton DataContext isn't the only thing modifying the database (for example, your database could be modified by things like: triggers, batch processing, other applications, etc.), your DataContext may also contain inaccurate data in its cache, which is yet another reason to have a short lived DataContext.
So, while you could possibly succeed with a long lived Singleton DataContext, you will be fighting the system the entire way and the system is likely to win in the end.
You have to decide: How important is data integrity?
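Concretely, the "get rid of that DataContext" advice looks roughly like this; MyDataContext, UpdateGroupEntries and the Entries table are placeholders for whatever your generated context actually exposes:

    using System.Linq;

    var connectionString = "...";   // your real connection string
    var groupId = 42;

    // run the stored procedure with one short-lived context...
    using (var db = new MyDataContext(connectionString))
    {
        db.UpdateGroupEntries(groupId);   // hypothetical generated wrapper for the stored procedure
    }

    // ...then read with a brand-new context, so nothing stale is served from the old cache
    using (var db = new MyDataContext(connectionString))
    {
        var entries = db.Entries.Where(e => e.GroupId == groupId).ToList();
    }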
Because the DataContext is caching the values. Here's an article on how to clear the cache. But then you have the problem of implementing a notification system to know when to clear it.
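For reference, the trick usually shown for this is a reflection call to DataContext's non-public ClearCache method; a fragile, hedged sketch, since that method is an internal implementation detail and may not exist in every framework version:

    using System.Data.Linq;
    using System.Reflection;

    static void ClearLinqCache(DataContext db)
    {
        // relies on an internal member of System.Data.Linq - verify it on your framework version
        var clearCache = typeof(DataContext).GetMethod("ClearCache",
            BindingFlags.Instance | BindingFlags.NonPublic);
        clearCache?.Invoke(db, null);
    }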
Microsoft recommends that a DataContext should only be used for a single unit of work. Hanging onto it as a singleton probably isn't a good idea.

Is there a way of sharing a Core Data store between processes?

What am I trying to do?
A UI process that reads data from a Core Data store on disk. It wouldn't need to edit the data, just read and display the data.
A command line process that writes to the same data store as accessed by the UI.
Why?
So that the command line process can be running all the time but the user can quit the UI process and forget about the app until they need to look at the data it's captured.
What would be the simplest and most reliable way of achieving this?
What Have I Tried?
I've read up on sharing a data store between threads and implemented this once before, but I can't find anything in the docs or on the web indicating how to share a store between processes.
Is it as simple as pointing both processes at the same data store file? I've experimented with this briefly. It appeared to work OK, but I'm worried I might run into problems with locking etc when it's really put under stress.
Finally
I'd really appreciate someone giving me pointers on what direction to go with this. Thanks.
This might be one of those situations in which you'll simply have to Try It And See™.
As far as I can remember, SQLite (which is the data store you'll most likely want to be using) has built-in mechanisms for file locking and so on, so the integrity of the file is likely to be assured. If, on the other hand, you use the Core Data XML store, you might run into problems.
In other words; use the SQLite backing for your file, and you should likely be fine.
You can do exactly what you want. You probably want to use the SQLite store, otherwise saving and committing every time you want to sync out data will be horrifically slow. You just need some sort of IPC doorbell between the apps so that you can inform one app that it needs to recheck the persistent store on disk and merge in its data.
Apple documents using multiple persistent store coordinators as a valid option in Multi-Threading with Core Data (in "General Guidelines", point 2). That happens to be discussing completely parallel Core Data stacks in the same process, but it is valid if they are in completely separate address spaces as well.
Nearly two years on, and I've just found a much better way of doing this.
The answer seems to lie with Sync Services. I didn't even realise it existed! There's an excellent post about this at:
http://www.timisted.net/blog/archive/core-data-and-sync-services/
I've not tried this with my app yet, but it seems like an excellent way of sharing a core data store between two processes or applications.
If I experience any performance issues, I'll update this answer accordingly, but this seems like the Apple recommended way of doing it.
You need to re-think your architecture. If you want a daemon to own the data store, then have your GUI app connect to the daemon. Trying to share the data store is a can of worms you don't want to open.