I have a Windows service which uses EF Core to save data into a SQL database. The main requirement for this service is that it keeps working even if the database is unavailable. So, if the service needs to insert data into the database and the database is not available, it should temporarily save the data somewhere else and insert it later (when the database is available again).
What would be the best solution for this? I'm thinking about a queue of queries that weren't executed due to database unavailability. Are there any existing solutions for this type of problem? Does EF Core already have some kind of functionality to store data in a file and insert it into the database once it becomes available again? Is there maybe another library to achieve this?
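To make the idea concrete, here is a very rough sketch of the kind of fallback queue I have in mind (nothing here is an existing EF Core feature; the class name and file layout are just placeholders):

    using System;
    using System.IO;
    using System.Linq;
    using System.Text.Json;
    using System.Threading.Tasks;
    using Microsoft.EntityFrameworkCore;

    // Very rough sketch, not an existing library: when SaveChanges fails because
    // the database is unreachable, serialize the entity to a local file queue and
    // replay the queue later from a background timer/loop.
    public class PendingWritesQueue
    {
        private readonly string _folder;

        public PendingWritesQueue(string folder)
        {
            _folder = folder;
            Directory.CreateDirectory(folder);   // make sure the queue folder exists
        }

        // Call this when the insert fails with a connectivity error.
        public void Enqueue<T>(T entity)
        {
            var file = Path.Combine(_folder, $"{DateTime.UtcNow.Ticks}_{typeof(T).Name}.json");
            File.WriteAllText(file, JsonSerializer.Serialize(entity));
        }

        // Call this periodically; it throws again if the database is still down.
        public async Task ReplayAsync<T>(DbContext db) where T : class
        {
            var files = Directory.GetFiles(_folder, $"*_{typeof(T).Name}.json").OrderBy(f => f);
            foreach (var file in files)
            {
                var entity = JsonSerializer.Deserialize<T>(File.ReadAllText(file));
                if (entity == null) { File.Delete(file); continue; }

                db.Set<T>().Add(entity);
                await db.SaveChangesAsync();   // fails here if the DB is still unavailable
                File.Delete(file);             // only delete after a successful save
            }
        }
    }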
Thanks for your help!
Related
Basically we are doing a revamp of an old website, so we've set up a new application/DB with a proper relational schema, but we need to bring data over from the old system (invalid referential integrity, no FKs, sometimes no PKs) as each of us works on a module.
Looking at using EF code first as there will be a few of us working on the system.
Will it be best to write SQL scripts to bring over all these tables to fit the new schema, or is there a good way to do this in Code First? Like seeding?
Or will this case be suited to going Database First approach where we just bring over data using Generate Script functionality in SSMS?
Would like to hear if anyone's done similar work. Thanks in advance.
A one-time migration from the old schema to the new schema would normally be done with SQL scripts, not with EF. You have to write the transformation logic either way, and with EF you additionally have to create a DbContext model for the old schema, and EF is slower at performing bulk operations.
I had a similar task: I had to create a completely new web application (App B) with a new database (Db B) based on old ones (App A and Db A). The relations and general structure were a real mess in Db A, so I decided it was better to generate Db B with a code-first approach, and then I created another simple application that did all the migration work. It had two contexts, one for Db A and one for Db B, and all the migration logic lived there. It worked like a charm.
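To illustrate the shape of it (the context and entity names below are made up, not my real ones):

    // Stripped-down shape of the migration app: one context per database, read
    // from the old one, transform, write to the new one. All names are made up.
    public static void MigrateCustomers()
    {
        using (var oldDb = new DbAContext())     // hypothetical context over Db A
        using (var newDb = new DbBContext())     // hypothetical context over Db B
        {
            var oldCustomers = oldDb.Customers.AsNoTracking().ToList();

            foreach (var old in oldCustomers)
            {
                newDb.Customers.Add(new Customer
                {
                    Name  = (old.CustName ?? "").Trim(),   // clean up while mapping
                    Email = old.Email ?? ""
                });
            }

            newDb.SaveChanges();
        }
    }

For large tables you would probably want to read and save in batches rather than materializing everything at once.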
As for seeding: it would have been nicer to do it with a DB seeder in App B, but there was such a huge amount of data that it wasn't really feasible.
I am working with a client that has data in an MSSQL database. I only have read access to a remote ODBC connection and cannot modify the database in any form.
I'd like to replicate a subset of the data locally in an open-source alternative, syncing once per day or so. This is largely to eliminate reads against the data during peak hours. The local data will be used in a Rails 4 application. Note that syncing only needs to be one-way, as I don't have write access.
How can I best accomplish this?
FreeTDS?
Are there any libraries that will help with the syncing, or can I expect to write all the glue code myself?
I would advise you to create a Ruby script that can be scheduled to do the data retrieval.
In order to connect to the MSSQL database, please take a look at this simple project I've created.
Then you only need to write the code that retrieves the data you want and stores it locally.
I prefer the approach of keeping this decoupled from your Rails application, although you could use a scheduler like rufus-scheduler or Sidekiq and run it inside your application.
In my code I am trying to check whether my Entity Framework Code First model and SQL Azure database are in sync by calling "mycontext.Database.CompatibleWithModel(true)". However, when there is an incompatibility this line falls over with the following exception:
"The model backing the 'MyContext' context has changed since the database was created. Either manually delete/update the database, or call Database.SetInitializer with an IDatabaseInitializer instance. For example, the DropCreateDatabaseIfModelChanges strategy will automatically delete and recreate the database, and optionally seed it with new data."
This seems to defeat the purpose of the check as the very check itself is falling over as a result of the incompatibility.
For various reasons I don't want to use the Database.SetInitializer approach.
Any suggestions?
Is this a particular Sql Azure problem?
Thanks
Martin
Please check out the ScottGu blog below:
http://weblogs.asp.net/scottgu/archive/2010/08/03/using-ef-code-first-with-an-existing-database.aspx
Here is what is going on and what to do about it:
When a model is first created, we run a DatabaseInitializer to do things like create the database if it's not there or add seed data. The default DatabaseInitializer tries to compare the database schema needed to use the model with a hash of the schema stored in an EdmMetadata table that is created with a database (when Code First is the one creating the database). Existing databases won't have the EdmMetadata table and so won't have the hash… and the implementation today will throw if that table is missing. We'll work on changing this behavior before we ship the final version since it is the default. Until then, existing databases do not generally need any database initializer, so it can be turned off for your context type by calling:
Database.SetInitializer<Production>(null);
With the above code you are not recreating the database; you are using the existing one, so I don't think using Database.SetInitializer is a concern unless you have serious reservations about it.
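For example, a minimal sketch that disables the initializer for the context from your exception message (put it in the context's static constructor or in application startup):

    using System.Data.Entity;

    public class MyContext : DbContext
    {
        static MyContext()
        {
            // Turn off the initializer so EF never tries to create the database
            // or compare the model hash against the EdmMetadata table.
            Database.SetInitializer<MyContext>(null);
        }

        // DbSet<...> properties as usual
    }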
More info: Entity Framework Code Only error: the model backing the context has changed since the database was created
Working in a VB.Net 3.5 WinForms application, and using Access 2003 (JET 4.0) as a database backend through ADO.Net.
I'd like to check the database for changes, before the application decides to refresh the data from the server. Are there any best practices for this, or should I trust the ADO.Net environment to optimise/handle this?
I was thinking of using a limited log on the server, which gets updated by every change. Pulling this log could tell whether or not a certain table has changed data. Any good?
The easiest way is to use a file-based cache and simply invalidate the cache whenever anything is written to the database.
This won't give you any table-specific caching, so it isn't the most efficient cache imaginable.
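Since a JET back end is a single .mdb file, one cheap way to implement that is to compare the file's last-write time before deciding whether to refresh. A rough sketch in C# (the VB.Net equivalent is a straight translation; the table name and path are assumptions):

    using System;
    using System.Data;
    using System.Data.OleDb;
    using System.IO;

    // Sketch: treat the .mdb file's last-write time as a cheap "did anything
    // change?" signal and only hit the database when it moves.
    public class OrdersCache
    {
        private DateTime _lastSeenWrite;
        private DataTable _cachedOrders;          // hypothetical cached table

        public DataTable GetOrders(string mdbPath, string connectionString)
        {
            var lastWrite = File.GetLastWriteTimeUtc(mdbPath);
            if (_cachedOrders == null || lastWrite > _lastSeenWrite)
            {
                var table = new DataTable();
                using (var conn = new OleDbConnection(connectionString))
                using (var adapter = new OleDbDataAdapter("SELECT * FROM Orders", conn))
                {
                    adapter.Fill(table);          // the adapter opens/closes the connection
                }
                _cachedOrders = table;
                _lastSeenWrite = lastWrite;
            }
            return _cachedOrders;
        }
    }

Note that JET may buffer writes, so the timestamp is only an approximate signal, not a guarantee.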
I am writing code to migrate data from our live Access database to a new SQL Server database, which has a different schema with a reorganized structure. This SQL Server database will be used with a new version of our application that is in development.
I've been writing migration code in C# that talks to both SQL Server and Access and transforms the data as required. The first time I migrated a table whose rows reference newer rows of another table I hadn't re-migrated recently, it failed because the corresponding record could not be found in SQL Server.
So my production data in SQL Server only goes up to 1/14/09, and I'm continuing to migrate more tables from Access. I want to write an update method that can figure out what is new in Access and hasn't been reflected in SQL Server yet.
My current idea is to write a query on the SQL side which does SELECT Max(RunDate) FROM ProductionRuns, to give me the latest date in that field in the table. On the Access side, I would write a query that does SELECT * FROM ProductionRuns WHERE RunDate > ?, where the parameter is that max date found in SQL Server, and perform my translation step in code, and then insert the new data in Sql Server.
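In code I'm picturing something roughly like this (sketch only; the connection strings are placeholders and the transform step is just a comment):

    using System;
    using System.Data.OleDb;
    using System.Data.SqlClient;

    static class IncrementalMigration
    {
        // Sketch: find the newest RunDate already in SQL Server, then pull only
        // newer rows from Access. The transform/insert step is left as a stub.
        public static void MigrateNewRuns(string sqlConnStr, string accessConnStr)
        {
            DateTime maxDate;
            using (var sqlConn = new SqlConnection(sqlConnStr))
            using (var cmd = new SqlCommand("SELECT MAX(RunDate) FROM ProductionRuns", sqlConn))
            {
                sqlConn.Open();
                var result = cmd.ExecuteScalar();
                maxDate = result == DBNull.Value ? DateTime.MinValue : (DateTime)result;
            }

            using (var accessConn = new OleDbConnection(accessConnStr))
            using (var cmd = new OleDbCommand(
                "SELECT * FROM ProductionRuns WHERE RunDate > ?", accessConn))
            {
                cmd.Parameters.AddWithValue("@maxDate", maxDate); // OLE DB parameters are positional
                accessConn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // ...transform the row and insert it into SQL Server...
                    }
                }
            }
        }
    }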
What I'm wondering is, do I have the syntax right for getting the latest date in that Sql Server table? And is there a better way to do this kind of migration of a live database?
Edit: What I've done is make a copy of the current live database, which I can migrate without worrying about changes and use for testing during development; then I can migrate the latest data whenever the new database and application go live.
I personally would divide the process into two steps.
1. Create an exact copy of the Access DB in SQL Server and copy all the data into it.
2. Copy the data from this temporary SQL Server DB to your destination database.
That way you can write a set of SQL scripts to accomplish the second step (sketched below).
Alternatively, use SSIS.
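For the second step, the reshaping can then be plain T-SQL run from SSMS or from code, something along these lines (the database, table and column names here are hypothetical):

    using System.Data.SqlClient;

    static class Step2
    {
        // Sketch of step 2: once the Access data sits in a staging database on the
        // same SQL Server instance, the reshaping is a plain INSERT ... SELECT.
        public static void CopyCustomers(string connectionString)
        {
            const string sql = @"
                INSERT INTO NewDb.dbo.Customers (Name, Email)
                SELECT LTRIM(RTRIM(CustName)), Email
                FROM   StagingDb.dbo.Customers
                WHERE  CustName IS NOT NULL;";

            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }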
Generally, when you convert data to a new database that will take its place in production, you shut out all users of the database for a period of time, run the migration, and then turn on the new database. This ensures no changes are made to the data while you are doing the conversion. Of course, I never would have done this using C# either; data migration is a database task and should be done in SSIS (or DTS if you have an older version of SQL Server).
If the database you are converting to is just in development, I would create a backup of the Access database and load the data from there, both to test the data-loading process and to get the data in so you can do the application development. Then, when it is time to do the real load, you just close down the real database to users and load from it. If you are trying to keep both in sync while you develop (I wouldn't do that, but if you must), make a nightly backup of the file and load it first thing in the morning using your process.
You may want to look at investing in a tool like SQL Data Compare.
I believe it has support for Access databases too, and you can download a trial.
If you are happy with your C# code but it fails because of the constraints in your destination database, you can temporarily disable them and re-enable them after you have copied the whole lot.
I am assuming that your destination database is a brand-new DB with no data, and that it isn't used by anyone while the transfer happens.
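Roughly like this, assuming SQL Server is the destination (sp_MSforeachtable is an undocumented but widely used helper procedure):

    using System.Data.SqlClient;

    // Sketch: disable all FK/check constraints in the destination database before
    // the bulk copy, then re-enable (and re-validate) them afterwards.
    static class ConstraintToggle
    {
        public static void Disable(string connectionString) =>
            Run(connectionString, "EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'");

        public static void Enable(string connectionString) =>
            Run(connectionString, "EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL'");

        private static void Run(string connectionString, string sql)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

Re-enabling WITH CHECK makes SQL Server re-validate the data, so any rows that genuinely violate a constraint will surface at that point.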
It sounds like you have two problems:
You're migrating data from one database to another.
You're changing your schema.
Doing either of these things is tricky if you are trying to migrate the data while people are using the data.
The simplest approach is to migrate the data based on a static copy of the data, and also to queue updates to that data from the moment you capture the static copy. I don't know how easy this is in Access, but in SQL Server or Oracle you can use the redo logs for this, or build a manual solution using triggers. The poor man's way of doing this is to create triggers on all the relevant tables that log the primary keys of the records that have changed. Then, after the old database is shut off, you can iterate over those keys, fetch those records from the old database, and put them into the new database. Just copy the whole record; if the record was deleted, delete it from the new database too.
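The replay step of that poor man's approach is then just a loop over the logged keys, something like this sketch (the table and column names are made up, and the transform is glossed over):

    using System.Collections.Generic;
    using System.Data.SqlClient;

    static class ChangeLogReplay
    {
        // Sketch of replaying a trigger-based change log: for each logged key,
        // re-read the row from the old database and upsert or delete it in the new one.
        public static void ReplayChanges(string oldConnStr, string newConnStr)
        {
            using (var oldDb = new SqlConnection(oldConnStr))
            using (var newDb = new SqlConnection(newConnStr))
            {
                oldDb.Open();
                newDb.Open();

                // 1. Collect the keys the triggers logged.
                var keys = new List<int>();
                using (var cmd = new SqlCommand("SELECT DISTINCT CustomerId FROM ChangeLog", oldDb))
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        keys.Add(reader.GetInt32(0));
                }

                // 2. Re-copy (or delete) each changed record.
                foreach (var id in keys)
                {
                    using (var read = new SqlCommand(
                        "SELECT Name, Email FROM Customers WHERE CustomerId = @id", oldDb))
                    {
                        read.Parameters.AddWithValue("@id", id);
                        using (var row = read.ExecuteReader())
                        {
                            if (row.Read())
                            {
                                // Row still exists: transform it and upsert it into the new DB here.
                            }
                            else
                            {
                                // Row was deleted in the old DB: delete it from the new DB too.
                                using (var del = new SqlCommand(
                                    "DELETE FROM Customers WHERE CustomerId = @id", newDb))
                                {
                                    del.Parameters.AddWithValue("@id", id);
                                    del.ExecuteNonQuery();
                                }
                            }
                        }
                    }
                }
            }
        }
    }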
Your problem is compounded by the fact that you can't simply copy the data; you have to transform it. This means you probably have to shut down both databases and re-migrate the records based on the change list. It will take a lot of planning to make sure you get things right, and I'd recommend writing a test script that can validate that the resulting data is correct.
Also, I'd make sure the migration code runs inside one of the databases if possible. Otherwise you are copying the data twice, and that will significantly hurt performance.