I need to do some data migration between two Oracle databases that are on different servers. I've thought of some ways to do it, like writing a JDBC program, but I think the best way is to do it in SQL itself. I could also copy the entire table over to the database I am migrating to, but these tables are big and that doesn't seem like an "elegant" solution.
Is it possible to open a connection to one DB in SQL Developer, then connect to the other one using SQL and write update/insert statements on tables as if they were both in the same connection?
I have read some examples on creating linked tables, but none seem to be Oracle-specific or tell me how to open the external connection by supplying the server hostname/port/SID/user credentials.
Thanks for the help!
If you create a database link, you can just select from the other database by querying TABLENAME@dblink.
You can create such a link using the CREATE DATABASE LINK statement.
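For example, a minimal sketch of creating and using such a link, where the host, port, SID and credentials are placeholders for your own values:

CREATE DATABASE LINK remote_db
  CONNECT TO remote_user IDENTIFIED BY remote_password
  USING '(DESCRIPTION=
           (ADDRESS=(PROTOCOL=TCP)(HOST=remote-host.example.com)(PORT=1521))
           (CONNECT_DATA=(SID=ORCL)))';

-- The remote table can then be read or written as if it were local:
SELECT * FROM some_table@remote_db;

INSERT INTO local_table (id, name)
SELECT id, name FROM some_table@remote_db;

If the remote database is already registered in your local tnsnames.ora, you can put the TNS alias in the USING clause instead of the full connect descriptor.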
It depends on whether it's a one-time thing or a recurring process, and whether you need to do ETL (Extract, Transform and Load) or not, but I'll help you out based on what you explained.
From what I can gather from your explanation, what you're attempting to accomplish is to copy a couple of tables from one DB to another. If they can reach one another, it's really simple: you could just create a DB link (http://www.dba-oracle.com/t_how_create_database_link.htm) and then do an INSERT ... SELECT from either side, using the DB link for one of the tables and the local table as the receiver or sender. It's pretty straightforward.
But if it's a one-time thing, I would just move the table with expdp and impdp, since that will be a lot faster and put a lot less strain on the DB.
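A rough sketch of what that looks like; the credentials, connect strings, schema and table names here are placeholders:

expdp system/password@source_db tables=SCHEMA1.BIG_TABLE directory=DATA_PUMP_DIR dumpfile=big_table.dmp logfile=big_table_exp.log

(copy big_table.dmp to the directory behind DATA_PUMP_DIR on the target server, then:)

impdp system/password@target_db tables=SCHEMA1.BIG_TABLE directory=DATA_PUMP_DIR dumpfile=big_table.dmp logfile=big_table_imp.log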
If it's something you need to maintain and keep updated, why not just add the DB link and use that on both sides? That will be dependent on network performance, though.
If this is a bit out of your depth, or you can't create DB links due to restrictions, SQL Developer has had a database copy option for a while, and you can go as far as copying individual tables, but it's very heavy on the system where it's run (http://deepak-sharma.net/2014/01/12/copy-database-objects-between-two-databases-in-oracle-using-sql-developer/).
I use "Lexware Warenwirtschaft Premium 2014" (a well-known merchandise management application in Germany). It uses Sybase as its database, and I connect to it using an ODBC connection (SQL Anywhere driver). The database has 800+ tables. For example, when Lexware creates a new article, it writes data into several different tables.
Is there a way to track into which tables Lexware wrote data?
As an ad-hoc measure you could switch on ODBC tracing and then review the contents of the trace file.
http://support.microsoft.com/kb/274551 tells you how to do this from a Windows client, and you can find similar information for Linux/Unix and other clients.
You'd then have to parse the trace file to see which tables were written to. The first step would probably be to isolate all the SQLPrepare and SQLExecDirect calls, and check them for INSERT, UPDATE and other relevant Sybase statements.
Note that this is not something you'd want as an ongoing solution, just a way to find out what an ODBC client does if you do not have access to e.g. logging information on the database itself. However, the trace slows down execution and would generate a very large trace file if you left it running for any significant period.
I don't think so. Whatever this program does behind the interface is hidden in its binaries and unreadable for humans, so you can't read the code to see which tables are altered.
You might be able to figure out which table was edited last, depending on the SQL server and its version.
Is it possible to build a generic query that verifies and, if needed, corrects a whole table schema?
Example:
On my dev machine I have a SQL Server instance with some tables. I, and others, make changes to the tables and sometimes forget to notify the others about them. :/
I want to build a query that reads the dev SQL tables and generates a script that I can run on another SQL Server instance to update the table there so that they are equal.
I can't drop the table and recreate it, unfortunately, and I don't want to change any of the data.
If this is too hard with SQL syntax, are there tools that can do this for me? The SQL tables are almost always on different machines, and most likely I can't connect directly to them from the same place, so the fixing/verifying needs to be done "offline".
Time is not of the essence; it can be a very slow query as long as it works.
Update: I want to verify the SQL schema, not the content of the table.
Update 2: We are using SQL Server 2008 R2.
It is possible, but not easy. This kind of tool is called a Data Dictionary, and you can write one yourself (see advice from the Database Programmer) or you can buy a commercial one, for example RedGate's SQL Compare.
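If you want to write one yourself, a minimal starting point is to dump each server's column definitions and diff the two result sets offline, along these lines:

SELECT c.TABLE_NAME,
       c.COLUMN_NAME,
       c.DATA_TYPE,
       c.CHARACTER_MAXIMUM_LENGTH,
       c.IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS AS c
ORDER BY c.TABLE_NAME, c.ORDINAL_POSITION;

Run it on both servers, save the output, and compare; whatever differs tells you which ALTER TABLE statements to write. A complete solution also has to cover constraints, indexes and defaults, which is where the commercial tools earn their money.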
I have 3 computers with the same SQL Server 2005 database. I would like to gather the data from the 3 computers onto another computer which has the same database. Please help me.
This is called "data conversion", and a lot of your work will be determining uniqueness on each one of them and coming up with strategies to prevent collisions, mainly on primary keys that are likely the same across these databases. No simple answer here; it can be a project in itself.
It might be difficult without some manual data transformation; it depends on your database and the type of the data. For example, what do you use as keys? If you have sequential integers as primary/foreign keys, then you will have to do some manual data transformation. If you use GUIDs, it gets slightly easier, but you still have to ensure that, for example, lookup tables don't have different GUID keys for the same items. There is no tool for doing this automatically.
Maybe if you have some very simple data without any relations to other tables (like a table with one column of text messages), you can script the data with SQL Server Database Publishing Wizard and then execute the scripts against your target database.
You need to back up your databases by right-clicking in Enterprise Manager, choosing Backup, and then choosing the location, etc.
After backing up, you can restore to your local SQL Server by right-clicking and choosing Restore.
After you have the data locally you will need to write queries to transfer the data to your local database.
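Those transfer queries could look something like the sketch below (the table and column names are made up for illustration); matching on a natural key rather than the identity primary key is what avoids the collisions the other answers mention:

INSERT INTO dbo.Customers (CustomerCode, Name, City)
SELECT s.CustomerCode, s.Name, s.City
FROM RestoredDb1.dbo.Customers AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.Customers AS t
                  WHERE t.CustomerCode = s.CustomerCode);

Repeat per restored database and let the target table's identity column generate fresh surrogate keys.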
Alternatively, you can use something like Red Gate's SQL Data Compare to compare and transfer data using a visual interface, although that costs money.
Redgate SQL Toolbelt may be able to help you. You would first copy the database to that other computer and then compare it with SQL Data Compare against the 3 databases, always copying data only one way (into your new database). However, I am not 100% sure it will work the way I think it would; you would have to verify that yourself.
As other people have suggested, things like primary keys may be problematic.
I'm trying to find out if this is possible, but so far I haven't found any good solutions. What I would like to achieve is to write a stored procedure that can clone a database, but without the stored data. That means all tables, views, constraints, keys and indexes should be included, but without any data. Can it be done?
Sure - your stored proc would have to read the system catalog views to find out what objects are in the database, determine their dependencies, and then generate a single script (or a collection of SQL scripts) that re-creates the database, and execute those.
It's possible, but not very nice or easy to do. The dependencies between objects especially might cause more headaches than first meets the eye...
You could also:
use something like SQL Server Management Studio (if you're on SQL Server - you didn't specify), create the scripts manually, and just re-execute them on a separate server
use a "diff" tool like Redgate SQL Compare to compare two servers and have the second one brought up to date
I've successfully used the Microsoft SQL Server Database Publishing Wizard for this purpose. It's pretty straightforward, no coding needed. Here's a sample call:
sqlpubwiz script -d DatabaseName -S ServerName -schemaonly C:\Projects2\Junk\DatabaseName.sql
I believe the default is to create both data and schema, but you can use the schemaonly parameter.
Download it here
In SQL Server you can roll through the system catalog views (sys.tables, sys.columns, etc.) and construct things one at a time. It's going to be very manual and error-prone at the beginning, but it should become systematic pretty quickly.
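As a starting point, a query like this gives you the raw material for generating CREATE TABLE statements; a real script also has to handle defaults, computed columns, constraints and indexes:

SELECT t.name  AS table_name,
       c.name  AS column_name,
       ty.name AS data_type,
       c.max_length,
       c.is_nullable
FROM sys.tables  AS t
JOIN sys.columns AS c  ON c.object_id = t.object_id
JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
ORDER BY t.name, c.column_id;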
Another way to do it is to write something in .Net using SMO. Check out this link:
http://www.sqlteam.com/article/scripting-database-objects-using-smo-updated
I am writing code to migrate data from our live Access database to a new SQL Server database which has a different, reorganized schema. This SQL Server database will be used with a new version of our application that is in development.
I've been writing migration code in C# that calls SQL Server and Access and transforms the data as required. The first time I migrated a table whose entries relate to new entries in another table I had not re-migrated recently, it caused an error because the corresponding record in SQL Server could not be found.
So, my SQL Server production table has data only up to 1/14/09, and I'm continuing to migrate more tables from Access. I want to write an update method that can figure out what's new in Access and hasn't yet been reflected in SQL Server.
My current idea is to write a query on the SQL Server side that does SELECT MAX(RunDate) FROM ProductionRuns, to give me the latest date in that field in the table. On the Access side, I would write a query that does SELECT * FROM ProductionRuns WHERE RunDate > ?, where the parameter is that max date found in SQL Server, then perform my translation step in code and insert the new data into SQL Server.
What I'm wondering is, do I have the syntax right for getting the latest date in that Sql Server table? And is there a better way to do this kind of migration of a live database?
Edit: What I've done is make a copy of the current live database, which I can migrate without worrying about changes and use for testing during development; then I can migrate the latest data whenever the new database and application go live.
I personally would divide the process into two steps.
Create an exact copy of the Access DB in SQL Server and copy all the data over.
Copy the data from this temporary SQL Server DB to your destination database.
That way you can write a set of SQL code to accomplish the second step.
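For instance, once the Access data sits in a staging database on the same server, the reshaping into the new schema can be plain set-based SQL; the database, table and column names below are made up for illustration:

INSERT INTO TargetDb.dbo.ProductionRuns (RunDate, MachineId, Quantity)
SELECT s.RunDate, m.MachineId, s.Qty
FROM StagingDb.dbo.Runs AS s
JOIN TargetDb.dbo.Machines AS m ON m.LegacyCode = s.MachineCode;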
Alternatively, use SSIS.
Generally, when you convert data to a new database that will take its place in production, you shut out all users of the database for a period of time, run the migration, and turn on the new database. This ensures no changes to the data are made during the conversion. Of course, I never would have done this using C# either; data migration is a database task and should be done in SSIS (or DTS if you have an older version of SQL Server).
If the database you are converting to is just in development, I would create a backup of the Access database and load the data from there, both to test the data loading process and to get the data in so you can do the application development. Then, when it is time to do the real load, you just close down the real database to users and load from it. If you are trying to keep both in sync while you develop - well, I wouldn't do that, but if you must, make a nightly backup of the file and load it first thing in the morning using your process.
You may want to look at investing in a tool like SQL Data Compare.
I believe it has support for Access databases too, and you can download a trial.
If you are happy with your C# code but it fails because of the constraints in your destination database, you can temporarily disable them and re-enable them after you copy the whole lot.
I am assuming that your destination database is a brand-new DB with no data, and not used by anyone while the transfer happens.
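In SQL Server that could look like the sketch below, run per table (the table name is a placeholder); re-enabling WITH CHECK makes SQL Server validate the copied rows against the constraints:

-- Before the copy: stop FK and CHECK constraints from firing.
ALTER TABLE dbo.Orders NOCHECK CONSTRAINT ALL;

-- ... run the C# migration ...

-- After the copy: re-enable and re-validate existing rows.
ALTER TABLE dbo.Orders WITH CHECK CHECK CONSTRAINT ALL;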
It sounds like you have two problems:
You're migrating data from one database to another.
You're changing your schema.
Doing either of these things is tricky if you are trying to migrate the data while people are using the data.
The simplest approach is to migrate the data based on a static copy of it, and to queue updates to that data from the moment you capture the static copy. I don't know how easy this is in Access, but in SQL Server or Oracle you can use the redo logs for this, or a manual solution using triggers. The poor man's way of doing this is to create triggers on all the relevant tables that log the primary key of each record that changes. Then, after the old database is shut off, you can iterate over those keys, get those records from the old database, and put them into the new database. Just copy the whole record; if the record was deleted, delete it from the new database.
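A minimal sketch of that trigger approach, in SQL Server syntax for illustration (the table, key and log names are made up; Access itself doesn't support triggers like this, which is part of what makes that side harder):

CREATE TABLE dbo.ChangedKeys (
    TableName sysname  NOT NULL,
    KeyValue  int      NOT NULL,
    ChangedAt datetime NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Orders_TrackChanges
ON dbo.Orders
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    -- inserted holds new/updated rows, deleted holds updated/deleted rows;
    -- logging keys from both captures every record that was touched.
    INSERT INTO dbo.ChangedKeys (TableName, KeyValue)
    SELECT 'Orders', OrderId FROM inserted
    UNION
    SELECT 'Orders', OrderId FROM deleted;
END;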
Your problem is compounded by the fact that you can't simply copy the data; you have to transform it. This means you probably have to shut down both databases and re-migrate the records based on the change list. It will take a lot of planning to ensure you get things right, and I'd recommend writing a testing script that can validate that the resulting data is correct.
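Even a blunt first pass helps for that testing script, e.g. comparing per-table row counts between a staging copy of the old data and the new database (all names here are illustrative); deeper checks would compare sums or checksums of key columns:

SELECT (SELECT COUNT(*) FROM StagingDb.dbo.ProductionRuns) AS old_rows,
       (SELECT COUNT(*) FROM NewDb.dbo.ProductionRuns)     AS new_rows;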
Also, I'd ensure that the code for the migration runs inside one of the databases if possible. Otherwise you are copying the data twice, which will significantly hurt performance.