Replace part of Oracle database with data in Excel - SQL

I have a table with 103 columns and more than 100,000 records. I was told that the Oracle database has bad data in a few of the columns (four columns) and a few of the records (approximately 20,000), and I was asked to fix it. I was also given a query that identifies the bad data. With the help of that query, I exported all the bad data to Excel and corrected it using macros.
How do I replace the existing bad data in the Oracle database with the corrected data I now have in Excel? Is it possible to modify only part of the data with a SQL query? I mean that 4 out of 103 columns and 20k out of 100k records have to be modified without affecting the good data already in the Oracle database.
I am using SQL Developer and Oracle 11g.
My query to retrieve the bad data:
select e.id_number
, e.gender_code
, e.pref_mail_name
, e.prefix
, e.first_name
, e.last_name
, e.spouse_name
, e.spouse_id_number
, e.pref_jnt_mail_name1
, e.pref_jnt_mail_name2
from advance.entity e
where e.person_or_org = 'P'
-- parentheses added: AND binds tighter than OR, so without them the
-- person_or_org filter would apply only to the spouse_name test
and (ascii(e.spouse_name) <> 32 or ascii(e.spouse_id_number) <> 32)
Note: I am not changing any primary or secondary key data; the bad data is in other columns.

You can start by importing all the corrected data (which now resides in your Excel file) into a temporary table. See Load Excel data sheet to Oracle database for more details.
Then it should be a simple task to write one or more UPDATE statements. If you are new to Oracle syntax, Tech on the Net has some nice examples.
Also, do not forget to back up the original table to a temporary table before you make any changes there. That way, if you mess up the data repair, you can start over.
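For example, a minimal sketch, assuming the corrected Excel rows were loaded into a staging table named ENTITY_FIX keyed by ID_NUMBER (that name, and the choice of the four columns, are placeholders you would adapt):

-- 1. Keep a copy of the original table so the repair can be redone
create table advance.entity_backup as
select * from advance.entity;

-- 2. Overwrite only the affected columns, and only for rows present in ENTITY_FIX
update advance.entity e
set (spouse_name, spouse_id_number, pref_jnt_mail_name1, pref_jnt_mail_name2) =
    (select f.spouse_name, f.spouse_id_number, f.pref_jnt_mail_name1, f.pref_jnt_mail_name2
     from entity_fix f
     where f.id_number = e.id_number)
where exists (select 1 from entity_fix f where f.id_number = e.id_number);

commit;

The WHERE EXISTS clause is what confines the update to the roughly 20k bad rows; every other row, and every column outside the SET list, is left untouched.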

Related

How to deal with data structure changes when performing a full historical data load?

I'm dealing with a SQL Server database which contains a column "defined data" with JSON data in it (and some other simple columns). The data builds up over time; right now we have about 8 million rows.
The data from this db is periodically read by an ETL system which then reads the JSON data in the "defined data" column and maps the data to a new SQL Server table based on the columns names contained in the JSON data.
This SQL Server table is prone to changes, meaning that about every 4 months additional columns are needed or column names change. Whenever this SQL Server table changes its data structure, a new version is introduced, which also forces the JSON data structure to change.
However, the ETL system should still be able to load all historical (JSON) data from the SQL Server database, regardless of the changing version throughout time. How can I make this work, taking into consideration version changes of the SQL Server tables and the JSON data?
[example image: rows of JSON "defined data" for client 20 (version 1, loaded 01-01-2021, without an "AssetType" field) and client 21 (version 2, with "AssetType")]
So in this example my question is:
How can I ensure that I can load both client 20 and 21 into one SQL Server table without getting errors because the JSON data structure is not reflecting version 2 in the case of historical data?
Given the size of the SQL Server database, it doesn't seem like an option to update all historical JSON data according to the latest version (in this example that would mean adding "AssetType" for the 01-01-2021 data and filling it in with NULL).
Many, many thanks in advance!
First I would check whether the JSON fields already exist as column names in the table by looking them up in the information schema. If a field does not exist, use ALTER TABLE to add the column.
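As a hedged T-SQL sketch (the table dbo.Reports and the column AssetType are placeholders, not names from your schema):

if not exists (
    select 1
    from INFORMATION_SCHEMA.COLUMNS
    where TABLE_SCHEMA = 'dbo'
      and TABLE_NAME = 'Reports'
      and COLUMN_NAME = 'AssetType'
)
    -- Column is missing: add it as nullable so historical rows stay valid
    alter table dbo.Reports add AssetType nvarchar(100) null;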
How can I ensure that I can load both client 20 and 21 into one SQL Server table without getting errors because the JSON data structure is not reflecting version 2 in the case of historical data?
You maintain 2 separate tables. A Raw/Staging/Bronze table that has the same schema as the source, and a Cleansed/Warehouse/Silver table that has the desired schema for reporting. If you have multiple separate sources, you may have separate Raw tables.
Periodically you enhance the schema of the Cleansed table to add new data that has appeared in the Raw table.
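A minimal sketch of the Raw-to-Cleansed step, with made-up table names bronze.ClientData and silver.Clients: JSON_VALUE returns NULL when a property is absent, so version 1 rows (no AssetType) and version 2 rows load through the same statement without errors.

insert into silver.Clients (ClientId, LoadDate, AssetType)
select
    JSON_VALUE(r.[defined data], '$.ClientId'),
    r.LoadDate,
    JSON_VALUE(r.[defined data], '$.AssetType')  -- NULL for version 1 rows
from bronze.ClientData r;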

Migrating data from one table in one database to another table in another database

I am trying to migrate data from table A of database DB1 to table B of database DB2 using Java and Oracle.
I am using Java 1.8; my source database is Oracle 11g and my destination database is Oracle 12c.
I recreated the structure (schema, tables) of the destination database inside the source database and am migrating with an insert into dest select * from source query in Java. But the source table has millions of records, so this takes a long time, and later I will have to export the migrated data to the actual destination database, which will take time as well.
As far as I know, I can't use a prepared statement across 2 connections. My tables also have 400 to 500 columns, so binding that many columns with a prepared statement is not a good idea. In addition, the structures of my source and destination tables differ, so I put the field mapping in a properties file that maps each old field to its new field for the insert into ... select query. For example, my source table has a column col0001 whose corresponding destination column is ref_no. That also prevents me from using a prepared statement. With a plain Statement in Java I can migrate data within a single database only.
I also tried a database link, but I was not able to migrate CLOB data over it.
Kindly share a solution if anyone has done something like this before.
For a one-off copy, you can do a direct-path insert over a database link (note the link is referenced with @):
insert /*+ APPEND */ into local_table select * from remote_table@database_link;
Pulling rows this way with INSERT ... SELECT generally works even for CLOB columns, although selecting a CLOB over a database link directly in a query does not.

How to Overwrite Database Table Data Using SQL Server Data Tools

I'm currently working on a small project using SQL Server Data Tools. I'm getting data from an Excel sheet and writing it back to my database table, and from the database table I'm generating a report. From time to time I update my Excel sheet, but the changes don't show up in my database table. How do I clear the table and rewrite the Excel sheet data to the database table every time the project runs?
A post-build script is how and when I would clear or synchronize the data. You then have to write the TRUNCATE or DELETE section and the INSERT and/or UPDATE sections that move data from your Excel document to the SQL table. I suspect the SQL table is a better place to maintain the data, though! If you maintain it in SQL, you can always run a query to see what is in Excel at any given time, and you don't have to worry about syncing back to SQL.
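As an illustration only (dbo.ReportData and staging.ExcelImport are made-up names for your report table and the table the Excel sheet is loaded into), the script could look like:

-- Clear the reporting table, then reload it from the staged Excel data
truncate table dbo.ReportData;

insert into dbo.ReportData (Id, Name, Amount)
select Id, Name, Amount
from staging.ExcelImport;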

How can I copy and overwrite data of tables from database1 to database2 in SQL Server

I have a database1 which has more than 500 tables, and a database2 which has the same number of tables with the same names. Some of the tables have different definitions; for example, the table reports in database1 has 9 columns while the table reports in database2 has 10.
I want to copy all the data from database1 to database2 so that it overwrites the existing data, and to append the missing columns where the structures do not match. I have tried the Import/Export Wizard in SQL Server 2008, but it gives an error at the last step, when copying rows. I don't have a screenshot of the error right now (it is on my office PC), but it says something like 'error inserting into the read-only column xyz', and sometimes VS_ISBROKEN. For the read-only column error, I enabled identity insert, but it did not help.
Please help me. It is an opportunity for me at my office.
SSIS and SQL Server 2008 Wizards can be finicky tools.
If you get a "can't insert into column ABC" error, it could be one of the following:
Inserting into a PK column -> when setting up the mappings, you need to indicate that the value should be overwritten
Inserting into a column with a smaller range -> for example from nvarchar(256) into nvarchar(50)
Inserting into a calculated column (pointed out by @Nick.McDermaid)
You could also get issues with referential integrity if your database uses this (most do).
If you're going to do this more often, then I suggest you build an SSIS package instead of using the wizard tooling. This way you will see warnings on all sorts of issues like the ones I've described above. You can then run your package on demand.
Another suggestion I would make is that you insert the DB1 data into "stage" tables in DB2. These tables should have no relational integrity, and they will allow you to break the process into several steps, as follows.
Stage the data from DB1 into DB2
Produce reports/queries on issues pertinent to your database/rules
Merge the data from stage tables into target tables using SQL
That last step is where you can use MERGE statements, or simple inserts/updates depending on a key match, as sketched below. Using SQL in the local database lets you apply set-based operations to the overlap of the two data sets and figure out what is new and what needs updating.
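A minimal sketch of that merge step, with made-up stage.Reports / dbo.Reports tables and a ReportId key:

merge dbo.Reports as target
using stage.Reports as source
   on target.ReportId = source.ReportId
when matched then
    update set target.Title  = source.Title,
               target.Amount = source.Amount
when not matched by target then
    insert (ReportId, Title, Amount)
    values (source.ReportId, source.Title, source.Amount);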
SSIS "can" do this, but you will not be able to do a bulk update using SSIS, whereas with SQL you can. SSIS would do what is known as RBAR (row by agonizing row), something slow and to be avoided.
I suggest you inform your seniors that this will take a little longer in order to ensure it is reliable and the results are reportable. Then work step by step, reporting on each stage's completion.
Another two small suggestions:
Create _Archive tables for each of the stage tables and add a Tstamp column to each. Merge into these after the stage step; this will allow you to quickly see when each row was introduced into DB2.
After staging and before the SQL merge step, create indexes on your stage tables. This will improve the merge performance.
Drop those indexes after each merge; this will improve the bulk insert performance on the next staging run.
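For example (index and table names are placeholders):

-- Index the stage table's key before the merge...
create index IX_stage_Reports_ReportId on stage.Reports (ReportId);
-- ...run the MERGE from stage.Reports into dbo.Reports here...
-- ...then drop the index so the next bulk load into stage stays fast
drop index IX_stage_Reports_ReportId on stage.Reports;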
Basics on staging (in response to the question clarification):
Links:
http://www.codeproject.com/Articles/173918/How-to-Create-your-First-SQL-Server-Integration-Se
http://www.jasonstrate.com/tag/31daysssis/
http://blogs.msdn.com/b/andreasderuiter/archive/2012/12/05/designing-an-etl-process-with-ssis-two-approaches-to-extracting-and-transforming-data.aspx
Staging is the act of moving data from one place to another without any checks.
First you need to create the target tables; the schema should match the source tables.
Open up BIDS and create a new Project and in it a new SSIS package.
In the package, create a connection for the source server and another for the destination.
Then create a data flow step, in the step create a data source for each table you want to copy from.
Connect each source to a new data destination and set the appropriate connection and table.
When done, save and do a test run.
Before the data flow step, you might like to add a SQL step that will truncate all the target tables.
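The body of that Execute SQL Task could be as simple as the following (placeholder table names; the stage tables carry no foreign keys, so TRUNCATE is safe here):

truncate table stage.Reports;
truncate table stage.Customers;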
If you're open to using tools, then what about using something like Red Gate SQL Compare and Red Gate SQL Data Compare?
First I would use SQL Compare to manage the schema differences and add the new columns you want to your destination database (database2) from the source (database1). Then with Data Compare you match the contents of the tables; for any columns it can't match based on name, you specify how they should be handled. Then you can pick and choose what data you want to copy to your destination, so you'll see what data is new and what's different (you can delete data in the destination that's not in the source, or ignore it). You can either have the tool do the work or have it create a script for you to run when you want.
There's a 15 day trial if you want to experiment.
It seems like you may be looking for the replication technology offered by SQL Server Replication.
Well, if I understood your requirement correctly, you need to make database2 a replica of database1. Why not take a full backup of database1 and restore it as database2? Then database2 will be exactly what database1 was at the time of the backup.
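A hedged sketch of that approach (the paths and logical file names are assumptions; check yours with RESTORE FILELISTONLY):

backup database database1 to disk = N'C:\backups\database1.bak';

restore database database2
from disk = N'C:\backups\database1.bak'
with move 'database1' to N'C:\data\database2.mdf',
     move 'database1_log' to N'C:\data\database2_log.ldf',
     replace;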

Copy (Import) Data from One Oracle Database to Another

I want to copy data from one Oracle database to another.
I have looked at the Import/Export utilities, but the problem is that the import utility doesn't support conflict resolution between rows.
For example, suppose a table in the source database has a row with the same key as a row in the destination database. If I use the IGNORE parameter with value = y, the destination table will end up with duplicate rows.
Is there another way to import data from one Oracle database to another, with some mechanism for detecting conflicts and resolving them?
You might want to consider using a database link from database A to database B. You can query the data from database B and insert it into your database A tables, and you are free to write whatever query you want using SQL or PL/SQL. For conflict detection and resolution, a MERGE over the link works well, as sketched below.
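For example, a hedged sketch in which my_table, its id key, and dblink_b are all placeholders: matching rows are resolved by updating them, and new rows are inserted, so no duplicates arise.

merge into my_table t
using (select * from my_table@dblink_b) s
on (t.id = s.id)
when matched then
    update set t.col1 = s.col1,
               t.col2 = s.col2
when not matched then
    insert (t.id, t.col1, t.col2)
    values (s.id, s.col1, s.col2);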
More on database links:
http://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5005.htm