Reading a value from a text file and updating it in a field of a SQL table - sql

I have a text file with data like:
Patient name: Patient1 Medical rec #: A1 Admit date: 04/26/2009 Discharge date: 04/26/2009
DRG: 982
and so on. The text file contains several records in the format given above, with each field separated by a colon. I have to read this file, find the values, and update the corresponding fields in my SQL table (say, the DRG value 982 has to be updated in the DRG column of the SQL table).
Please help with doing this through a SQL query or an SSIS package.

If I got this task I'd use SSIS:
Create 2 data sources: a flat file (for the text file) and a SQL Server connection
Use a Lookup task to look up the value from the text file for each record in the db table
Use an Execute SQL Task to update the records with the looked-up value

You MIGHT try doing this by means of BULK INSERT:
Create a temp table to hold the new values
BULK INSERT the file into said table
[optionally do some data enrichment/cleaning here]
Merge the information from the temp table into the actual table
The only problems with this MIGHT be that
the server cannot access the file directly (e.g. when the file is on a network share)
the file is of a format that can't be handled by BULK INSERT
Given the example data above, you might need to load the data into one big column and then split it into different columns by means of creative SQL (PatIndex, SubString, the works...). You might try giving colon as the field separator, but you'll still end up with data that needs (quite a bit of) cleaning.
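As a rough sketch of the single-wide-column approach (the file path and table/column names are made up, and it assumes each record ends up on one line; records split across several physical lines, as in the sample, would need extra concatenation first):

```sql
-- Stage the raw lines of the file in a single wide column
CREATE TABLE #staging (line VARCHAR(4000));

BULK INSERT #staging
FROM 'C:\data\patients.txt'        -- hypothetical path
WITH (ROWTERMINATOR = '\n');

-- Carve the labelled values out of each line with PATINDEX/SUBSTRING
SELECT
    LTRIM(RTRIM(SUBSTRING(line,
        PATINDEX('%Medical rec #:%', line) + 14,
        PATINDEX('%Admit date:%', line)
          - (PATINDEX('%Medical rec #:%', line) + 14)))) AS med_rec_no,
    LTRIM(RTRIM(SUBSTRING(line,
        PATINDEX('%DRG:%', line) + 4, 10)))              AS drg
FROM #staging
WHERE PATINDEX('%DRG:%', line) > 0;
```

The SELECT would then feed an UPDATE or MERGE against the actual table, keyed on the medical record number.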

Related

SQL script to search for a value in a database column, taking the value from a file

I have a CSV file with two columns. The file has over 200,000 rows. The database has the same table with the same values.
How can I write a script to find the values that are present in the file but not in the database?
I am using SQL Developer for this.
Creating an external table is the best option when you want to read the contents of a flat file using a SELECT query.
See the Oracle documentation for details on how to create an external table.
After creating the external table, you can use a query similar to the one below to identify the records that are available exclusively in the external table (i.e. the flat file).
select *
from new_external_table et
where not exists (select 1 from source_table st where et.column_name=st.column_name);
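For reference, a minimal external-table definition for a two-column CSV might look like the sketch below (the directory object, file name, and column names are all assumptions):

```sql
-- A directory object must already exist and be readable by the database, e.g.:
-- CREATE DIRECTORY data_dir AS '/u01/app/data';
CREATE TABLE new_external_table (
    column_name  VARCHAR2(100),
    second_col   VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
    TYPE ORACLE_LOADER
    DEFAULT DIRECTORY data_dir
    ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
    )
    LOCATION ('values.csv')
);
```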

SSIS: why does a pipe-delimited file not fail when a row has more pipes than the number of columns?

My source file is a pipe (|) delimited text file (.txt). I am trying to load the file into SQL Server 2012 using SSIS (SQL Server Data Tools 2012). I have three columns. Below is an example of how the data in the file looks.
I was hoping my package would fail since the file is pipe (|) delimited; instead the package succeeds, and the last row loads the extra pipes into the last column.
My question is: why isn't the package failing? I believe the data is corrupt, because it has more columns than expected if we go by the delimiter.
If I want to fail the package when the number of delimiters is greater than the number of columns, what are my options?
You can tell what is happening if you look at the Advanced page of the flat file connection manager. For all but the last field the delimiter is '|'; for the last field it is CRLF.
So by design, all data between the last defined pipe and the end of the line (CRLF) is imported into your last field.
What I would do is add another column to the connection manager and to your staging table. Map the new 'TestColumn' in the destination. When the import is complete, you want to ensure that this column is null in every row; if not, throw an error.
You could use a script task, but this way you will not need to code in C# and you will not have to process the file twice. If you are comfortable coding a script task and/or you cannot use a staging table with an extra column, then that would be the only other route I can think of.
One way to check for nulls is to use an Execute SQL Task with a single-row result set mapped to an integer; if the value is greater than 0, fail the package.
The query would be: Select Count(*) NotNullCount From Table Where TestColumn Is Not Null.
You can write a script task that reads the file, counts the pipes, and raises an error if the number of pipes is not what you want.

Oracle SQL: How to Query A CSV With No Header/Column Names?

I have a third-party tool that uses CSV text drivers, which allow executing SQL queries on CSV data imported into the tool. Most Oracle SQL queries work with this, while many don't.
I have a requirement to read and import data into the tool from a CSV file that has no column names or header fields. How can I execute SQL queries against a table that has no column names or headers defined?
Sample Table:
AB 100 GPAA 9876
AC 101 GPAB 9877
AD 102 GPAC 9878
You would likely need to add the headers before running the queries. Is there a table in which the data will eventually end up? If so, you could export the column names from there first, then append the CSV data to the newly created file.
So apparently, you can specify whether your CSV file has a header when using the CSV SQL text driver:
jdbc:csv:////<path_to_file>/?_CSV_Header=false;
Then you can run a query like
select distinct (column1) as accountID, (column2) as groupID from csv_file_name
The parameters (column1), (column2), ... represent the actual columns in the file from left to right, and they have to be written like this for the query to work.

Import CSV file into SQL Server

I want to import CSV data into SQL Server. I searched and found answers about BULK INSERT ... FROM.
The problem I have is:
I want to select just one column of my results
The table already exists with bad data, and I just want to update those fields
The CSV I have contains towns and their parameters (the correct data):
Town,Id,ZipCode,...
T1,1,12000
T2,2,12100
T3,3,12200
And the table 'town' in SQL Server contains, for example:
T1,1,30456
T2,2,36327
T3,3,85621
I just want to get the ZipCode from the CSV and update the ZipCode in the table based on the Id.
Is there an easy way to do this?
I normally prefer bcp over BULK INSERT because it can easily import files over the network and there are fewer issues with access rights. Otherwise I would just load the data into tempdb and update the original table from there.
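A sketch of the load-into-staging-then-update approach (the file path, FIRSTROW value, and column types are assumptions based on the sample data above):

```sql
-- Staging table matching the CSV layout
CREATE TABLE #town_staging (
    Town    VARCHAR(100),
    Id      INT,
    ZipCode VARCHAR(10)
);

BULK INSERT #town_staging
FROM 'C:\data\towns.csv'           -- hypothetical path
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Update only the ZipCode column, matching rows on Id
UPDATE t
SET    t.ZipCode = s.ZipCode
FROM   dbo.town      AS t
JOIN   #town_staging AS s ON t.Id = s.Id;
```

The same staging table works with bcp instead of BULK INSERT; only the load step changes.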

Load a CSV using its headers in SQL Server

I am looking for a good generic stored procedure in SQL Server that loads CSV files, using the header row to create the necessary columns for the temp table.
For example, if I have to load a CSV file with 20 columns into SQL Server, I have to go in and create all 20 columns by hand before loading the data. That is a time waster.
So I was wondering whether there is a way to read the header row, use it to create the columns (whether the table needs 10 or 20), and then use bulk insert to load the data itself.
Suggestions?
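A rough sketch of one way to do this with dynamic SQL (the file path, target table name, and the uniform NVARCHAR typing are all assumptions; it also assumes a simple comma-separated header with no quoted names):

```sql
-- 1. Read just the header row into a one-column table
CREATE TABLE #header (line NVARCHAR(MAX));

BULK INSERT #header
FROM 'C:\data\input.csv'           -- hypothetical path
WITH (ROWTERMINATOR = '\n', LASTROW = 1);

-- 2. Build a CREATE TABLE statement from the header,
--    one NVARCHAR column per header name
DECLARE @sql NVARCHAR(MAX);

SELECT @sql = 'CREATE TABLE dbo.csv_load ('
            + REPLACE(line, ',', ' NVARCHAR(255), ')
            + ' NVARCHAR(255))'
FROM #header;
EXEC sp_executesql @sql;

-- 3. Load the data, skipping the header row
SET @sql = 'BULK INSERT dbo.csv_load FROM ''C:\data\input.csv'' '
         + 'WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'', FIRSTROW = 2)';
EXEC sp_executesql @sql;
```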