Need SQL expression for REPLACE - sql

During a CSV file import from an external URL I need to execute REPLACE.
(I can't edit the CSV file manually/locally because it's located on the supplier's FTP and will be used in the future for automated, recurring add/delete/update tasks on the products in the file.)
I got this expression for replacing value of a column in the CSV file:
REPLACE([CSV_COL(6)],"TEXSON","EXTERNAL")
It works for column 6 in the CSV file because all row values in that column are the same (TEXSON).
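For reference, the expression mirrors what a plain SQL REPLACE does on a column value; a minimal sketch, where csv_staging and col6 are hypothetical names standing in for the imported data:
-- swaps the literal TEXSON for EXTERNAL wherever it appears in col6
SELECT REPLACE(col6, 'TEXSON', 'EXTERNAL') AS col6
FROM csv_staging;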
What I need help with:
In column 5 of the CSV file I have various values, and there is no connection between these values.
How can I run an expression that replaces all values in column 5 of the CSV with "EXTERNAL"?
See the image of how it looks in the CSV file:
Maybe there is some "wildcard" to just replace everything in that column, no matter what value it is...
Additional information: I'm working with the PrestaShop Store Manager to import products into the shop from our supplier...
Thanks!
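In plain SQL terms, overwriting a column regardless of its current value doesn't need REPLACE at all; you just select a constant. A minimal sketch (whether Store Manager accepts a bare constant as the column expression would need to be checked; csv_staging and col5 are placeholder names):
-- every row gets the same literal value, no matter what col5 held before
SELECT 'EXTERNAL' AS col5
FROM csv_staging;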

Related

Query contains parameters but import file contains different values [importing csv to Teradata SQL]

I am using Teradata SQL to import a CSV file. I clicked import to activate the import operation, then typed the following
insert into databasename.tablename values(?,?,?,...)
I made sure to specify the database name as well as what I want the table to be named, and I used 13 parameter markers, one for each of the 13 columns in my CSV file.
It gives me the following error:
Query contains 13 parameters but Import file contains 1 data values
I have no idea what the issue is.
The default delimiter used by your SQL Assistant doesn't match the one used in the CSV, so it doesn't recognise all the columns.
In SQL Assistant, go to Tools >> Options >> Export/Import and choose the proper delimiter so it matches the one in your CSV.
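As an illustration, once the Export/Import delimiter matches the file, each data line supplies one value per parameter marker; a sketch with the 13 markers written out (databasename.tablename is the placeholder from the question):
-- a comma-delimited line such as  a,b,c,d,e,f,g,h,i,j,k,l,m
-- now yields 13 values, one per ? marker
insert into databasename.tablename values (?,?,?,?,?,?,?,?,?,?,?,?,?);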

TYPE command. Inserting csv file

I have a CSV file I'm looking to load into a T-SQL table using the "type" command.
Code: type yourfilename
When looking in the command prompt, it's breaking the file lines into two different rows and inserting them separately into my destination table.
EX.
"Manheim Chicago","Manheim","IL","199004520601","On
Block","2D4FV47V86H126473","2006","DODGE","MAGNUM 4X2 V6"
I want the solution to look like this:
Solution pic: https://i.stack.imgur.com/Bkgf6.png
Where this would be one record in the table.
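Presumably the desired result is the same record on a single line, i.e.:
"Manheim Chicago","Manheim","IL","199004520601","On Block","2D4FV47V86H126473","2006","DODGE","MAGNUM 4X2 V6"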
Question: does anyone know how to format a type command so it displays a full record without line breaks?
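For what it's worth, if the goal is simply to get each CSV line into the table as one record, a T-SQL BULK INSERT is an alternative to piping the file through type; a minimal sketch, assuming a destination table dbo.Auctions and a local file path (both hypothetical):
-- loads the file server-side; one CSV line becomes one row
-- note: with this simple setup the surrounding quotes stay in the data;
-- newer SQL Server versions also support FORMAT = 'CSV' to handle quoting
BULK INSERT dbo.Auctions
FROM 'C:\data\auctions.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);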

"UNLOAD" data tables from AWS Redshift and make them readable as CSV

I am currently trying to move several data tables from my current AWS instance's Redshift database to a new database in a different AWS instance (for background, my company has acquired a new one and we need to consolidate to one instance of AWS).
I am using the UNLOAD command below on a table, and I plan on making that table a CSV, then uploading that file to the destination AWS instance's S3 and using the COPY command to finish moving the table.
unload ('select * from table1')
to 's3://destination_folder'
CREDENTIALS 'aws_access_key_id=XXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXX'
ADDQUOTES
DELIMITER AS ','
PARALLEL OFF;
My issue is that when I change the file type to .csv and open the file, I get inconsistencies with the data. There are areas where many rows are skipped, and on some rows, when the expected columns end, I get additional columns with the value "f" for unknown reasons. Any help on how I could achieve this transfer would be greatly appreciated.
EDIT 1: It looks like fields with quotes are having the quotes removed. Additionally, fields containing commas are being split at those commas. I've identified some fields with quotes and commas, and they are throwing everything off. Would the ADDQUOTES clause I have apply to the entire field regardless of whether there are quotes and commas within the field?
The default output file will have a .txt extension and quoted values. Try opening it with Excel and then saving it as a CSV file.
Refer to https://help.xero.com/Q_ConvertTXT
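For the COPY step mentioned in the question, the options generally need to mirror the UNLOAD options; a minimal sketch (the S3 prefix, table name, and credentials below are the placeholders from the question, not verified values):
copy table1
from 's3://destination_folder'   -- prefix or full key of the unloaded file
credentials 'aws_access_key_id=XXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXX'
removequotes                     -- undo the ADDQUOTES applied by the UNLOAD
delimiter ','
;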

SSIS pipe-delimited file not failing when a row has more pipes than the number of columns?

My source file is a pipe (|) delimited text file (.txt). I am trying to load the file into SQL Server 2012 using SSIS (SQL Server Data Tools 2012). I have three columns. Below is an example of how the data in the file looks.
I was hoping my package would fail since the file is pipe (|) delimited; instead, the package succeeds and the last row's extra pipes all end up inside the last column.
My question is: why isn't the package failing? I believe the data is corrupt because, going by the delimiter, it has more columns than expected.
If I want the package to fail when the number of delimiters is greater than the number of columns, what are my options?
You can tell what is happening if you look at the advanced page of the flat file connection manager. For all but the last field the delimiter is '|', for the last field it is CRLF.
So by design all data after the last defined pipe and the end of the line (CRLF) is imported into your last field.
What I would do is add another column to the connection manager and your staging table. Map the new 'TestColumn' in the destination. When the import is complete you want to ensure that this column is null in every row. If not then throw an error.
You could use a script task, but this way you will not need to code in C# and you will not have to process the file twice. If you are comfortable coding a script task and/or you cannot use a staging table with the extra column, then that is the only other route I can think of.
A suggestion for checking for NULLs would be to use an Execute SQL Task with a single-row result set mapped to an integer. If the value is > 0, then fail the package.
The query would be: SELECT COUNT(*) AS NotNullCount FROM Table WHERE TestColumn IS NOT NULL
You can write a script task that reads the file, counts the pipes, and raises an error if the number of pipes is not what you want.
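If the raw lines are first staged into a single wide column, the pipe count can also be checked in T-SQL rather than a script task; a sketch, where RawLines and RawLine are hypothetical names:
-- 3 columns should mean exactly 2 pipes per line
SELECT COUNT(*) AS BadRows
FROM RawLines
WHERE LEN(RawLine) - LEN(REPLACE(RawLine, '|', '')) <> 2;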

Format 6 Million Codes for a SQL Table

I have a .txt file with 6 million unique codes.
Codes like:
0007:=)GnuW
0045:)w1WKu
0007:=)GnuW
0045:)w1WKu
.....
I need a way to format them with a separator like || so I can upload them into a SQL table.
I tried to use Sublime Text's ability to select all 6 million lines and jump to the end of each line to add the separator, but that didn't work; Sublime crashes.
Once I have my formatted CSV, how should I import this huge number of records?
Should I split the file into 100 files?
I don't get why you need the file to be converted from .txt to .csv? A .txt file should already have line breaks.
If you are able to use bcp, this will be the fastest way to import the data.
http://technet.microsoft.com/en-us/library/aa173839(v=sql.80).aspx
Another way would be using bulk insert
http://technet.microsoft.com/en-us/library/aa225968(v=sql.80).aspx
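A minimal BULK INSERT sketch for a one-column load like this (the table name, file path, and row terminator are assumptions):
-- each line of the .txt file becomes one row in a single varchar column
BULK INSERT dbo.Codes
FROM 'C:\data\codes.txt'
WITH (ROWTERMINATOR = '\n', TABLOCK);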
But using the "Import Data" feature in SSMS or an SSIS Data Flow should also not take too long if inserting into an empty table.
I'm assuming your data has line breaks? So what about importing the data first and then splitting it as needed?
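Splitting after import could look something like this; a sketch assuming the codes land in a single RawCode column of a dbo.Codes table and should be split at the first ':' (all names are hypothetical):
-- everything before the first ':' vs. everything after it
-- assumes every code actually contains a ':'
SELECT LEFT(RawCode, CHARINDEX(':', RawCode) - 1)                    AS CodePrefix,
       SUBSTRING(RawCode, CHARINDEX(':', RawCode) + 1, LEN(RawCode)) AS CodeValue
FROM dbo.Codes;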