How to import data into a table with 14 columns via BCP if my data file has fewer than 13 delimiters? - sql

Sorry for the extremely awkward wording in that question. I'll explain.
I have a table with 14 columns into which I'm trying to import data via BCP. My data comes from a text file, which is TAB delimited. Logically there should be 13 delimiters for the 14 cells of data in a row. My data is inconsistent and doesn't have the trailing delimiters when the values at the end are null. This means that some rows of data only have 10 delimiters, which causes my data to "wrap around" when it is imported: the first cell of data on the next line of my text file is being put in the 10th column of the row prior to it, when it should be the first cell in its own new row.
The thing is, every single row in the text file ends with CRLF, which is the default row terminator in BCP.
Is there a way to tell BCP to fill in all 14 columns before moving on to the next row? Or will I have to reformat my data file every time I import (not ideal)?
Here is my BCP command:
bcp testdb.dbo.MACARP in C:\Users\sysbrady\Desktop\MyData.txt /c /T /t "\t" /E -S WSTVDISTD023\SQLEXPRESS

"Is there a way to tell BCP to fill in all 14 columns before moving on to the next row?"
When you say "fill in", do you mean you want BCP to keep the null values present in your text file? The -k qualifier tells BCP to keep the nulls (make sure the columns in your table allow nulls). See the link below:
http://msdn.microsoft.com/en-us/library/ms187887.aspx
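For reference, that would just be your original command with -k added (untested; same file, options, and server as in the question):
bcp testdb.dbo.MACARP in C:\Users\sysbrady\Desktop\MyData.txt /c /T /t "\t" /E -k -S WSTVDISTD023\SQLEXPRESS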
"The thing is every single row in the text file ends in with "CRLF" which is used by default in BCP."
This is unclear - could you post an image? Unsure of whether you have phrased this as a problem, or a feature you want to retain.

Related

Copying output from Snowflake to Excel or notepad showing additional row

I was trying to copy the output from Snowflake to Excel. For example, there are only 4 rows in Snowflake, but it becomes 5 rows, with one additional row, when I copy it to Excel or Notepad.
I used TRIM and REPLACE statements to clean up spaces, but the extra row is still there.
So it "works for me" the SQL come from this Answer
Do you have better words, or a cut down example?
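If the extra row is caused by a stray carriage return or line feed inside a value (which trimming spaces won't remove), a minimal, untested sketch along these lines may help; the column and table names are only placeholders:
SELECT REPLACE(REPLACE(my_column, CHR(13), ''), CHR(10), '') AS cleaned_column
FROM my_table;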

How to combine two rows into one row with respective column values

I've a csv file which contains data with new lines for a single row, i.e. one row of data comes in two lines, and I want to insert the new-line data into the respective columns. I've loaded the data into SQL, but now I want to move the second row's data into the first row under the respective columns.
I wouldn't recommend fixing this in SQL, because this is an issue with the CSV file: the file contains embedded new lines, which causes rows to split.
I strongly encourage fixing the CSV files if possible. It's going to be difficult to fix this in SQL, given there are going to be more cases like that.
If you're doing the import with SSIS (or if you have the option of doing it with SSIS), the package can be configured to manage embedded carriage returns.
Define your file import connection manager with the columns you're expecting.
In the connection manager's Properties window, set the AlwaysCheckForRowDelimiters property to False. The default value is True.
By setting the property to False, SSIS will ignore mid-row carriage return/line feeds and will parse your data into the required number of columns.
Credit to Martin Smith for helping me out when I had a very similar problem some time ago.

can i export sqlite database to csv and if so how?

I have a database including 10 tables: (date, day, month, year, pcp1, pcp2, pcp3, pcp4, pcp5, pcp6), and each column has a 41-year dataset. The day, month and year columns are NULL, as I will add them later after exporting the tables to a CSV file. I did that part, but the format is not correct, as each column must be in its own separate field.
The query is not supposed to return the headers. Also, I'm confident about both points below. This is untested, but the cursor's description attribute returns the column names of the last query, so it should work.
About the extra blank line after every line: I suppose you're using Windows. Opening the output as a text file adds an extra \r (carriage return character). It's handled differently between Python 2 and Python 3: in Python 2 the output file is opened in binary mode ('wb'), while in Python 3 it is opened in text mode with newline=''.
It's actually OK, but you have the impression that it's not working because you're opening the csv with Excel, and Excel expects a ; delimiter by default for csvs => you have to specify a semicolon delimiter or Excel opens it all in one column.

Changing the length of Text fields in an Access linked table

I am exporting a file from a system as .csv. My aim is to link to this file as a table (which matches the output field for field) and then run my queries and exports against it.
The problem I am having is that, upon import, all the fields are 255 bytes wide rather than what they need to be.
Here's what I've tried so far:
I've looked at ALTER TABLE but I cannot run multiple ALTER TABLE statements in one macro.
I've also tried appending the table into another table with the correct structure but it seems to overwrite the structure.
I've also tried using the Left function with the appropriate field length, but when I try to export, I pretty much just see 5 bytes per column.
What I would like is a suggestion as to what is the best path to take given my situation. I am not able to amend the initial .csv export, and I would like to avoid VBA if possible, as I am not at all familiar with it.
You don't really need to worry about the size of Text fields in an Access linked table that is connected to a CSV file. Access simply assigns each Text field the largest possible maximum size: 255. It does not mean that every value is actually 255 characters long, it just means that any values in those fields can be at most 255 characters long.
Even if you could change the structure of the linked table (which you can't), it wouldn't make any real difference except to possibly truncate longer Text values, and you could easily do that with a String function. For example, if a particular field had to be restricted to 15 characters then you could simply use Left([fieldName], 15) as a query column or as the control source in a report.
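For example, a minimal sketch of that approach (the field and table names here are just placeholders):
SELECT Left([LongTextField], 15) AS TrimmedField
FROM LinkedCsvTable;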
In the end, as the data set is not that large, I have set this up to append from my source data into a table with the correct structure. I can now run my processes against this table as per normal.

Load Data Infile Syntax - Invalid field count in CSV on line 1

I was using phpMyAdmin for ease of use, and I am using the LOAD DATA INFILE syntax, which gives the following error: invalid field count in CSV on line 1. I know there is an invalid field count; that is on purpose.
Basically the table has 8 columns and the files have 7. I can go into the file and change it manually to 8 by entering data in the 8th column, but this is just too time consuming; in fact I would have to start again by the time I finished, so I have to rule that out.
The eighth column will be a number which is exactly the same for each row per file, so unique to each file.
For example, the first file has 1000 rows, each with data that goes in the first seven columns, and the 8th column is used to identify what this file's data is in reference to. So for those 1000 rows in the SQL table the first 7 columns are data, while the last column will just be 1000 1's, and the next file's 1000 rows will have an 8th column of 1000 2's, and so on. (Note I'm actually going to be entering 100001 rather than 1 or 000001, for obvious reasons.)
Anyway, I can't delete the column either and add it back after loading the file, for good reasons which I'll not explain, but I am aware that method is useless in this scenario.
What I would like is a method where, as I load a file that fills the first 7 columns, a specified int is placed in the 8th column for each row there is in the csv. Like auto increment, except rather than incrementing on each new row, it stays the same. Then for the second file all I need to do is change the specified int.
Note: the solution can't be to change the csv file, as this is too time consuming and actually counterintuitive.
I'm hoping someone knows if there is a way to do this, possibly with SQL that is a mixture of LOAD DATA and INSERT, so that it processes correctly without error.
The solution is to simply load the 8th column into a user variable, something like this:
SET @dummy_variable = 0; /* <- not sure if you need this line... */
LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, column2, ..., column7, @dummy_variable);
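If you also want the per-file number to end up in the 8th column during the load, a variation of the same statement should work; this is an untested sketch in which column8 is a placeholder for the real column name:
SET @file_id = 100001; /* change this value for each file */
LOAD DATA INFILE 'file.txt'
INTO TABLE t1
(column1, column2, column3, column4, column5, column6, column7)
SET column8 = @file_id;
For the next file, only the value assigned to @file_id needs to change.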