I used the following command to import data from a text file, but I need a way to select specific columns from that file. The following links were suggested to me, but I'm struggling to understand whether I need to replace my current SQL with the examples on MSDN:
BULK INSERT T2 FROM 'c:\Temp\Data.txt' WITH (FIELDTERMINATOR = ',')
http://msdn.microsoft.com/en-us/library/ms179250.aspx
http://msdn.microsoft.com/en-us/library/ms187908.aspx
The text file holds the following fields, separated by commas, which is what lets me use the above code to import it all:
Date,Time,Order,Item,Delivery Slot,Delivery Time
Is there a way to only import Date, Time, Item and Delivery Time into an SQL database table?
Use a format file with your BULK INSERT. The format file definition lets you specify which fields are imported.
EDIT: example from MSDN.
BULK INSERT bulktest..t_float
FROM 'C:\t_float-c.dat' WITH (FORMATFILE='C:\t_floatformat-c-xml.xml');
GO
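Concretely, skipping columns works by describing every field that appears in the data file, but mapping only the ones you want. Below is a sketch of an XML format file for the six-column layout in the question; the destination column names and the MAX_LENGTH values are assumptions you should adjust to your table:

```xml
<?xml version="1.0"?>
<BCPFORMAT xmlns="http://schemas.microsoft.com/sqlserver/2004/bulkload/format"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <RECORD>
    <!-- Describe every field in the data file, in file order -->
    <FIELD ID="1" xsi:type="CharTerm" TERMINATOR=","    MAX_LENGTH="50"/> <!-- Date -->
    <FIELD ID="2" xsi:type="CharTerm" TERMINATOR=","    MAX_LENGTH="50"/> <!-- Time -->
    <FIELD ID="3" xsi:type="CharTerm" TERMINATOR=","    MAX_LENGTH="50"/> <!-- Order -->
    <FIELD ID="4" xsi:type="CharTerm" TERMINATOR=","    MAX_LENGTH="50"/> <!-- Item -->
    <FIELD ID="5" xsi:type="CharTerm" TERMINATOR=","    MAX_LENGTH="50"/> <!-- Delivery Slot -->
    <FIELD ID="6" xsi:type="CharTerm" TERMINATOR="\r\n" MAX_LENGTH="50"/> <!-- Delivery Time -->
  </RECORD>
  <ROW>
    <!-- Map only the fields you want; unmapped fields are skipped -->
    <COLUMN SOURCE="1" NAME="OrderDate"    xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="2" NAME="OrderTime"    xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="4" NAME="Item"         xsi:type="SQLVARYCHAR"/>
    <COLUMN SOURCE="6" NAME="DeliveryTime" xsi:type="SQLVARYCHAR"/>
  </ROW>
</BCPFORMAT>
```

You would then run BULK INSERT T2 FROM 'c:\Temp\Data.txt' WITH (FORMATFILE = 'c:\Temp\Data.xml'). The Order and Delivery Slot fields are still read from the file, but because no COLUMN maps them, they are never imported.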
Related
I am using Teradata SQL Assistant to import a CSV file. I clicked Import to activate import mode, then typed the following:
insert into databasename.tablename values(?,?,?,...)
I made sure to specify the database name as well as the name I want for the table, and I used 13 ? placeholders, one for each of the 13 columns in my CSV file.
It gives me the following error:
Query contains 13 parameters but Import file contains 1 data values
I have no idea what the issue is.
The default delimiter used by your SQL Assistant doesn't match the one used in the CSV, so it doesn't recognise all the columns.
In SQL Assistant, go to Tools >> Options >> Export/Import and choose the delimiter that matches the one in your CSV.
I have the following table in XLSX format which I would like to import into my MySQL database:
The table is pretty complicated, and I only want the records after '1)HEADING'.
I have been looking at PHP libraries to import into SQL, but they only seem to handle simple Excel files.
There are two ways to do this:
First method :
1) Export it into some text format. The easiest will probably be a tab-delimited version, but CSV can work as well.
2) Use the load data capability. See http://dev.mysql.com/doc/refman/5.1/en/load-data.html
3) Look halfway down the page; it gives a good example for tab-separated data:
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
4) Check your data. Sometimes quoting or escaping has problems, and you need to adjust your source or import command, or it may just be easier to post-process via SQL.
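Putting steps 1–3 together, the full statement might look like the sketch below; the file path, table name, and the IGNORE count are assumptions you'd adapt to your exported data:

```sql
-- Assumes the sheet was exported as tab-delimited text.
LOAD DATA LOCAL INFILE '/tmp/sheet.txt'
INTO TABLE my_table
FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\'
LINES TERMINATED BY '\n'
IGNORE 5 LINES;  -- skip the rows before '1)HEADING', if you know how many there are
```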
Second method :
There's a simple online tool that can do this called sqlizer.io.
You upload an XLSX file to it, enter a sheet name and cell range, and it will generate a CREATE TABLE statement and a bunch of INSERT statements to import all your data into a MySQL database.
Looking at the AM data for a data analysis project, and I'm having trouble importing it into my DBMS (PostgreSQL).
My SQL code is this:
DROP TABLE IF EXISTS member_details;
CREATE TABLE member_details(
pnum varchar(255),
.....
updatedon timestamp);
COPY member_details
FROM '/Users/etc/data/sample_dump.csv'
WITH DELIMITER ','
CSV;
The problem is that the CSV file has no line breaks separating the records; instead, each record is wrapped in brackets, which my code above does not recognise. It therefore imports all the data as one line, so no records are created.
This is how the data is structured:
(dataA1, ....,dataAx),(dataB1,...,dataBx)
How can I alter my code so that PostgreSQL imports the data record by record, recognising the brackets?
Based on the PostgreSQL COPY documentation, I don't believe it allows row delimiters other than carriage returns and/or line feeds, so you'll need to process your file before importing. You can simply replace every ,( with \n(, then strip all the parentheses to produce a standard CSV that COPY will happily consume.
Perhaps there's another method for PostgreSQL that would work too, but I haven't come across anything yet.
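As a minimal sketch of that preprocessing in Python (the file names are placeholders): split the stream on the '),(' record separator and strip the outer parentheses, producing one CSV line per record. Note this naive split would break if a field itself contained the '),(' sequence:

```python
def brackets_to_csv_lines(raw: str) -> list[str]:
    """Turn '(a,b),(c,d)' into ['a,b', 'c,d']."""
    raw = raw.strip()
    # Drop the outermost parentheses, then split on the '),(' record separator.
    if raw.startswith("(") and raw.endswith(")"):
        raw = raw[1:-1]
    return raw.split("),(")

# Usage sketch: rewrite the dump into something COPY can read.
# with open("sample_dump.csv") as src, open("sample_fixed.csv", "w") as dst:
#     for line in brackets_to_csv_lines(src.read()):
#         dst.write(line + "\n")
```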
I have a .CSV file that contains more than 100,000 rows.
I have tried the following method to import the CSV into the table "Root":
BULK INSERT [dbo].[Root]
FROM 'C:\Original.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
But I get many errors along the lines of "check your terminators".
I opened the CSV with Notepad. There is no , or \n terminator; instead, I find a square box at the end of each row.
Please help me import this CSV into the table.
http://msdn.microsoft.com/en-us/library/ms188609.aspx
Comma-separated value (CSV) files are not supported by SQL Server bulk-import operations. However, in some cases, a CSV file can be used as the data file for a bulk import of data into SQL Server. Note that the field terminator of a CSV file does not have to be a comma. To be usable as a data file for bulk import, a CSV file must comply with the following restrictions:
Data fields never contain the field terminator.
Either none or all of the values in a data field are enclosed in quotation marks ("").
Note: There may be other unseen characters that need to be stripped from the source file. Vim (command ":set list") or Notepad++ (View > Show Symbol > Show All Characters) are two ways to check.
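If you'd rather not switch editors, a short Python sketch (the file name is a placeholder) can reveal what the "square box" actually is by printing the raw bytes at the end of each line, so you know what ROWTERMINATOR to use or which characters to strip:

```python
def show_line_endings(data: bytes, limit: int = 3) -> list[str]:
    """Return a repr of the last few bytes of each of the first
    `limit` lines, so hidden terminator characters (e.g. \\x00,
    \\r) become visible."""
    out = []
    for line in data.splitlines(keepends=True)[:limit]:
        out.append(repr(line[-4:]))
    return out

# Usage sketch:
# with open("Original.csv", "rb") as f:
#     print(show_line_endings(f.read()))
```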
If you are comfortable with Java, I have written a set of tools for CSV manipulation, including an importer and exporter. The project is up on Github.com:
https://github.com/carlspring/csv-db-tools
The importer is here:
https://github.com/carlspring/csv-db-tools/tree/master/csv-db-importer
For instructions on how to use the importer, check:
https://github.com/carlspring/csv-db-tools/blob/master/csv-db-importer/USAGE
You will need to make a simple mapping file. An example can be seen here:
https://github.com/carlspring/csv-db-tools/blob/master/csv-db-importer/src/test/resources/configuration-large.xml
I have a CSV file whose field delimiter is ,. My CSV files are very big, and I need to import them into a SQL Server table. The process must be automated; it is not a one-time job.
So I use BULK INSERT to load these CSV files. But today I received a CSV file that has this row:
1,12312312,HOME ,"House, Gregory",P,NULL,NULL,NULL,NULL
The problem is that BULK INSERT splits the field "House, Gregory"
into two fields: '"House' and ' Gregory"'.
Is there some way to make BULK INSERT understand that the double quotes override the behaviour of the comma?
When I open this CSV with Excel, it sees the field correctly as 'House, Gregory'.
You need to preprocess your file; see this answer:
SQL Server Bulk insert of CSV file with inconsistent quotes
If every row in the file has double quotes, you can specify ," and ", as the column separators for that column using format files.
If not, get the file changed, or you'll have to write some clever pre-processing routines somewhere.
The file format needs to be consistent for any of the SQL Server tools to work.
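One common pre-processing approach is to parse the quoted CSV properly and re-write it with a delimiter that never appears in the data, then BULK INSERT with that delimiter as the FIELDTERMINATOR. A sketch using Python's csv module (the file names and the pipe delimiter are assumptions; pick any character you know is absent from your data):

```python
import csv

def requote_to_pipe(rows):
    """Parse quoted CSV rows and return pipe-delimited lines,
    so quoted commas no longer confuse BULK INSERT."""
    return ["|".join(row) for row in csv.reader(rows)]

# Usage sketch:
# with open("in.csv", newline="") as src, open("out.csv", "w", newline="") as dst:
#     dst.write("\n".join(requote_to_pipe(src)) + "\n")
```

After the rewrite, load the new file with FIELDTERMINATOR = '|' in the BULK INSERT statement.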
Since you are referring to SQL Server, I assume you have Access available as well (a Microsoft-friendly environment). If you do, I recommend you use its Import Wizard. It is much smarter than the import wizard of SQL Server (even in the 2014 version), and smarter than the BULK INSERT SQL command as well.
It has a widget where you can define the text separator to be ", and it also has no problems with string length because it uses the Access data type Text.
If you are satisfied with the results in Access, you can later import them into SQL Server seamlessly.
The best way to move the data from Access to SQL Server is using SQL Server Migration Assistant, available here