I have the following query to insert into a table
BULK INSERT tblMain
FROM 'c:\Type.txt'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
I get the message
Msg 4860, Level 16, State 1, Line 1
Cannot bulk load. The file "c:\Type.txt" does not exist.
The file is clearly there. Anything I may be overlooking?
Look at the error message:
Cannot bulk load. The file "c:\Type.txt" does not exist
Is that file on the SQL Server's C:\ drive??
BULK INSERT always resolves file paths on the SQL Server machine itself; the server cannot reach onto your own local drive.
You need to put the file onto the SQL Server's C:\ drive and try again.
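To see the file system from the server's point of view, you can call the undocumented (but long-standing) xp_fileexist procedure:

EXEC master.dbo.xp_fileexist 'c:\Type.txt'
-- The first column ("File Exists") is 1 only if the file is visible
-- to the SQL Server service account on the server itself.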
Bulk import utility syntax is described here
http://msdn.microsoft.com/en-us/library/ms188365.aspx
> BULK INSERT [ database_name . [ schema_name ] . | schema_name . ]
> [ table_name | view_name ]
> FROM 'data_file'
> [ WITH
> (
The note on the data_file argument says:
' data_file '
Is the full path of the data file that contains data to import into
the specified table or view. BULK INSERT can import data from a disk
(including network, floppy disk, hard disk, and so on).
data_file must specify a valid path from the server on which SQL
Server is running. If data_file is a remote file, specify the
Universal Naming Convention (UNC) name. A UNC name has the form
\\Systemname\ShareName\Path\FileName. For example,
\\SystemX\DiskZ\Sales\update.txt.
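So if the file really is on your own machine rather than the server, share the folder and point BULK INSERT at the UNC name instead; a minimal sketch (the machine and share names here are made up, and the SQL Server service account needs read access to the share):

BULK INSERT tblMain
FROM '\\MyWorkstation\SharedData\Type.txt'  -- hypothetical UNC path
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n'
)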
I've had this problem before. In addition to checking the file path, make sure you're referencing the correct file name and file type. Verify that it really is a plain text file saved in the source location and not a Word document or similar; I got tripped up with .doc and .docx. It's a newbie mistake, but it can happen. Changing the file type fixed the problem for me.
Related
Hello everyone. I am quite a new user of Azure Data Studio and stumbled upon the following problem:
I was intending to read a CSV file from SQL and load its data. Here is my SQL code:
USE excel_checks
BULK INSERT short_info from '/Item.csv'
with (fieldterminator = ',', rowterminator = '\n');
I tried to copy the file into the Docker container by running the following command in my terminal:
docker cp /Users/office/Desktop/Item.csv <container_name>:/
It throws the following error: Cannot bulk load. The file "/Item.csv" does not exist or you don't have file access rights.
I would be happy if someone could help me get this issue sorted out. Googling did not help much, as I keep getting the same error.
Thank you.
Instead of copying the file to the root of the file system, copy it into a subdirectory to avoid the file access rights issue:
docker cp /Users/office/Desktop/Item.csv <container_name>:/tmp
Then run the BULK INSERT with that path:
USE excel_checks
BULK INSERT short_info from '/tmp/Item.csv'
with (fieldterminator = ',', rowterminator = '\n');
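If it still fails, confirm the file actually landed in the container and is readable (the container name is a placeholder):

docker exec <container_name> ls -l /tmp/Item.csv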
I'm having some issues understanding what the following type of query does:
insert overwrite local directory $directory_name$
select $some_query$
What does this mean, and what are the side effects of this?
Export the query results into a file on the local file system. Note the OVERWRITE keyword: any existing contents of the target directory are deleted and replaced by the query results, so point it at a directory you don't mind losing.
insert overwrite local directory '/tmp/hello'
row format delimited
fields terminated by '|'
select 1,2,3,'Hello','world'
;
! ls /tmp/hello;
000000_0
! cat /tmp/hello/000000_0;
1|2|3|Hello|world
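To see the overwrite side effect, run it again with different values: the directory's previous contents are replaced, not appended to (the output file name can vary with the number of reducers):

insert overwrite local directory '/tmp/hello'
row format delimited
fields terminated by '|'
select 4,5,6,'Bye','world'
;
! cat /tmp/hello/000000_0;
4|5|6|Bye|world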
How to export monetdb query result (e.g. to csv file)?
Manual says:
Copy into File
The COPY INTO command with a file name argument allows for fast
dumping of a result set into an ASCII file. The file must be
accessible by the server and a full path name may be required. The
file STDOUT can be used to direct the result to the primary output
channel.
The delimiters and NULL AS arguments provide control over the layout
required.
COPY subquery INTO file_name
    [ [USING] DELIMITERS field_separator [',' record_separator [',' string_quote]] ]
    [ NULL AS null_string ]
https://www.monetdb.org/Documentation/Manuals/SQLreference/CopyInto
I've tried various syntaxes, but with no result.
example query:
select * from test;
example failures:
copy select * from test into test.csv;
copy "select * from test" into test.csv;
OK, solved it: the file name was missing the apostrophes and the full path. Delimiters are also useful:
copy select * from test into '/home/user/test.csv' using delimiters ',';
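Per the syntax quoted above, you can also give a record separator and a string quote; a sketch using the same table and path:

copy select * from test into '/home/user/test.csv' using delimiters ',', '\n', '"';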
I am trying to add a text file into SQL database table using BULK INSERT.
BULK INSERT My_Tablename
FROM 'C:\testing\temptest.txt'
WITH
(
FIELDTERMINATOR = '|',
ROWTERMINATOR = '\n'
)
GO
But I got an error saying I 'do not have permission to use the bulk load statement'.
Is there any alternative way to do it?
I don't want to set TRUSTWORTHY ON or create certificate for BULK admin permission.
Try using the SQL Server Import and Export Wizard.
Right-click on the database in Object Explorer within SSMS.
Go to Tasks > Import Data
Select "Flat File Source" for your data source and follow the wizard to specify delimiters, etc.
Although @SQLChao definitely has the answer, I did not remember the location of the Import Data option, so I simply opened the delimited file in my favorite text editor, Notepad++, and did the following find-and-replaces, in this order, with Search Mode set to Extended:
1. Find: '    Replace: ''
2. Find: |    Replace: ','
3. Find: \r\n    Replace: ')\r\n
4. Find: \r\n    Replace: \r\nINSERT INTO [DB_Name].[Schema_Name].[Table_Name] VALUES(\r\n'
The only issues should be in your first and last insert statements, which can be manually edited as needed.
I then copied the text straight into SQL Server and executed it.
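To illustrate, a hypothetical middle row such as 1|O'Brien comes out of those replaces wrapped as:

INSERT INTO [DB_Name].[Schema_Name].[Table_Name] VALUES(
'1','O''Brien')

with the field separator turned into a value separator and the embedded single quote correctly doubled.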
I am trying to insert the data from this link into my SQL Server:
https://www.ian.com/affiliatecenter/include/V2/CityCoordinatesList.zip
I created the table
CREATE TABLE [dbo].[tblCityCoordinatesList](
[RegionID] [int] NOT NULL,
[RegionName] [nvarchar](255) NULL,
[Coordinates] [nvarchar](4000) NULL
) ON [PRIMARY]
And I am running the following script to do the bulk insert
BULK INSERT tblCityCoordinatesList
FROM 'C:\data\CityCoordinatesList.txt'
WITH
(
FIRSTROW = 2,
MAXERRORS = 0,
FIELDTERMINATOR = '|',
ROWTERMINATOR = '\n'
)
But the bulk insert fails with the following error
Cannot obtain the required interface ("IID_IColumnsInfo") from OLE DB provider "BULK" for linked server "(null)".
When I googled, I found several articles saying the issue may be with the ROWTERMINATOR, but I tried everything like \n\r, \n, etc., and nothing worked.
Could anyone please help me to insert this data into my database?
Try ROWTERMINATOR = '0x0a'; it should work. When you specify '\n' as the row terminator, SQL Server implicitly treats it as '\r\n', so a file with Unix-style LF-only line endings needs the terminator given in hex.
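Applied to the statement from the question:

BULK INSERT tblCityCoordinatesList
FROM 'C:\data\CityCoordinatesList.txt'
WITH
(
    FIRSTROW = 2,
    MAXERRORS = 0,
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '0x0a'
)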
This can also happen if the number of columns doesn't match between the table and the imported file.
I got the same error message, and as you mentioned, it was related to an unexpected line ending.
In my case the line ending was specified in a .fmt file as a Windows line ending (CRLF, written as \r\n), while the data file to be processed had classic Mac line endings (CR).
I solved it with an editor that can show the current line ending and change it. I used EditPad Lite, which shows the opened file's line ending in the bottom bar; clicking it lets you replace it with the expected one.
I had this on SQL Server 2019 when the FORMAT='CSV' option was used and there was a comma at the end of each line in the source file, so each record effectively had a blank trailing field. The table you BULK INSERT into needs an extra dummy column to cater for it.
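A sketch of that workaround (the dummy column name is made up; it exists only to absorb the blank trailing field):

ALTER TABLE tblCityCoordinatesList ADD TrailingBlank nvarchar(1) NULL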
I got the same error, probably from a file encoding problem. I fixed it by opening the problem CSV file in Notepad++, selecting everything, and copying it to the clipboard. Next, create a new text file (making sure it has the .csv extension), open it in Notepad++, and paste the text into the new file. Save and close all files. You should then be able to load the new CSV file into SQL Server.
You need to run the BULK INSERT command from a Windows login (not a SQL login). I don't have an example to hand.
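For illustration only, running it under Windows authentication from the command line might look like this (the server name is hypothetical; -E requests a trusted connection):

sqlcmd -S MyServer -E -Q "BULK INSERT tblCityCoordinatesList FROM 'C:\data\CityCoordinatesList.txt' WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '0x0a')"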