I am using T-SQL with SQL Server 2012.
Here is my question: I have a text file on server 10.10.10.1, and I want to create a temporary table on server 10.10.10.2 from that text file.
According to my research, BULK INSERT only works with files on a local directory (such as C:) of the machine running SQL Server.
Is there a way to do the following?
BULK INSERT dbo.#G2
FROM '10.10.10.1.C:\File\textfile.txt'
WITH
(
CODEPAGE = '1252',
FIELDTERMINATOR = ';',
CHECK_CONSTRAINTS
)
Thank you for your help.
You can read data from a file share (\\computer\share\folder\file), but the SQL Server process has to have access to it. As SQL Server generally runs under a local service account, it can only access shares that allow anonymous access (so anyone can read the content of the share).
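For example, assuming the SQL Server service account can read a share named File on 10.10.10.1 (the share name here is just an assumption), the statement from the question could point at the UNC path instead of a local drive:
BULK INSERT dbo.#G2
FROM '\\10.10.10.1\File\textfile.txt'
WITH
(
CODEPAGE = '1252',
FIELDTERMINATOR = ';',
CHECK_CONSTRAINTS
)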
It is better to upload the file to a folder on the server, but of course that means sharing a writable folder from the database server. While that is controllable (e.g. a dedicated partition with a controlled ACL), it is still not ideal.
Related
I have an Access back-end that is going to be converted to SQL Server. The front-end will stay the same, using Access. The issue I am having is that SQL Server handles images differently than MS Access.
Currently, a user adds a picture to the record via the Attachment data type, which, to my understanding, isn't possible in SQL Server. I saw that the image data type is deprecated, which leaves varbinary(MAX) and/or FILESTREAM as the options.
I want to go with storing the images in the filesystem, as their size is greater than 256 KB, but I'm not finding any documentation about accomplishing that with an Access front-end.
Consider running an MS Access pass-through query to upload the user's image. Specifically, pass the file name into a SQL query as shown in the MSDN docs for large-value data types. For this, the user will need OPENROWSET privileges, and the image file may need to be accessible on the client machine or the server.
INSERT myTable (myImageColumn, ...other columns...)
SELECT myPicData.*, ...other values...
FROM OPENROWSET
(BULK 'C:\Path\To\Image.jpg', SINGLE_BLOB) AS myPicData
I have a NiFi instance running on one machine and SQL Server on another machine.
I am trying to perform a bulk insert with a BULK INSERT query in SQL Server, but I am not able to take data from one machine and move it into SQL Server on the other machine.
If I run NiFi and SQL Server on the same machine, the bulk insert works without any problem; it fails only when the two instances are on different machines.
I have configured GetFile -> ReplaceText (BulkInsertQuery) -> PutSQL processors.
I need to get all the data from one machine and write a query that moves that data into the SQL Server running on the other machine.
The query below works when NiFi and SQL Server are on the same machine:
BULK INSERT BI FROM 'C:\Directory\input.csv' WITH (FIRSTROW = 1, ROWTERMINATOR = '\n', FIELDTERMINATOR = ',', ROWS_PER_BATCH = 10000)
If I run that query from the other machine, it fails with a file-not-found error, because input.csv is on the Host1 machine while the query runs on the SQL Server machine (Host2).
Can anyone give me a suggestion on how to do this?
The SQL query is being executed on the machine that hosts the SQL Server application. Because the query defines the incoming data with a file system path, the machine that attempts to resolve that path is the SQL machine. The data does not exist at that path, and thus, cannot be loaded. You have a couple options to handle this:
Use NiFi to move the data to a location on the SQL Server instance to be loaded during the SQL query execution. You can use GetFile/PutFTP, or ExecuteStreamCommand with RoboCopy (a Windows analog to rsync), which avoids the cost of bringing the content into NiFi at all.
Use NiFi to ingest the data from the local system into the content repository and then craft a SQL insert statement that reads the actual data rather than providing a file system path.
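For the second option, ReplaceText would turn the flow file content into an ordinary INSERT statement for PutSQL to run, so no file path is involved on the SQL Server side. A minimal sketch, with hypothetical column names for the BI table:
INSERT INTO BI (Col1, Col2, Col3)
VALUES ('row 1 field 1', 'row 1 field 2', 'row 1 field 3'),
('row 2 field 1', 'row 2 field 2', 'row 2 field 3');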
Since I cannot comment, and this may be a stupid question: when you run on two separate machines, could you not have a batch job perform a move to a common network location? Or FTP the needed data to a location on your SQL machine?
Since I do not know what NiFi is, I'm not sure, but making sure NiFi moves the data to a common location accessible by both your SQL and NiFi machines is the first thing I would do. Then just run your bulk insert while pointing to this location:
BULK INSERT BI FROM '<some network directory>' WITH (<your WITH clauses>)
I have multiple remote machines on a network, each with a CSV file stored in the same location, created by a PowerShell script.
What's the best way to insert that data into a Microsoft SQL Server Express database?
One way of doing this is to collect all the CSV files from all the servers onto your script server, and then run the SQL INSERT query from the script server to put the data into a SQL table.
As described in one of my previous answers, you can use the Invoke-SQLCmd2 module to do this.
The other way, if you don't want to collect all the CSV files first, is to run the SQL INSERT query from every server that has a CSV file. To do this you have to connect to the other server and then import the module from the script server, so you can use it:
Import-Module -Name '\\SCRIPTSERVER\C$\Windows\system32\WindowsPowerShell\v1.0\Modules\Invoke-SQL'
My problem statement is that I have a CSV blob and I need to import that blob into a SQL table. Is there a utility to do that?
I was thinking of one approach: first copy the blob to the on-premises SQL Server machine using the AzCopy utility, and then import that file into the SQL table using the bcp utility. Is this the right approach? I am also looking for a one-step solution to copy the blob into a SQL table.
Regarding your question about the availability of a utility which will import data from blob storage to a SQL Server, AFAIK there's none. You would need to write one.
Your approach seems OK to me, though you may want to write a batch file or something like that to automate the whole process. In this batch file, you would first download the file to your computer and then run the BCP utility to import the CSV into SQL Server. Other alternatives to writing a batch file are:
Do this thing completely in PowerShell.
Write some C# code which makes use of the storage client library to download the blob and, once the blob is downloaded, start the BCP process in your code.
To pull a blob file into an Azure SQL Server, you can use this example syntax (this actually works, I use it):
BULK INSERT MyTable
FROM 'container/folder/folder/file'
WITH (DATA_SOURCE = 'ds_blob', BATCHSIZE = 10000, FIRSTROW = 2);
MyTable has to have identical columns (or it can be a view against a table that yields identical columns)
In this example, ds_blob is an external data source which needs to be created beforehand (https://learn.microsoft.com/en-us/sql/t-sql/statements/create-external-data-source-transact-sql)
The external data source needs to use a database scoped credential, which uses a SAS key that you need to generate beforehand from blob storage (https://learn.microsoft.com/en-us/sql/t-sql/statements/create-database-scoped-credential-transact-sql).
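A minimal sketch of that setup, assuming a database master key already exists (the credential name, storage account, and SAS token below are placeholders):
-- SAS token goes in SECRET, without the leading question mark
CREATE DATABASE SCOPED CREDENTIAL cred_blob
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<your SAS token>';
-- External data source pointing at the storage account; BULK INSERT references it via DATA_SOURCE
CREATE EXTERNAL DATA SOURCE ds_blob
WITH (
TYPE = BLOB_STORAGE,
LOCATION = 'https://<yourstorageaccount>.blob.core.windows.net',
CREDENTIAL = cred_blob
);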
The only downside to this method is that you have to know the filename beforehand - there's no way to enumerate the files from inside SQL Server.
I get around this by running PowerShell inside Azure Automation that enumerates the blobs and writes them into a queue table beforehand.
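As a rough sketch of that queue-driven approach on the SQL side (the queue table and target table names are hypothetical):
-- Queue table that the Azure Automation runbook fills with blob paths
CREATE TABLE dbo.BlobImportQueue
(
BlobPath nvarchar(400) NOT NULL,
ImportedAt datetime2 NULL
);
-- BULK INSERT does not accept a variable for the file name, so build the statement with dynamic SQL
DECLARE @path nvarchar(400), @sql nvarchar(max);
DECLARE blob_cursor CURSOR LOCAL FAST_FORWARD FOR
SELECT BlobPath FROM dbo.BlobImportQueue WHERE ImportedAt IS NULL;
OPEN blob_cursor;
FETCH NEXT FROM blob_cursor INTO @path;
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = N'BULK INSERT MyTable FROM ''' + @path + N''' WITH (DATA_SOURCE = ''ds_blob'', FIRSTROW = 2);';
EXEC (@sql);
UPDATE dbo.BlobImportQueue SET ImportedAt = SYSUTCDATETIME() WHERE BlobPath = @path;
FETCH NEXT FROM blob_cursor INTO @path;
END;
CLOSE blob_cursor;
DEALLOCATE blob_cursor;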
Curious if this is possible: The app server and db server live in different places (obviously). The app server currently generates a file for use with sql server bulk insert.
This requires both the DB and the app server to be able to see the location, and it makes configuration more difficult in different environments.
What I'd like to know is: is it possible to bypass the file system in this case? Perhaps I can pass the data to SQL Server and have it generate the file?
I'm on SQL Server 2008, if that makes a difference.
thanks!
I don't think you can do that with SQL Server's bcp tool, but if your app is written using .NET, you can use the System.Data.SqlClient.SqlBulkCopy class to bulk insert rows from a DataTable (or any data source you can access with a SqlDataReader).
From the documentation on BULK INSERT:
BULK INSERT
[ database_name. [ schema_name ] . | schema_name. ] [ table_name | view_name ]
FROM 'data_file'
The FROM 'data_file' clause is not optional, and data_file is defined as follows:
'data_file'
Is the full path of the data file that contains data to import into the specified table or view. BULK INSERT can import data from a disk (including network, floppy disk, hard disk, and so on).
data_file must specify a valid path from the server on which SQL Server is running. If data_file is a remote file, specify the Universal Naming Convention (UNC) name. A UNC name has the form \\Systemname\ShareName\Path\FileName. For example, \\SystemX\DiskZ\Sales\update.txt.
Your application could do the insert directly using whatever method meets your performance needs.