BULK INSERT from MATLAB to SQL Server

I'm running a MATLAB script that updates a table over a SQL Server connection. Everything is fine if I read or update the table from SQL Server through simple MATLAB commands; the data appears perfectly. But I'm running into trouble when I use the BULK INSERT command: no data is inserted at all, even though the same statement works in the query console of SQL Server Management Studio.
My code (MATLAB sample):
conn = database('Dados_SQL','sa','SQL#Edison');
A = {100000.00,'KGreen','06/22/2011','Challengers'};
A = A(ones(10000,1),:);
fid = fopen('c:\temp\tmp.txt','wt');
for i = 1:size(A,1)
    fprintf(fid,'%10.2f \t %s \t %s \t %s \n',A{i,1}, ...
        A{i,2},A{i,3},A{i,4});
end
fclose(fid);
e = exec(conn,['bulk insert BULKTEST from ''c:\temp\tmp.txt'' '...
    'with (fieldterminator = ''\t'', rowterminator = ''\n'')']);
close(e);
Thanks in advance!
Edison.

BULK INSERT resolves the file path on the machine where SQL Server is running, not on the client, so the server has problems reading from a drive that is not local to it. If your C: drive is not on the database server, that is likely the problem. You have two options:
Generate a file from MATLAB that you save onto the database server, or onto a share the server can reach. Perhaps you have a shared filesystem, so this is easy. Or use FTP, or even Dropbox on each server. This depends on your setup and security requirements.
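For example, if c:\temp on the MATLAB machine were exposed as a share that the SQL Server service account can read (the host and share names here are hypothetical), the same statement could point at the UNC path:
BULK INSERT BULKTEST
FROM '\\matlabhost\temp\tmp.txt'
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');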
Generate one large text batch of all the SQL INSERT statements you want to run, and send it to the server in a single query. This avoids the need for multiple round-trips to the server, which are slow, but it is not as efficient on the server side as a bulk insert.
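For the second option, the generated batch is just plain T-SQL; a sketch of what it might look like with the question's sample row (built up in one MATLAB string and sent with a single exec call):
INSERT INTO BULKTEST VALUES (100000.00, 'KGreen', '06/22/2011', 'Challengers');
INSERT INTO BULKTEST VALUES (100000.00, 'KGreen', '06/22/2011', 'Challengers');
-- ...one INSERT per row, repeated for all 10000 rows...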

Related

Is there a way to use SQL statements to query a CSV file on NetBeans without using a database connection?

I am trying to create an executable JAR file that has my database self-contained as a CSV file (the point is that if I share the program with another computer, that computer does not need to download MySQL in order to access the information in the database).
I have imported H2 libraries, so that I can convert the CSV into a ResultSet. Is there a way I can integrate SQL statements to manipulate the ResultSet without having to set up a connection to a database?
This is what I have so far, with a connection to a database:
// This is where I want to retrieve the data (from the CSV),
// but the only way I know how to run SQL requires that the data is
// retrieved from the MySQL database connection.
ResultSet rs = new Csv().read("file.csv", null, null);
// This is the traditional way of executing an SQL query statement
Connection con = DriverManager.getConnection("jdbc:mysql://localhost:3306/db","root","password");
Statement stmt = con.createStatement(); // Another way to create the statement?
rs = stmt.executeQuery(sql);
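For what it's worth, H2 can also run SQL directly against the CSV via its CSVREAD table function over a throwaway in-memory connection, so no MySQL server is needed. A minimal sketch, assuming the H2 driver is on the classpath (jdbc:h2:mem: gives a private in-memory database):
-- Executed over: Connection con = DriverManager.getConnection("jdbc:h2:mem:");
-- CSVREAD exposes the CSV file as a table, so ordinary SQL works against it
SELECT * FROM CSVREAD('file.csv');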

Moving data from SQL Server 2008 to remote SQL Server 2000, using sqlcmd

The setup:
I have two different machines: one with MS SQL Server 2008, which is my main machine, and a secondary one with MS SQL Server 2000. Newly generated data is stored in a specific table on the main server (2008).
The problem:
My main machine has limited storage, whereas my secondary one with the older SQL version (2000) doesn't have such limitations.
A possible solution:
At least as a temporary solution, I could try to move some data, on a daily schedule, from the main machine to the secondary one, using sqlcmd run by Windows Task Scheduler.
The largest portion of my data is stored in a single table, so the idea is to "cut" the rows from the table on the main server and append them to a "backup/depot/storage" table on my secondary server.
What i have tried so far:
So far, I haven't been able to connect to both servers simultaneously from the main one, at least using sqlcmd. Is there a limit on the number of connections sqlcmd can create simultaneously?
Other possible ways:
Is there a suggested practice for this case? Would it be a good idea to write a VBS script to export from the main server and import to the secondary one?
All corrections and suggestions are welcome. And thanks for your time.
First, link the servers. From the server you want to INSERT into, run this SQL (using any tool you want to run SQL with... SQL Server Management Studio or SQLCMD or OSQL or whatever):
EXEC sp_addlinkedServer N'remoteSqlServer\Instance', N'SQL Server'
EXEC sp_addlinkedsrvlogin @rmtsrvname = N'remoteSqlServer\Instance', @useself = 'FALSE', @rmtuser = N'user', @rmtpassword = N'password'
Replace "remoteSqlServer\Instance" with the actual host-name\instance of the SQL Server you want to copy data FROM. Also replace the "user" and "password" with appropriate login credentials to that server.
After that is done, you can execute SQL like the following against this server (from anywhere, including SQLCMD):
INSERT INTO [LocalTable]
SELECT * FROM [remoteSqlServer\Instance].[DatabaseName].[schema].[TableName]
Again, this is just an example... you'd replace all those values with values appropriate to your source and destination databases (everything in the square brackets). For instance, it might look something like this:
INSERT INTO [HistoricalData]
SELECT * FROM [DBServer\SQL2008].[ProductionDatabase].[dbo].[CurrentData]
I hope this helps.

Maximum size of .SQL file or variable for SQL server?

I have a SQL file/SQL string which is about 20 MB. SQL Server simply cannot accept this file. Is there a limit on the maximum size of the .sql file or variable which is used to query or insert data into SQL Server?
When I say variable, it means passing a variable to SQL server through some programming language or even ETL tool.
You can use SQLCMD, but I just ran into a 2GB file-size limit using that command-line tool, even though I had a GO after every statement. I get an Incorrect syntax error once the 2GB boundary is crossed.
After some searching, I found this link:
https://connect.microsoft.com/SQLServer/feedback/details/272757/sqlcmd-exe-should-not-have-a-limit-to-the-size-of-file-it-will-process
The linked page above says that every character after 2GB is ignored. That could explain my Incorrect syntax error.
Yep, I've seen this before. There is no size limit on .sql files. It's more about what kind of logic is executed from within that .sql file. If you have a ton of quick inserts into a small table, e.g. INSERT INTO myTable (column1) VALUES(1), then you can run thousands of these within one .sql file, whereas if you're applying heavy logic in addition to your inserts/deletes, then you'll have these problems. The size of the file isn't as important as what's in the file.
When we came across these in the past, we ran the .sql files from SQLCMD. Very easy to do. You could also create a StreamReader in C# or VB to read the .sql file and build a query to execute.
SQLCMD: SQLCMD -S [Servername] -E -i [SQL Script]
Was that clear enough? If you post an example of what you're trying to do then I could write some sample code for you.
When I first experienced this problem, the only solution I found was to split the .sql file into smaller ones. That didn't work for our solution, but SQLCMD did. We later implemented a utility that read these large files and executed them with some quick C# programming and a StreamReader.
The size of the SQL file should be limited only by the memory available on your PC/workstation. However, if you don't want to use osql and/or third-party tools, there is a solution for this in SSMS itself. It's called SQLCMD Mode, and it enables you to run a SQL file by referencing it, rather than actually opening it in the editor.
Basically, all you have to do is:
In your Query menu select SQLCMD Mode
Look up the path to your called script (large SQL file)
Open up a New Query (or use existing one) and write this code in a new line
:r D:\PathToMyLargeFile\MyLargeFile.sql
Run that (calling) script
If you need to use a variable in your called script, you have to declare it in a calling script. Then your calling script should look like this:
:setvar myVariable "My variable content"
:r D:\PathToMyLargeFile\MyLargeFile.sql
Let's say your called script uses the variable for content that should be inserted into rows. Then it should look something like this...
INSERT INTO MyTable (MyColumn)
SELECT '$(myVariable)'
Pranav was kind of on the right track in referencing the Maximum Capacity Specifications for SQL Server article; however, the applicable limit for executing queries is:
Length of a string containing SQL statements (batch size): 65,536 * network packet size
Network packet size is the size of the tabular data stream (TDS) packets used to communicate between applications and the relational Database Engine. The default packet size is 4 KB, and it is controlled by the network packet size configuration option.
Additionally, I have seen problems with large numbers of SQL statements executing in SQL Server Management Studio. (See SQL Server does not finish execution of a large batch of SQL statements for a related problem.) Try adding SET NOCOUNT ON to your SQL to prevent sending unnecessary data. Also, when doing large numbers of INSERT statements, try breaking them into batches using the GO batch separator.
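For illustration, a minimal sketch combining both suggestions (myTable and column1 are carried over from the earlier example):
SET NOCOUNT ON;  -- suppress the 'rows affected' message sent for every statement
GO
INSERT INTO myTable (column1) VALUES (1);
INSERT INTO myTable (column1) VALUES (2);
-- ...more INSERTs, with a GO every few thousand statements to end the batch...
GO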
I think your concern comes from trying to open your file in SSMS. Within SSMS, opening a 20 MB file would likely be problematic, no different from trying to open the same file in Notepad or most text editors.
For the record, for other posters: I don't think the question has anything to do with SQL column, table, object, or database sizes! It's simply a problem with using the IDE.
If the file is pure data to be imported, with NO sql commands, try bulk import.
If the file is SQL commands, you're going to need an editor that can handle large files, like Vedit. http://www.vedit.com/ It won't be able to execute the sql. You must do that from the command line using sqlcmd as noted above.
Here are a few links which I hope will be helpful for you.
I came across this article on MSDN which specifies "Maximum Capacity Specifications for SQL Server"; going through it, I was able to find:
For SQL Server 2012, 2008 R2, and 2005:
Maximum File size (data): 16 terabytes
Maximum Bytes per varchar(max), varbinary(max), xml, text, or image column: 2^31-1 Bytes (~2048 GB)
For more details on Maximum Capacity Specifications for SQL Server, refer to:
For SQL Server 2012:
http://msdn.microsoft.com/en-us/library/ms143432(v=sql.110).aspx
For SQL Server 2008 R2:
http://msdn.microsoft.com/en-us/library/ms143432(v=sql.105).aspx
For SQL Server 2005:
http://msdn.microsoft.com/en-us/library/ms143432(v=sql.90).aspx
For SQL Server 2000, I am not sure, since MSDN seems to have removed the related documentation.
It is not clear from your question what the SQL file contains. The solution I suggest below is only applicable if the SQL file you refer to has only insert statements.
The fastest way to insert large amounts of data into SQL Server is to use the bulk copy functionality (the BCP utility).
If you have SQL Server Management Studio, then you should also have the bcp utility; look in C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn (or equivalent).
If you want to use the BCP utility, you need to create a file that contains the data; it can be comma-delimited. Refer to the bcp documentation for what the file should look like.
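A hedged sketch of such a load (server, database, table, and file names are hypothetical; -T uses Windows authentication, -c selects character mode, and -t, sets a comma field terminator):
bcp MyDatabase.dbo.MyTable in C:\temp\data.csv -S MyServer -T -c -t,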
For the Maximum Capacity Specifications for SQL Server, you can check here:
http://msdn.microsoft.com/en-us/library/ms143432(v=sql.120).aspx
If you are asking "Is there a limit on the maximum size of the .sql file or variable which is used to query or insert data into SQL Server?", I will say yes, there is a limit for each variable. If you want to upload a file of a large size, I recommend you convert the file to varbinary, or increase the maximum upload size in your web system.
Here I give an example: http://msdn.microsoft.com/en-us/library/aa479405.aspx

Copy data between two server instances

I want something like :
insert into server2.database1.table1 select * from server1.database1.table1
both tables are exactly the same.
How can I copy data between two server instances?
SQL - Linked Server
If both servers are SQL Server, you can set up linked servers; I would suggest using a SQL account for security there.
Then you can simply perform
insert into server2.database1.dbo.table1
select * from server1.database1.dbo.table1 where col1 = 'X'
If you run the query in SQL Server Management Studio connected to server1, with the current database set to database1, you won't need the prefix
server1.database1.dbo.
Also, the linked server would be configured on server1, to connect to server2 (rather than the other way around).
If you have the correct OLE DB drivers, this method can also work between different types of RDBMS (ie. non-SQL Server ones).
Open Query
Note: Beware of relying on linked servers too much, especially for filtering and for joins across servers, as they require data to be read in full into the originating RDBMS before any conditions can be applied. Many complications can arise from linked servers, so read up before you embark, as even version differences might cause headaches.
I recommend you use the OPENQUERY command for SQL Servers to get around such limitations. Here's an example, but you should find help specific to your needs through further research:
insert into server2.database1.dbo.table1
select * from OPENQUERY(server1, 'select * from database1.dbo.table1 where col1 = ''X''');
The above code is more efficient, filtering the data on the source server (and using available indexes), before pumping the data through, saving bandwidth/time/resources of both the source and destination servers.
(Also note that the doubled quote '' is an escape sequence that produces a single quote.)
SQL - Temporarily on the same server
Still within the SQL query domain: if you can temporarily move the database on server2 to server1, you won't need the linked server. A rename of the database would appear to be required while co-locating it on server1, which would enable (note the underscore):
insert into server2_database1.dbo.table1
select * from database1.dbo.table1
Achieving such co-location could use either of the following methods; I suggest shrinking the database files before proceeding (a sketch of the first method follows this list):
Backup/Restore - Backup on server2, restore on server1 (with a different name), then perform the insert as described above but without the server1 or server2 prefixes. Then reverse: backup on server1, restore on server2.
Detach/Attach - Rename the database, detach on server2, (compress), copy the files to server1, (decompress), attach on server1, perform the insert. Then reverse...
In either case, the SQL Server version could be a barrier. If server1 is of a lower SQL version, then both the backup and detach/attach methods will likely fail. This can be worked around by moving the server1 database to server2 instead, which may or may not be more suitable.
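A minimal sketch of the Backup/Restore route in T-SQL (the paths and logical file names are hypothetical; check the real ones with RESTORE FILELISTONLY):
-- On server2: back up the source database
BACKUP DATABASE database1 TO DISK = N'D:\Backups\database1.bak';
-- On server1: restore it under a temporary name, relocating the files
RESTORE DATABASE server2_database1
FROM DISK = N'D:\Backups\database1.bak'
WITH MOVE 'database1' TO N'D:\Data\server2_database1.mdf',
     MOVE 'database1_log' TO N'D:\Data\server2_database1_log.ldf';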
Other Methods
These may be suitable when environmental factors don't favor the previously mentioned SQL/T-SQL methods. And if you have the correct access (OLE DB drivers, etc.), these methods can also work between different types of RDBMS (i.e. non-SQL Server ones) and data sources (such as XML, flat files, Excel spreadsheets...):
SSIS explicitly, with Business Intelligence Development Studio - a direct datapump, or using a delimited-file intermediary.
SSIS implicitly, through SQL Server Management Studio: right-click database1 on server1 > Tasks > Export, then complete the wizard. May work directly to server2, or using a flat-file intermediary.
.NET programming with SqlBulkCopy (I believe the SSIS datapump uses such an object); I can go into more detail about this if it interests you.
E.g. of SqlBulkCopy (pseudo-C# code):
SqlConnection c = new SqlConnection("connectionStringForServer1Database1Here");
SqlConnection c2 = new SqlConnection("connectionStringForServer2Database1Here");
c.Open();
SqlCommand cm = new SqlCommand("select * from table1;", c);
using (SqlDataReader reader = cm.ExecuteReader())
{
    using (SqlBulkCopy bc = new SqlBulkCopy(c2))
    {
        c2.Open();
        bc.DestinationTableName = "table1";
        bc.WriteToServer(reader);
    }
}
Pretty cool, huh? If speed/efficiency is a concern, SqlBulkCopy-based approaches (such as SSIS) are the best.
Update - Modifying the destination table
If you need to update the destination table, I recommend that you:
Write to a staging table on the destination database (a temporary table, or a proper table that you truncate before and after the process; the latter is preferable). The former may be your only choice if you don't have CREATE TABLE rights. You can perform the transfer using any one of the above options.
Run a MERGE statement, as per your requirements, from the staging table into the destination table. This can insert, update and delete as required, very efficiently (a sketch follows below).
This whole process could be enhanced with a sliding window (changes since last checked), only taking recently changed rows from the source and applying them to the destination. This complicates the process, so you should at least accomplish the simpler version first. After completing a sliding-window version, you could run the full-update one periodically to ensure there are no errors in the sliding window.
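A minimal MERGE sketch under assumed names (the staging table table1_staging and the key/value columns id and col1 are hypothetical):
-- Upsert from the staging table into the destination, deleting removed rows
MERGE INTO dbo.table1 AS target
USING dbo.table1_staging AS source
    ON target.id = source.id
WHEN MATCHED THEN
    UPDATE SET target.col1 = source.col1
WHEN NOT MATCHED BY TARGET THEN
    INSERT (id, col1) VALUES (source.id, source.col1)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;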
To copy data between two different servers you have several options:
Use linked servers.
Use the data import export wizard.
Use a third party tool such as Red Gate SQL Data Compare.
Similar to Todd's C# SqlBulkCopy approach above.
Generally this is easier than creating linked servers.
Create a unit test and run the code below. If you have triggers, be careful; you will also need ALTER permissions.
[Test]
public void BulkCopy()
{
    var fromConnectionString = @"fromConnectionString";
    var destinationConnectionString = @"destConnectionString2";
    using (var testConnection = new SqlConnection(fromConnectionString))
    {
        testConnection.Open();
        var command = new SqlCommand("select * from MyTable;", testConnection);
        using (var reader = command.ExecuteReader())
        {
            using (var destinationConnection = new SqlConnection(destinationConnectionString))
            {
                using (var bc = new SqlBulkCopy(destinationConnection))
                {
                    destinationConnection.Open();
                    bc.DestinationTableName = "dbo.MyTable";
                    bc.WriteToServer(reader);
                }
            }
        }
    }
}
The best way to do this would be to create a linked server.
Then you can use the following four-part name in your INSERT statement to refer to the remote table:
[linkedserver].databasename.dbo.tablename
On Server A, add a linked server (B):
http://msdn.microsoft.com/en-us/library/ms188279.aspx
Then you can transfer data between the two.
Export table data from one SQL Server to another
HTH
First you need to add the linked server, e.g. between Server-1 and Server-2:
sp_addlinkedserver 'Server-2'
Then copy your data from that server to yours using the following query, run on Server-1:
select * INTO Employee_Master_bkp
FROM [Server-2].[DB_Live].[dbo].[Employee_Master]
If you need an alternative that doesn't use linked servers, my favorite option is the command-line BCP utility.
With this bulk-copy tool, you can export the data to a flat file, copy the file across the network, and import it (load it) onto the target server (a sketch follows below the link).
https://learn.microsoft.com/en-us/sql/tools/bcp-utility
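A hedged sketch of that export/copy/import round trip, reusing the table names from the previous answer (the paths are hypothetical; -T uses Windows authentication, -c selects character mode):
bcp DB_Live.dbo.Employee_Master out C:\temp\Employee_Master.dat -S Server-2 -T -c
bcp DB_Live.dbo.Employee_Master_bkp in C:\temp\Employee_Master.dat -S Server-1 -T -c
(The target table must already exist on the destination server before the in step.)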

Most efficient way to output SQL table to XML file

I have a server that needs to process and dump queries and tables from a SQL database into XML format on disk. This needs to be a scheduled task.
Currently I'm using BCP via a scheduled batch file > SQL script > xp_cmdshell > bcp, but this error for BCP output files is troubling me in the log files:
SQLState = S1000, NativeError = 0
Error = [Microsoft][SQL Server Native Client 10.0][SQL Server]Warning: Server data (172885 bytes) exceeds host-file field length (65535 bytes) for field (1). Use prefix length, termination string, or a larger host-file field size. Truncation cannot occur
I have found no solution online yet. I do not quite understand what the 'host-file field' refers to. The original table has no column with a value as large as 172885 bytes. The output files are very large, and so far it seems as though the data is all being written, but there seems to be some garbage at the end of all the XML files.
Performance is important but reliability is the most important for me in this situation.
I have tried recreating the error locally, but have been unsuccessful in doing so. The server runs Windows Server 2008 R2.
Any help or explanation/analysis of the error and its meaning, as well as a recommendation of a simple scheduled solution for dumping the SQL tables/queries to XML files, would be appreciated.
You should check out the FOR XML PATH syntax introduced in SQL Server 2005:
SQL Server: simple example of creating XML file with T-SQL
What's new in FOR XML in SQL Server 2005
With this, you can easily create fairly nifty XML output, including hierarchies, attributes and more.
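A minimal FOR XML PATH sketch (the table and column names are hypothetical):
SELECT o.OrderID   AS '@id',
       o.OrderDate AS 'OrderDate',
       o.Total     AS 'Total'
FROM dbo.Orders AS o
FOR XML PATH('Order'), ROOT('Orders')
Each row becomes an <Order id="..."> element with OrderDate and Total child elements, all wrapped in a single <Orders> root.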