Out of Memory Exception running a large Insert script - sql

I need to execute a large INSERT script (about 75 MB) against my database. I am using the built-in sqlcmd command-line tool to run the script, but it still throws the same error: "There is insufficient system memory in resource pool 'internal' to run this query."
sqlcmd -S .\SQLEXPRESS -d TestDB -i C:\TestData.sql
How can I resolve this memory issue when even the last resort of running the script through sqlcmd does not work?
Note: increasing the Maximum Server Memory setting (in Server Properties) did not resolve this problem.

I faced the same issue recently. What I did was add a GO statement after every 1000 inserts, and this worked perfectly for me.
The GO statement splits the script into separate batches, so every batch is treated as a separate insertion. Hope this helps you in some way.
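For illustration, the batched script ends up looking something like this (the table and column names here are made up):
INSERT INTO dbo.TestData (Id, Name) VALUES (1, 'a');
INSERT INTO dbo.TestData (Id, Name) VALUES (2, 'b');
-- ... roughly 1000 INSERT statements ...
GO
INSERT INTO dbo.TestData (Id, Name) VALUES (1001, 'c');
-- ... next batch of roughly 1000 INSERT statements ...
GO
Each GO forces the preceding batch to be parsed and executed on its own, so the server never has to handle the whole 75 MB script as a single batch.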

Related

Execute 50k INSERT queries in PostgreSQL with DBeaver

Is there any way to insert 50k records into a PostgreSQL database using DBeaver?
Locally it worked fine for me and took about 1 minute, because I also changed the memory settings of PostgreSQL and DBeaver. But against our development environment, the 50k queries did not work.
Is there a way to do this anyway, or do I need to split the queries and run, for example, 10k queries 5 times? Any trick?
EDIT: by "did not work" I mean I got an error after 2500 seconds saying something like "too much data ranges".
If you intend to execute a giant SQL script via the user interface: don't even try.
If you have a CSV file, DBeaver gives you an import tool.
Even better, as described in the comments, the COPY command is the right tool.
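For example, from the psql client you can load a CSV with \copy (client-side copy); the table, columns, and file path below are placeholders:
\copy my_table (col1, col2) FROM '/path/to/data.csv' CSV HEADER
The server-side COPY command works the same way, but then the file must be readable by the PostgreSQL server process itself.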
If you have a giant SQL file, you need to use the command line, for example:
psql -h host -U username -d myDataBase -a -f myInsertFile
Like in this post: Run a PostgreSQL .sql file using command line arguments

Import huge SQL file into SQL Server

I am using the sqlcmd utility to import a 7 GB SQL dump file into a remote SQL Server. The command I use is this:
sqlcmd -S IP address -U user -P password -t 0 -d database -i file.sql
After about 20-30 min the server regularly responds with:
Sqlcmd: Error: Scripting error.
Any pointers or advice?
I assume file.sql is just a bunch of INSERT statements. For a large number of rows, I suggest using the bcp command-line utility. This will perform orders of magnitude faster than individual INSERT statements.
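If you can get the data into a flat file first, a bcp call looks roughly like this (server, credentials, table, and file names are placeholders; -c means character mode, -t sets the field terminator, -b the batch size):
bcp database.dbo.YourTable in C:\data\YourTable.dat -S yourserver -U user -P password -c -t"," -b 10000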
You could also bulk insert data using the T-SQL BULK INSERT command. In that case, the file path needs to be accessible to the database server (i.e. a UNC path or a file copied to a drive on the server), and the SQL Server service account needs permission to read it. See http://msdn.microsoft.com/en-us/library/ms188365.aspx.
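A minimal sketch, assuming a comma-delimited file sitting on a UNC share the server can reach (all names are placeholders):
BULK INSERT dbo.YourTable
FROM '\\fileserver\share\YourTable.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', BATCHSIZE = 10000);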
Why not use SSIS? While I am certified as a DBA, I always try to use the right tool for the job.
Here are some reasons to use SSIS.
1 - You can still use fast load (bulk copy). Make sure you set the batch size.
2 - Error handling is much better.
However, if you are using fast-load, either the batch commits or it gets tossed.
If you are using single-record commits, you can redirect each error row to a separate destination.
3 - You can perform transformations on the source data before loading it into the destination.
In short: Extract, Transform, Load.
4 - SSIS loves memory and buffers. If you want to get really in depth, read some articles from Matt Mason or Brian Knight.
Last but not least, the LAN/WAN always plays a factor if the job is not running on the target server with the input file on a local disk.
If you are on the same backbone with a good pipe, things go fast.
In summary, yeah you can use BCP. It is great for little quick jobs. Anything complicated with robust error handling should be done with SSIS.
Good luck,

Running DB2 SQL from the shell command line never finishes execution

I am on a Unix server which is set up to connect remotely to another DB2 Unix server.
I was able to connect to DB2 using the following script:
db2 "connect to <server name> user <user name> using <pass>";
Then I ran the following command to save the results of the SQL to a file:
db2 "select * from <tablename>" > /myfile.txt
The script starts executing but never ends. I tried using the -x option before the SELECT too, but the same thing happens: it never finishes. The table is small and has only one record. When I forcefully end execution, the table header gets saved in the file along with the following error:
SQL0952N Processing was cancelled due to an interrupt. SQLSTATE=57014
Please help, I am stuck on this riddle.
You could monitor the connection and the output file in order to know what is happening.
Before starting the monitoring, get the current application ID:
db2 "values SYSPROC.MON_GET_APPLICATION_ID()"
Open a second terminal and run db2top against your database. Check the current sessions (L) and look for your connection (the application ID from the previous step). If you see a Lock Wait status, it is because another connection holds a lock on that table, and it is not possible to read it concurrently.
db2top -d myDB
Try executing the same query with another isolation level:
db2 "select * from <tablename> WITH UR"
If that is the problem, you should analyze which other processes are running (modifying data) on the database.
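For example, on DB2 9.7 or later the SYSIBMADM.MON_LOCKWAITS administrative view shows which connections are waiting on locks and which connections hold them (assuming you have the privileges to query it):
db2 "SELECT * FROM SYSIBMADM.MON_LOCKWAITS"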
Open another terminal, and do a
tail -f /myfile.txt
If you see the file is changing, it is just because the output is too big. Just wait.

How to execute big SQL files on SQL Server?

I have about 50 T-SQL files; some of them are 30 MB, but some are 700 MB. I thought about executing them manually, but if a file is bigger than about 10 MB, SQL Server Management Studio throws an out-of-memory exception.
Any ideas?
You can try the sqlcmd command-line tool, which may have different memory limits.
http://msdn.microsoft.com/en-us/library/ms162773.aspx
Usage example:
sqlcmd -U userName -P myPassword -S MYPCNAME\SQLEXPRESS -i myCommands.tsql
If you have that much data, wouldn't it be a lot easier and smarter to put that data into e.g. a CSV file and then bulk import it into SQL Server?
Check out the BULK INSERT command. It allows you to quickly and efficiently load large data volumes into SQL Server, much better than such a huge SQL file!
The command looks something like:
BULK INSERT dbo.YourTableName
FROM 'yourfilename.csv'
WITH (FIELDTERMINATOR = ';',
      ROWTERMINATOR = ' |\n')
or whatever format you might have to import.
Maybe this is too obvious, but... did you consider writing a program to loop through the files and call SqlCommand.ExecuteNonQuery() for each line? It's almost trivial.
Less obvious advantages:
You can monitor the progress of the feed (which is going to take some time)
You can throttle it (in case you don't want to swamp the server)
You can add a little error handling in case there are problems in the input files

Are there any programs that will shrink the size of a sql script file?

I have a SQL script which is extremely large (about 700 megabytes). I am wondering if there is a good way to reduce the size of the script?
I know there are code minimizers for JavaScript and am looking for one to use with SQL scripts.
I am not looking to improve the performance of the SQL script; I am trying to make the file size smaller, by removing excess whitespace and keeping name qualification down.
If I attempt to load the file in SQL Server Management Studio I get this error.
Not enough storage is available to process this command. (Exception from HRESULT: 0x80070008) (mscorlib)
What's in this 700 MB script?! I would hope that there are some similarities/repetitions that would allow you to shorten the file.
Just some guesses:
Instead of inserting a million records using Insert statements, use a bulk loading tool
Instead of updating a number of individual records, try to batch updates to the same value into one statement (e.g. UPDATE tab SET col=1 WHERE id IN (..) instead of individual updates; see the sketch after this list)
Long manipulations can be defined as a stored procedure (created before running the script), so the script only has to call the stored proc
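For example, the batched-update idea from the second point, with made-up table and column names:
-- instead of many single-row updates:
UPDATE tab SET col = 1 WHERE id = 10;
UPDATE tab SET col = 1 WHERE id = 11;
UPDATE tab SET col = 1 WHERE id = 12;
-- one batched statement:
UPDATE tab SET col = 1 WHERE id IN (10, 11, 12);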
Of course, splitting the script up into smaller portions and calling each one from a simple batch file would work too. But I'd be a little worried about performance (how long does the execution take?!) and would look for some faster ways.
What about breaking your script into several small files, and calling those files from a single master script?
This link describes how to do it from a stored procedure.
Or you can do it from a batch file like this:
REM =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
REM Define widely-used variables up here so they can be changed in one place
REM Search for "sqlcmd.exe" and make sure this path is valid for you
REM =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
set sqlcmd="C:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqlcmd.exe"
set uname="your_uname_here"
set pwd="your_pwd_here"
set database="your_db_name_here"
set server="your_server_name_here"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script1.sql"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script2.sql"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script3.sql"
pause
I like the batch file approach myself, because it is easier to tinker with, and you can schedule it as a Windows job.
Make sure the .BAT file is in a folder with the appropriate security restrictions, since it contains your credentials in plain text.
gzip should do.
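For example (this replaces script.sql with the compressed script.sql.gz; the file name is a placeholder):
gzip -9 script.sql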
SQL is much harder to shrink: the field names, table names, and commands need to stay what they are. Plus, you wouldn't want to rewrite the commands as something shorter, because it could have performance implications.
Depending on the DBMS that you use, it may allow short names for commands, and then there might be a converter.
(Answering this because it is the top item returned when I searched for "SQL script size")
I got the same error when trying to load a large script into Management Studio. In my case I was trying to downgrade a database from SQL 2008 R2 to SQL 2008 by using the SQL Server script generator, which created a 700 MB structure-and-data .sql file.
To get around it I used the command line to run the script instead:
C:>sqlcmd -S [SQLSERVER\INSTANCE] -i [FILELOCATION\FILENAME].sql
Hopefully this helps someone else.
Compressing the SQL file will give the best compression ratio.
Minifying the text of the SQL file will only save a few bytes/kilobytes per megabyte; it is not worth it.
A better approach is to create a "function" that unzips and reads the file. That gives the most benefit, I guess.
Today, filesize shouldn't be a problem. Dial-up connection? Floppy disks?
pg-minify can do it, and not just for PostgreSQL but for most SQL dialects, including MS SQL Server, MySQL, etc.