Running DB2 SQL from the shell command line does not finish execution

I am on a Unix server which is set up to remotely connect to another DB2 Unix server.
I was able to connect to DB2 using the following script:
db2 "connect to <server name> user <user name> using <pass>";
Then I ran the following command to save the results of the SQL to a file:
db2 "select * from <tablename>" > /myfile.txt
The script starts execution but never ends. I tried using -x before the select too, but the result is the same: it never finishes. The table is small and has only one record. When I forcefully end execution, the table header gets saved in the file along with the following error:
SQL0952N Processing was cancelled due to an interrupt. SQLSTATE=57014
Please help, I am stuck on this riddle.

You could monitor the connection and the output file in order to know what is happening.
Before you start monitoring, get the current application ID:
db2 "values SYSPROC.MON_GET_APPLICATION_ID()"
Open a second terminal, and execute db2top against your database. Check the current sessions (L) and take a look at your connection (the application ID from the previous step). If you see a Lock Wait status, it is because another connection has put a lock on that table, and it is not possible to read it concurrently.
db2top -d myDB
Try executing the same query with another isolation level:
db2 "select * from <tablename> WITH UR"
If that is the problem, you should analyze which other processes are running (modifying data) on the database.
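If you want to identify the blocking connection directly, here is a minimal sketch (myDB is a placeholder database name; the SYSIBMADM.MON_LOCKWAITS administrative view assumes DB2 9.7 or later):
db2pd -db myDB -locks
db2 "SELECT * FROM SYSIBMADM.MON_LOCKWAITS"
The first command lists the locks currently held or requested in the database; the second shows which application is waiting on which.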
Open another terminal, and do a
tail -f /myfile.txt
If you see the file changing, it just means the output is big. Just wait.

Related

How to log in to a PostgreSQL db after session kill (for copying a database)

I tried to copy a database within the same PostgreSQL server. I tried the query below:
CREATE DATABASE newdb WITH TEMPLATE originaldb OWNER dbuser;
And got the below error
ERROR: source database "originaldb" is being accessed by 1 other user
So, I executed the below command
SELECT pg_terminate_backend(pg_stat_activity.pid) FROM pg_stat_activity
WHERE pg_stat_activity.datname = 'originaldb' AND pid <> pg_backend_pid();
Now none of us are able to log in or connect back to the database.
When I provide the below command
psql -h 192.xx.xx.x -p 9763 -d originaldb -U postgres
It prompts for a password, and upon entering the password, it doesn't return any response.
May I understand why this happens? How can I connect back to the db? How do I restart the system or otherwise let us log back in?
Can someone help us with this?
It sounds like something is holding an access exclusive lock on a shared catalog, such as pg_database. If that is the case, no one will be able to log in until that lock gets released. I wouldn't think the session-killing code you ran would cause such a situation, though. Maybe it was just a coincidence.
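If you still have one session open somewhere, here is a hedged sketch to check for such a lock, using the standard pg_locks and pg_stat_activity views (host and port are copied from the question; the postgres maintenance database is an assumption, and if logins are fully blocked this command will hang too, which itself supports the diagnosis):
psql -h 192.xx.xx.x -p 9763 -d postgres -U postgres -c "
SELECT l.pid, l.mode, l.granted, a.state, a.query
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.locktype = 'relation'
  AND l.relation = 'pg_database'::regclass;"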
If you can't find an active session, you can try using system tools to figure out what is going on, like ps -efl|fgrep postgre. Or you can just restart the whole database instance, using whatever method you would usually use to do that, like pg_ctl restart -D <data_directory> or sudo service postgresql restart or some GUI method if you are on an OS that does that.

Large File export to postgreSQL

I need to load a 50 GB file of inserts into a table in PostgreSQL so I can count the time it takes to perform the inserts, but I can't find any way to load that file. Can someone help me?
If the file you have contains syntactically valid SQL (like INSERT statements), this is very straightforward using the command-line psql client that comes with a Postgres installation:
psql DATABASE_NAME < FILE_NAME.sql
You may also want to replace DATABASE_NAME with a connection string like postgres://user:pass@localhost/database_name.
This causes your shell to read the given file and pass it off to psql's stdin, which will cause it to execute commands against the database it's connected to.
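Since the stated goal is to time the inserts, here is a minimal sketch using the same placeholder names (-q suppresses informational output; -1 wraps the whole file in a single transaction, which is usually much faster for bulk INSERTs):
time psql -q -1 -f FILE_NAME.sql DATABASE_NAME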

Out of Memory Exception running a large Insert script

I need to execute a large INSERT script, 75 MB in size, against my database. I am using the built-in SQL command-line tool to run this script, but it still throws the same error: "There is insufficient system memory in resource pool 'internal' to run this query."
sqlcmd -S .\SQLEXPRESS -d TestDB -i C:\TestData.sql
How to resolve this memory issue, when the last resort of running the script through SQLCMD does not work?
Note - Increasing the Maximum Server Memory(in the Server Properties) did not resolve this problem.
I faced the same issue recently. What I did was add GO statements after every 1000 inserts. This worked perfectly for me.
The GO statement divides the script into separate batches, so every batch is treated as a separate insertion. Hope this helps you in some way.
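As an illustration only, the batched script would look something like this (the table and columns are placeholders, not taken from the question):
INSERT INTO TestData (Id, Val) VALUES (1, 'a');
-- ... inserts 2 through 1000 ...
GO
INSERT INTO TestData (Id, Val) VALUES (1001, 'b');
-- ... next batch of 1000 inserts ...
GO
Each GO ends a batch, so SQL Server parses and executes the script in smaller pieces instead of one 75 MB block.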

Different command lines used to extract tables from an SQL Server file into one that is usable by MySQL

What is the difference between these two command lines used to extract tables from a database into a form that can be used by MySQL?
C:> mysql -u user -p PASS database_name < ms.sql
And
mysql> source ms.sql ;
I used to do this with the former, and the database created contained all the information, but it didn't work; the second worked fine.
Second, for the first case an example of setting the default character set is given, but I found no example on the MySQL home page for the second case. I am thankful for any help available.
Both of these commands can be referred to as batch commands. I point out the differences between them below.
First Command
mysql -u user -p PASS database_name < ms.sql
The above command executes two things at a time: it logs in to MySQL, and it passes the script file to be executed using the OS I/O redirection operator '<'.
After executing this command, it displays the SQL result of the script and returns to the command prompt (it comes out of the SQL prompt).
If you do not pass a database name on the command line, the file must begin with a USE db_name command so the script can execute.
This way is useful when you want to execute a big script without logging in to mysql, and it is typically the most often used.
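On the character-set point raised in the question: for this first form, the option goes on the command line (utf8mb4 is just an example value):
mysql --default-character-set=utf8mb4 -u user -p database_name < ms.sql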
Second Command
mysql> source ms.sql;
The above command is an SQL-prompt command which executes the script in the given .sql file.
It is used when you are already at the MySQL prompt. After executing the script, it returns you to the MySQL prompt.
You can also use the shorthand \. ms.sql instead of source ms.sql.
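For the character-set point in this second form, the client's charset command (shortcut \C) can set the default character set from inside the prompt before sourcing the file:
mysql> charset utf8mb4
mysql> source ms.sql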
For more information, please refer to the MySQL reference manual: https://dev.mysql.com/doc/refman/5.7/en/mysql-batch-commands.html

How do I import a sql data file into SQL Server?

I have a .sql file and I am trying to import it into SQL Server 2008. What is the proper way to do this?
If your file is a large file, 50MB+, then I recommend you use sqlcmd, the command line utility that comes bundled with SQL Server. It is easy to use and it handles large files well. I tried it yesterday with a 22GB file using the following command:
sqlcmd -S SERVERNAME\INSTANCE_NAME -i C:\path\mysqlfile.sql -o C:\path\output_file.txt
The command above assumes that your server name is SERVERNAME, that your SQL Server installation uses the instance name INSTANCE_NAME, and that Windows authentication is the default authentication method. After execution, output_file.txt will contain something like the following:
...
(1 rows affected)
Processed 100 total records
(1 rows affected)
Processed 200 total records
(1 rows affected)
Processed 300 total records
...
Use readfileonline.com if you need to see the contents of huge files.
UPDATE
This link provides more command line options and details such as username and password:
https://dba.stackexchange.com/questions/44101/importing-sql-server-database-from-a-sql-file
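If your server uses SQL authentication rather than Windows authentication, the same command takes explicit credentials via -U and -P (the values below are placeholders):
sqlcmd -S SERVERNAME\INSTANCE_NAME -U myuser -P mypassword -i C:\path\mysqlfile.sql -o C:\path\output_file.txt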
If you are talking about an actual database (an .mdf file), you would attach it.
.sql files are typically run using SQL Server Management Studio. They are basically saved SQL statements, so they could be anything. You don't "import" them; more precisely, you "execute" them, even though the script may indeed insert data.
Also, to expand on Jamie F's answer, don't run a SQL file against your database unless you know what it is doing. SQL scripts can be as dangerous as unchecked .exe files.
Start SQL Server Management Studio
Connect to your database
File > Open > File and pick your file
Execute it
Try this process -
Open the Query Analyzer
Start --> Programs --> MS SQL Server --> Query Analyzer
Once it is open, connect to the database that you wish to run the script on.
Next, open the SQL file using File --> Open option. Select .sql file.
Once it is open, you can execute the file by pressing F5.
In order to import your .sql try the following steps
Start SQL Server Management Studio
Connect to your Database
Open the Query Editor
Drag and Drop your .sql File into the editor
Execute the import
A .sql file is a set of commands that can be executed against the SQL server.
Sometimes the .sql file will specify the database, other times you may need to specify this.
You should talk to your DBA or whoever is responsible for maintaining your databases. They will probably want to give the file a quick look. .sql files can do a lot of harm, even inadvertently.
See the other answers if you want to plunge ahead.
Get the names of the server and database in SSMS:
Run the following command in PowerShell or CMD:
sqlcmd -S "[SERVER NAME]" -d [DATABASE NAME] -i .\[SCRIPT].sql
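If you are not sure of the server name, you can also ask the instance itself; -Q runs a single query and exits (this sketch assumes a default local instance, hence no -S):
sqlcmd -Q "SELECT @@SERVERNAME AS server_name, DB_NAME() AS database_name"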
There is no such thing as "importing" in MS SQL. I understand what you mean, though, and it is simple: whenever you have a something.sql file, just double-click it and it will open directly in SQL Server Management Studio.