PostgreSQL and queue commands - sql

I would like to know if there is a way to queue my queries. I am doing some basic text matching in psql, and each query (each saved in a different script) takes about 6 hours to run. Is there a way to queue my scripts?
For example:
my database is called "data"
my scripts are called cancer, heart, death
and I am doing the following:
data=# \i cancer
data=# \i heart
data=# \i death
But I have to come back every so often to check whether it is still running, which doesn't seem very efficient.
I am new to PostgreSQL, so I appreciate any help.

This is the easiest/fastest solution I can think of, but it should work for your case ;)
When using psql from the command line, you can start it with
-f filename
where filename is a SQL script. It will run the queries and send the output to stdout, which you can also redirect to a file. Just put your queries into that SQL file and you have your own queue.
Assuming you run Linux, you could use screen as a simple way to leave your session open when logging off for the night.
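A minimal sketch of that approach, assuming the three scripts are plain SQL files named cancer.sql, heart.sql and death.sql, the database is called data, and psql can authenticate without prompting:

#!/bin/bash
# run_queue.sh - run each script in order; each waits for the previous one to finish
psql -d data -f cancer.sql > cancer.log 2>&1
psql -d data -f heart.sql  > heart.log  2>&1
psql -d data -f death.sql  > death.log  2>&1

Start it inside screen (or with nohup ./run_queue.sh &) and you can log off while the queue keeps running.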

The easiest solution was to create a separate SQL file that runs through the scripts sequentially.
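For example, a minimal sketch of such a file, assuming the scripts are named cancer.sql, heart.sql and death.sql and live in the directory psql is started from:

-- run_all.sql: psql executes each included script in order
\i cancer.sql
\i heart.sql
\i death.sql

Then a single \i run_all.sql at the data prompt (or psql -d data -f run_all.sql from the shell) runs the whole queue.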

Related

How can I most simply automate an Oracle query in Windows?

I need to run the same damn Oracle query daily (and export the output, which I can do from the SQL file itself). I'd like to automate it using Windows Task Scheduler, but it only opens the script; it doesn't run it.
Is this feasible, or is there an easier way?
Your description isn't very detailed (we don't know exactly how you set it up), but here's a suggestion:
Create a .bat script (let's call it a.bat) which will call your .sql script (let's call it a.sql). Only one line in it:
sqlplus scott/tiger@orcl @a.sql
a.sql is ... well, whatever it is; for example:
prompt Number of rows in EMP table
select count(*) from emp;
Create a job in Task Scheduler which will start a program (a.bat).
I've just tried it (using that simple example I posted above) and got the result (saying that there are 14 rows in the table), so I hope you'll get it as well.
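For completeness, a slightly fuller sketch of the same idea; the credentials and paths are placeholders, and the EXIT at the end is worth adding so the SQL*Plus session closes and the scheduled task can actually finish:

rem a.bat - called by Task Scheduler
sqlplus -s scott/tiger@orcl @C:\scripts\a.sql > C:\scripts\a.log

-- a.sql
prompt Number of rows in EMP table
select count(*) from emp;
exit;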

Sequential Teradata Queries

I have a collection of SQL queries that need to run in a specific order using Teradata. How can this be done?
I've considered writing an application in some other language (like Python or C++) to call each query sequentially, but I'm unsure how to get live data from Teradata there. I also want to keep the queries as separate SQL files (as they are currently).
The goal is to minimize the need for human interaction, i.e. I want to hit "Run" and let it take care of the rest.
BTEQ scripts are your go-to solution.
Have each query, or at least each logical block of statements, in a single BTEQ script.
Then create a script that calls BTEQ with the needed settings (i.e. the Teradata logon command), and have that script called from a batch file with parameters like this:
start /wait C:\Teradata\BTEQ.bat Script_1.txt
start /wait C:\Teradata\BTEQ.bat Script_2.txt
start /wait C:\Teradata\BTEQ.bat Script_3.txt
pause
Then you can create several batch files, split into logical blocks, and execute them at will or on a schedule; a rough sketch of the pieces follows.
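A hedged sketch of the two pieces; the server name (tdpid), credentials, database, and table are placeholders. BTEQ.bat could be as simple as:

rem BTEQ.bat - feed the script passed as the first argument to bteq and log its output
bteq < %1 > %1.log 2>&1

and Script_1.txt might contain something like:

.LOGON tdpid/your_user,your_password
SELECT COUNT(*) FROM my_db.my_table;
.IF ERRORCODE <> 0 THEN .QUIT 1
UPDATE my_db.my_table SET processed_flag = 1;
.LOGOFF
.QUIT 0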

mysql optimized query execution

As part of ongoing research work, I am checking whether a URL exists using the cURL command. I have been running a shell script for a couple of days, and it does an update for each URL in my database. However, the script only updates around 100,000 rows a day.
I was thinking that if I wrote the values to a file first and then did the updates, the execution might be faster.
I am connecting to the database using the command line.
mysql -h servername -u username -ppassword databasename -e "Update Query"
For example, instead of connecting to the database 2 million times like above from the command line and updating 2 million rows, I am planning to connect to the database only once from the command line and update 2 million rows from the file.
So is the second approach better than the first, or would the time difference be negligible?
Three approaches.
You could use LOAD DATA INFILE.
You could build up a .sql file with all of the updates you need.
You could use something other than the CLI to check the URLs and update the DB. In other words, not the "curl" and "mysql" commands, but a real programming language with libraries for checking URLs and updating databases.
Any of those would probably be faster, though you'll likely gain more by making the HTTP calls in parallel, which is easier with a real programming language. A sketch of the first two ideas follows.
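The sketch below combines the first two suggestions; the table urls(url, is_alive), the staging table, and the file path are invented for illustration, and LOAD DATA LOCAL INFILE assumes local_infile is enabled on both the client and the server:

-- staging table for the curl results (one tab-separated line per URL)
CREATE TABLE url_check_results (url VARCHAR(2048), is_alive TINYINT);

-- load the whole results file in one go
LOAD DATA LOCAL INFILE '/tmp/url_results.tsv'
INTO TABLE url_check_results
FIELDS TERMINATED BY '\t'
(url, is_alive);

-- one set-based update instead of millions of single-row updates
UPDATE urls u
JOIN url_check_results r ON r.url = u.url
SET u.is_alive = r.is_alive;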

Oracle SQLPlus: How to display the output of a sqlplus command without having to first issue the spool off command?

Is there a way to display the output of a sqlplus command without having to first issue the spool off command?
I am spooling the results of a SQL*Plus session to a file while tailing the file at the same time. The reason is that, for tables with very long rows, the output is easier to read from a file. The problem is that, to see the output, I have to issue the spool off command every time I run a command in SQL*Plus.
Can I configure SQL*Plus so that, once I have issued the spool command, all the output is viewable in the file straight away?
(Formatting the way the rows are displayed on the screen is not an option.)
Thanks
SPOOL is really intended for creating a file of SQL*Plus output for whatever purpose: logging, input to another process, etc. There is no facility for in-flight viewing of its output.
There are a number of ways of solving this particular problem, but the easiest is surely to use an IDE which includes a data browser, thus obviating the need to tail a file. There are a number on the market, including Quest's TOAD and Allround Automation's PL/SQL Developer, but if you don't want to spring for a license fee then you should have a look at Oracle's own (free) SQL Developer.
If you are spooling the results of multiple statements you could turn spooling off and then turn it back on between each statement. When you turn spooling back on add the append keyword so that it will continue in the same file rather than overwriting it.
If you want to see the results of one query in your spool file, you could break the query up into multiple queries that return specific ranges of the data. This would be slower, but you could cycle spooling to get faster feedback.
If your problem is that you can't open the output file (as the spool process has a lock on it) then try copying the file output to another file and opening that file instead.
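A small sketch of the spool ... append cycle mentioned above, assuming SQL*Plus 10g or later (where the APPEND keyword is available); the file and table names are placeholders:

spool results.txt
select * from first_table;
spool off

spool results.txt append
select * from second_table;
spool off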
Since it sounds like your real problem is the formatting of output in SQL*Plus: can you make your SQL*Plus window wider and SET LINESIZE so the output looks better in SQL*Plus to start with? Then you might not need to spool at all.
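If that route works for you, the relevant SQL*Plus settings look something like this (the values and the column name are only examples):

set linesize 250
set pagesize 100
column long_description format a60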
I tried to add a comment, but for some reason it doesn't save, so I'll try the "Answer your question" option :)
I do use SQL Developer, but there are situations where it is not available and I am stuck with plain old SQL*Plus.
There are other situations where I would use SQL*Plus over SQL Developer simply because it takes me half a minute to find what I am looking for in SQL*Plus rather than several minutes in SQL Developer, which takes ages to load.
I have checked how long it takes before the output is flushed, and it looks like it is flushed after a certain number of rows. Isn't there a way to reduce the buffer so that rows are flushed out quicker?
There is no problem opening the file; the problem is that even with the file open I can't see the output unless I issue the "spool off" command or the output reaches several hundred rows. I am using a free program called BareTail (http://www.baretail.com) to tail the spool file on Windows.
Thanks

Are there any programs that will shrink the size of a sql script file?

I have a SQL script which is extremely large (about 700 megabytes). I am wondering if there is a good way to reduce the size of the script?
I know there are code minimizers for JavaScript and am looking for one to use with SQL scripts.
I am not looking to improve the performance of the SQL script; I am trying to make the file size smaller by removing excess whitespace, keeping name qualification down, and so on.
If I attempt to load the file in SQL Server Management Studio I get this error.
Not enough storage is available to process this command. (Exception from HRESULT: 0x80070008) (mscorlib)
What's in this 700 MB script?! I would hope there are some similarities/repetitions that would allow you to shorten the file.
Just some guesses:
Instead of inserting a million records with individual INSERT statements, use a bulk loading tool.
Instead of updating individual records one at a time, batch updates to the same value into one statement (e.g. UPDATE tab SET col=1 WHERE id IN (..) instead of individual updates), as sketched just below this list.
Long manipulations can be defined as a stored procedure (created before running the script), and the script would then only have to call the stored proc.
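As a hedged illustration of the batching idea from the second suggestion (the table, column, and ids are invented):

-- instead of many single-row statements:
UPDATE tab SET col = 1 WHERE id = 101;
UPDATE tab SET col = 1 WHERE id = 102;
UPDATE tab SET col = 1 WHERE id = 103;

-- one statement covering the same rows:
UPDATE tab SET col = 1 WHERE id IN (101, 102, 103);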
Of course, splitting the script up into smaller portions and calling each one from a simple batch file would work too. But I'd be a little worried about performance (how long does the execution take?!) and would look for some faster ways.
What about breaking your script into several small files, and calling those files from a single master script?
This link describes how to do it from a stored procedure.
Or you can do it from a batch file like this:
REM =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
REM Define widely-used variables up here so they can be changed in one place
REM Search for "sqlcmd.exe" and make sure this path is valid for you
REM =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
set sqlcmd="C:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqlcmd.exe"
set uname="your_uname_here"
set pwd="your_pwd_here"
set database="your_db_name_here"
set server="your_server_name_here"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script1.sql"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script2.sql"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script3.sql"
pause
I like the batch file approach myself, because it is easier to tinker with, and you can schedule it as a Windows job.
Make sure the .BAT file is in a folder with the appropriate security restrictions, since it contains your credentials in plain text.
gzip should do.
SQL is much harder to shrink; the field names, table names, and commands need to be what they are. Plus, you wouldn't want to just rewrite the commands as something shorter, because that could have implications for performance.
Depending on the DBMS that you use, it may allow short names for commands, and then there might be a converter.
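A minimal shell sketch of the gzip route (the file name is a placeholder, and -k, which keeps the original file, needs a reasonably recent gzip):

gzip -k -9 huge_script.sql      # writes huge_script.sql.gz next to the original
gunzip -k huge_script.sql.gz    # restores the .sql when you need to run it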
(Answering this because it is the top item returned when I searched for "SQL script size")
I got the same error when trying to load a large script into Management Studio. In my case I was trying to downgrade a database from SQL Server 2008 R2 to SQL Server 2008 using the SQL Server script generator, which created a 700 MB structure-and-data .sql file.
To get around it I used the command line to run the script instead:
C:\>sqlcmd -S [SQLSERVER\INSTANCE] -i [FILELOCATION\FILENAME].sql
Hopefully this helps someone else.
Compressing the SQL file will give by far the best reduction in size.
Minifying the text of the SQL file will only save a few bytes or kilobytes per megabyte, which is not worth it.
A better approach is to create a "function" that unzips and reads the file; that gives the biggest benefit, I guess.
Today, filesize shouldn't be a problem. Dial-up connection? Floppy disks?
pg-minify can do it, and not just for PostgreSQL but for most SQL dialects, including MS SQL Server, MySQL, etc.