I have a shell script in which I have to execute two queries (on different databases), spool their results to text files, and finally call a C++ program that processes the information in those files. Something like this:
sqlplus user1/pw1@database1 @query1.sql
sqlplus user2/pw2@database2 @query2.sql
./process_db_output
Both queries take some time to execute. One of them can take up to 10 minutes, while the other one is usually faster. What I want to do is execute them simultaneously and when both are done, call the processing utility.
Any suggestion on how to do so?
Use & to background the queries, then use wait to wait for all subprocesses to finish, and then run the C++ program that processes the results.
Code:
#!/bin/bash
# first calling
sqlplus user1/pw1@database1 @query1.sql &
sqlplus user2/pw2@database2 @query2.sql &
#now waiting
wait
#done waiting
./process_db_output
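If you also want to catch a failed query before running the processor, you can record each background job's PID and wait on them individually to inspect the exit codes. A minimal sketch under the same hypothetical names as above; note that sqlplus itself only exits non-zero on SQL errors if the scripts contain something like WHENEVER SQLERROR EXIT FAILURE:
#!/bin/bash
# start both queries in the background, remembering each PID
sqlplus user1/pw1@database1 @query1.sql & pid1=$!
sqlplus user2/pw2@database2 @query2.sql & pid2=$!
# wait on each PID separately so we can check the individual exit codes
wait "$pid1" || { echo "query1 failed" >&2; exit 1; }
wait "$pid2" || { echo "query2 failed" >&2; exit 1; }
./process_db_output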
I need to simulate the below:
1. SSH (only once)
2. Execute a command on all the rows in a csv file at once.
The number of rows in the CSV file is dynamic. If there are 10, the command needs to be executed over all 10 rows in parallel.
I'm not sure about using the SSH Command Sampler here, since the SSH connection and the command are entered in the same sampler. How do I separate these, i.e. SSH only once and then execute the commands in parallel? Which JMeter components do I use here?
Note: increasing the number of threads is not an efficient option. When I do this, many sessions get created, which in turn hangs the terminal. This approach works fine up to 10 users; I'm not sure if there's a limit on the number of sessions.
Thanks for your support.
Regards,
Ajith
Why do you think that "increasing the number of Threads is not an efficient option"?
I would suggest moving the "SSH (only once)" part to a setUp Thread Group and putting the "execute a command on all the rows in a CSV file at once" bit under the normal Thread Group.
If the number of rows in the CSV file is dynamic - you can make the number of threads dynamic as well using __groovy() function like:
${__groovy(new File('/path/to/your/file.csv').readLines().size,)}
If you want to execute all 10 requests (or however many lines there are) at exactly the same moment, you can add a Synchronizing Timer.
The Redis docs seem to affirm that EVAL scripts are similar to MULTI/EXEC transactions.
In my own words, this means a Lua script guarantees two things:
sequential: the Lua script runs as if it were alone on the server; that's fine with me
atomic / one-shot writes: this I don't understand with Lua scripts. When is the "EXEC-like" step performed for Lua scripts? Because with scripts you can do conditional writes based on reads (or even on writes, since some write commands, like the NX variants, return values). So how can Redis guarantee that either all or nothing is executed with scripts? What happens if the server crashes in the middle of a script? Rollback is not possible with Redis.
(I don't have this concern with MULTI/EXEC on this second point, because with MULTI/EXEC you can't do writes based on previous commands.)
(Sorry for my basic English, I am French.)
Just tested it using this very slow script:
eval "redis.call('set', 'hello', 10); for i = 1, 1000000000 do redis.call('set', 'test', i) end" 0
^ This sets the hello key to 10, then sets the test key to a counter a billion times over, so the script effectively runs forever.
While executing the script, Redis logs this warning:
# Lua slow script detected: still in execution after 5194 milliseconds. You can try killing the script using the SCRIPT KILL command. Script SHA1 is: ...
So I then shut down the container entirely while the script was executing, to simulate a crash.
After the restart, the hello and test keys are nil, meaning that none of the called commands were actually executed. Thus, scripts are indeed atomic and crash-safe, as the docs state.
My belief is that Redis wraps the Lua scripts within a MULTI/EXEC to make them atomic, or at least it has the same effect.
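To illustrate the point, here is a made-up conditional-write script of exactly the kind the question worries about: it reads a key, then writes based on what it read (the key and values are arbitrary):
eval "local v = tonumber(redis.call('get', KEYS[1]) or '0') if v < tonumber(ARGV[1]) then redis.call('set', KEYS[1], ARGV[1]) end return v" 1 mykey 42
^ This sets mykey to 42 only if its current value is smaller, and returns the old value. Because the script runs atomically, no other client can change mykey between the GET and the SET; and per the crash test above, if the server dies mid-script neither command survives as a partial effect.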
I have a collection of SQL queries that need to run in a specific order using Teradata. How can this be done?
I've considered writing an application in some other language (like Python or C++) to sequentially call each query, but am unsure how to get live data there from Teradata. I also want to keep the queries as separate SQL files (like it is currently).
The goal is to minimize the need for human interaction, i.e. I want to hit "Run" and let it take care of the rest.
BTEQ scripts are your go-to solution.
Have each query, or at least each logical block of several statements, in a single BTEQ script.
Then create a script that calls BTEQ with the needed settings (i.e. the TD logon command), and have this script called from a batch file with parameters, like this:
start /wait C:\Teradata\BTEQ.bat Script_1.txt
start /wait C:\Teradata\BTEQ.bat Script_2.txt
start /wait C:\Teradata\BTEQ.bat Script_3.txt
pause
Then you can create several batch files, split into logical blocks, and have them executed at will or on a schedule.
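For illustration, the wrapper and the skeleton of one script could look roughly like this; the logon details, file names, and error handling are placeholders to adapt to your site:
rem --- C:\Teradata\BTEQ.bat (hypothetical wrapper): feed %1 to BTEQ, log output ---
@echo off
bteq < %1 > %1.log 2>&1
/* --- Script_1.txt (hypothetical): log on, run statements in order --- */
.LOGON mytdserver/myuser,mypassword
SELECT ...;
.IF ERRORCODE <> 0 THEN .EXIT 1
.LOGOFF
.EXIT 0
The start /wait in the batch file makes each BTEQ run finish before the next one begins, which is what preserves the required order.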
I would like to know if there is a way to queue my queries. I am doing some basic text matching in psql, and each query (which is saved in a different script) takes about 6 hours to run. I was wondering if there is a way to queue my scripts?
For example;
my database is called: "data"
my scripts are called: cancer, heart, death
and I am doing the following:
data; \i cancer;
data; \i heart;
data; \i death;
But I have to come back every so often to check whether it is still running, which doesn't seem very efficient.
I am new to PostgreSQL, so I appreciate any help.
This is the easiest/fastest solution I can think of, and it should work for your case ;)
When using psql from command line, you can start it with
-f filename
where filename is an SQL script. It will run the query and send the output to stdout, and you can also redirect this output to a file. Just put your queries into SQL files and you've got your own queuing.
Assuming you run Linux, you could use screen as a simple way to leave your session open when logging off for the night.
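For example, a minimal queue for the three scripts from the question (assuming they live in files named cancer.sql and so on; -v ON_ERROR_STOP=1 makes a failing script abort the chain):
# run the scripts back to back, logging each one's output to its own file
psql -d data -v ON_ERROR_STOP=1 -f cancer.sql > cancer.out 2>&1 &&
psql -d data -v ON_ERROR_STOP=1 -f heart.sql > heart.out 2>&1 &&
psql -d data -v ON_ERROR_STOP=1 -f death.sql > death.out 2>&1
Run that inside screen (or under nohup) and you can log off while it works through the queue.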
The easiest solution was to create a separate SQL file which runs through the commands sequentially.
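A sketch of that, reusing the script names from the question (the master file name is made up):
-- run_all.sql: each \i include starts only after the previous one finishes
\i cancer
\i heart
\i death
Then a single psql -d data -f run_all.sql (or \i run_all.sql from inside psql) runs the whole queue.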
Is there a way to display the output of a sqlplus command without having to first issue the spool off command?
I am spooling the results of a sqlplus session to a file while at the same time tailing the file. The reason for this is that for tables with very long rows the format is easier to look at in a file. The problem is that to see the output I have to issue the spool off command every time I run a command in sqlplus.
Can I configure sqlplus so that after I have issued the spool command all the output is viewable in the file straight away?
(Formatting the way the rows are displayed on the screen is not an option.)
Thanks
SPOOL is really intended for creating a file of SQL*Plus output, for whatever purpose: logging, input to another process, etc. There is no facility for inflight viewing of its output.
There are a number of ways of solving this particular problem, but the easiest is surely to use an IDE which includes a data browser, thus obviating the need to tail a file. There are a number on the market, including Quest's TOAD and Allround Automation's PL/SQL Developer, but if you don't want to spring for a license fee then you should have a look at Oracle's own (free) SQL Developer.
If you are spooling the results of multiple statements, you could turn spooling off and then back on between statements. When you turn spooling back on, add the APPEND keyword so that it continues in the same file rather than overwriting it.
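A quick sketch of that cycling; the file and table names here are placeholders, and SPOOL ... APPEND needs SQL*Plus 10g or later:
spool results.txt
select * from t1;
spool off
-- reopen the same file without overwriting what is already in it
spool results.txt append
select * from t2;
spool off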
If you want to see the results of one query in your spool file, you could break the query up into multiple queries that return specific ranges of the data. This would be slower, but you could cycle spooling to get faster feedback.
If your problem is that you can't open the output file (as the spool process has a lock on it) then try copying the file output to another file and opening that file instead.
Since it sounds like your real problem is formatting of output in SQLPlus -- can you make your SQLPlus window wider and SET LINESIZE so the output looks better in SQLPlus to start with? Then you might not need to spool at all.
I tried to add a comment but for some reason it doesn't save, so I'll try the "Answer your question" option :)
I do use SQL Developer, but there are situations where it is not available and I am stuck with plain old sqlplus.
There are other situations where I would use sqlplus over SQL Developer purely because it would take me half a minute to find what I am looking for in sqlplus rather than several minutes with SQL Developer, which takes ages to load.
I have checked how long it takes before the output is flushed, and it looks like it is flushed after a certain number of rows. Isn't there a way to reduce the buffer size so the rows are flushed sooner?
There is no problem opening the file; the problem is that even with the file open I can't see the output unless I issue the "spool off" command or the output has several hundred rows. I am using a free program called BareTail (http://www.baretail.com) to tail the spool file on Windows.
Thanks