Please help me clarify: can the pgbench tool execute my own SQL scenarios in parallel?
Googling and local searching brought no positive result.
I run the script and it executes with no errors, but after execution I see no signs that my script was actually performed.
Does pgbench commit the transaction containing my SQL script?
Here is the output I get:
C:\Program Files\PostgreSQL\9.2\bin>pgbench.exe -n -h dbserverhost -U postgres -T 10 -c 64 -j 8 bench_dbname -f c:\Dev\bench_script.sql
transaction type: TPC-B (sort of)
scaling factor: 1
query mode: simple
number of clients: 64
number of threads: 8
duration: 10 s
number of transactions actually processed: 1020
tps = 95.846561 (including connections establishing)
tps = 103.387127 (excluding connections establishing)
C:\Program Files\PostgreSQL\9.2\bin>
The SQL script bench_script.sql is:
--comment here
begin;
insert into schm.log values ( 'pgbench test', current_timestamp );
end;
SOLUTION
The Windows version of pgbench is sensitive to the order of the arguments passed to the utility:
the "bench_dbname" argument must be the last parameter on the line.
This is the correct example of pgbench Windows version command line:
pgbench.exe -d -r -h 127.0.0.1 -U postgres -T 5 -f C:\Dev\bench_script.sql -c 64 -j 8 postgres
The most useful arguments for me were:
-T 60 (time in seconds to run the script)
-t 100 (number of transactions each client runs)
-d (print detailed debug info to the output)
-r (include in the summary the latency calculated for every statement of the script)
-f (run a user-defined SQL script in benchmark mode)
-c (number of clients)
-j (number of threads)
pgBench official doc
PgBench, I love you! :)
Best wishes everybody ;)
The "transaction type: TPC-B (sort of)" means that it did not process the -f option to run your custom sql script, instead it ran the default query.
On Windows versions, getopt seems to stop parsing the options once it reaches the first one that does not start with a hyphen, i.e. "bench_dbname". So make sure -f comes before that.
I guess you also need the -n option, since you are using a custom script?
-n
--no-vacuum
Perform no vacuuming before running the test. This option is necessary if you are running a custom test scenario that does not include the standard tables pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers.
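Putting that together, the command from the question should run the custom script once -f (and every other option) comes before the database name; a corrected version of the original invocation, with the same host, script, and database, and -n kept as discussed:
pgbench.exe -n -h dbserverhost -U postgres -T 10 -c 64 -j 8 -f c:\Dev\bench_script.sql bench_dbname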
Related
I'm trying to run my query without the need to have SSMS open and running. Previously I've used this approach to connect to a SQL Server database via the SQLCMD utility to run batches. However, the query I am now using is in MDX, so I am not sure how that translates for a connection to an Analysis Services server. This is what I've used in the past to execute the query:
echo StartTimeStamp > "%~dp0\StartTimeStamp.txt"
sqlcmd -S businesspublish -d revcube -G -i "%~dp0\Step 1 - modify query.mdx" -o "%~dp0\Step 3 - Query results in CSV format.csv" -s"," -w 700 -I -t 28800 -h-1
echo EndTimeStamp > "%~dp0\EndTimeStamp.txt"
#set /p delExit=Press the ENTER key to exit...:
This was written by a former colleague and I'm trying to recycle it to execute the query with the following details:
Server name: Businesspublish
Database: RevCube
Query file name: Step 1 - modify query.mdx
Output file name: Step 3 - Query results in CSV format.csv
Any help would be highly appreciated. Thanks!
Try the ASCMD utility. The following link explains how you can use XMLA, MDX, and DMX on a cube:
https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ms365187(v=sql.100)?redirectedfrom=MSDN
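For what it's worth, based on the parameter list in that readme, an invocation for this case might look roughly like the sketch below; the flags are my reading of the linked doc (double-check them there), and ascmd emits XMLA rather than CSV, so QueryResults.xml is just a placeholder output name:
ascmd -S Businesspublish -d RevCube -i "Step 1 - modify query.mdx" -o QueryResults.xml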
I'm on Archlinux 64x (4.17.4-1-ARCH) with Docker (version 18.06.0-ce, build 0ffa8257ec). I'm using Microsoft's MSSQL docker container CU7. Each time I'm trying to enter a query or to run a SQL file I get this warning message:
Sqlcmd: Warning: The last operation was terminated because the user pressed CTRL+C.
Then when I check in the database with DataGrip, the query hasn't been executed! Here are my commands:
docker pull microsoft/mssql-server-linux:2017-CU7
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=GitGood*0987654321" -e "MSSQL_PID=Developer" -p 1433:1433 --name beep_boop_boop -d microsoft/mssql-server-linux:2017-CU7
# THIS
sudo echo "CREATE DATABASE test;" > /test.sql
docker exec beep_boop_boop /opt/mssql-tools/bin/sqlcmd -U SA -P GitGood*0987654321 < test.sql
# OR
docker exec beep_boop_boop /opt/mssql-tools/bin/sqlcmd -U SA -P GitGood*0987654321 -Q "CREATE DATABASE test;"
My question is: how do I avoid the "operation was terminated by the user" warning on MSSQL queries?
You should use docker-compose; I'm sure it will make your life easier. My guess is you're getting an error without actually knowing it. The first time I tried, I used an unsafe password which didn't meet the security requirements, and I got this error:
ERROR: Unable to set system administrator password: Password validation failed. The password does not meet SQL Server password policy requirements because it is not complex enough. The password must be at least 8 characters long and contain characters from three of the following four sets: Uppercase letters, Lowercase letters, Base 10 digits, and Symbols..
I see your password is strong, but note that you have a * in your password, which may be interpreted by the shell if not correctly escaped.
Or the server is just not started yet when you run your command, for example:
# example of a failing attempt
docker run -it --rm -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=GitGood*0987654321' -p 1433:1433 microsoft/mssql-server-linux:2017-CU7 bash
# wait until you're inside the container, then check if server is running
apt-get update && apt-get install -y nmap
nmap -Pn localhost -p 1433
If it's not running, you'll see something like this:
Starting Nmap 7.01 ( https://nmap.org ) at 2018-08-27 06:12 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000083s latency).
Other addresses for localhost (not scanned): ::1
PORT STATE SERVICE
1433/tcp closed ms-sql-s
Nmap done: 1 IP address (1 host up) scanned in 0.38 seconds
Enough with the intro, here's a working solution:
docker-compose.yml
version: '2'
services:
  db:
    image: microsoft/mssql-server-linux:2017-CU7
    container_name: beep-boop-boop
    ports:
      - 1433:1433
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: GitGood*0987654321
Then run the following commands and wait until the image is ready:
docker-compose up -d
docker-compose logs -f &
up -d to daemonize the container so it keeps running in the background.
logs -f will read logs and follow (similar to what tail -f does)
& to run the command in the background so we don't need to use a new shell
Now get a bash running inside that container like this:
docker-compose exec db bash
Once inside the container, you can run your commands:
/opt/mssql-tools/bin/sqlcmd -U SA -P $SA_PASSWORD -Q "CREATE DATABASE test;"
/opt/mssql-tools/bin/sqlcmd -U SA -P $SA_PASSWORD -Q "SELECT name FROM master.sys.databases"
Note how I reused the SA_PASSWORD environment variable here so I didn't need to retype the password.
Now enjoy the result
name
--------------------------------------------------------------------------------------------------------------------------------
master
tempdb
model
msdb
test
(5 rows affected)
For a proper setup, I recommend replacing the environment key with the following lines in docker-compose.yml:
env_file:
  - .env
This way, you can store your secrets outside of your docker-compose.yml and also make sure you don't track .env in your version control (you should add .env to your .gitignore and provide a .env.example in your repository with proper documentation).
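For illustration, the matching .env would simply carry the same two variables that were under environment (values copied from this answer):
ACCEPT_EULA=Y
SA_PASSWORD=GitGood*0987654321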
Here's an example project which confirms it works in Travis-CI:
https://github.com/GabLeRoux/mssql-docker-compose-example
Other improvements
There are probably other ways to accomplish this with one-liners, but for readability it's often better to use some scripts. In the repo, I took a few shortcuts, such as sleep 10 in run.sh. This could be improved by properly waiting until the db is up (a sketch follows below). The initialization script could also be part of an entrypoint.sh, etc. Hope it gets you started 🍻
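One way to replace that sleep 10, sketched on the assumption that it runs inside the container (where SA_PASSWORD is set and sqlcmd lives at the path used above), is to poll with a trivial query until the server answers:
# keep retrying until sqlcmd can connect and run a no-op query
until /opt/mssql-tools/bin/sqlcmd -U SA -P "$SA_PASSWORD" -Q "SELECT 1" > /dev/null 2>&1; do
  sleep 1
done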
I am testing Redis (version: 0.8.8.384) using the benchmark tool, and the redis-server.exe that is included in the zip package locally.
I used the following command to test the keyspace_length:
redis-benchmark -t set,get -n 4 -c 1 -d 888 -r 1000
I have managed to capture a trace (.pcap) locally using RawCap.exe.
What I have noticed is that the keys sent in the SET command do not match the keys in the GET command. I would expect the used keys to be stored somewhere locally and then retrieved by the GET command to interrogate the value for each random key.
Am I missing something?
Thanks in advance!
It seems this behavior is expected, since you can run a redis-benchmark for the GET command only:
redis-benchmark -t get -n 4 -c 1 -d 888 -r 1000
====== GET ======
4 requests completed in 0.00 seconds
1 parallel clients
888 bytes payload
keep alive: 1
100.00% <= 0 milliseconds
4000.00 requests per second
So each command specified in -t is tested independently.
Edit
You can pass a Lua script to test a set/get on the same key. Some thoughts after post-lunch research :)
You can turn on MONITOR in redis-cli before executing this to be sure of what is happening. IMPORTANT: this will kill your benchmark numbers, so only enable it to see the actual commands, using a small number of tests (e.g. redis-benchmark -n 10);
Since you're loading a Lua script, it will be executed atomically every time, as if the commands were in a MULTI/EXEC block;
You can lock a single random number to be used by both commands by specifying the __rand_int__ parameter AND -r 1000 (for example). The -r parameter defines the range of random integers used. __rand_int__ WON'T work if you don't specify the -r parameter (you can see this when monitoring);
After turning MONITOR off, you can see that for bigger -n values the simulation seems to be faster. Try with -n 10 and -n 1000 and see if this holds true.
Read https://redis.io/topics/benchmarks :)
The script:
redis-benchmark -r 10000 -n 1000 eval "redis.call('set',KEYS[1],'xpto') return redis.call('get', KEYS[1])" 1 __rand_int__
A sample MONITOR output:
1487868918.656881 [0 127.0.0.1:50561] "eval" "redis.call('set',KEYS[1],'xpto') return redis.call('get', KEYS[1])" "1" "000000009355"
1487868918.657032 [0 lua] "set" "000000009355" "xpto"
1487868918.657051 [0 lua] "get" "000000009355"
I am looking to wrap some Oracle components in a Bash script that will accomplish a set of goals:
Log into a remote server (where my Oracle DB resides) as root.
Perform an "su - oracle".
Log into the sqlplus environment as a specific Oracle user.
Perform a SQL select command and store its output in a variable.
Display the result of that variable in the Bash shell.
I have looked through a couple of examples here on Stack Overflow, many of which seem to go over executing a command but not necessarily detail how to display the output to the user (although I am still examining a few more). For example, assuming all key exchanges are set up beforehand, a method could be to use the following:
#!/bin/sh
ssh -q root@5.6.7.8
sqlplus ABC/XYZ@core <<ENDOFSQL
select CREATE_DATE from PREPAID_SUBSCRIBER where MSISDN='12345678912';
exit;
ENDOFSQL
Instead, here is how I tried to set this up:
#!/bin/sh
datasource_name=`echo "select CREATE_DATE from PREPAID_SUBSCRIBER where MSISDN='12345678912';" | ssh -q 5.6.7.8 "su - oracle -c 'sqlplus -S ABC/XYZ@core'" | tail -2 | head -1`
Ideally, the datasource_name variable should now either take on values:
no rows selected
Or if there is an entry within the table:
CREATE_DATE
-------------------
07-06-2009 18:04:48
The tail and head commands are to get rid of the empty lines in the output, and the ssh -q and sqlplus -S options are for ignoring warnings.
However, when I run that command, and do an:
echo "${datasource_name}"
I get...
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
...instead of one of the two outputs above. If I understand correctly, this is a warning that can be caused depending on whether a specific shell is used, but most online sources indicate that this can be ignored. The nice thing about this warning is that it appears my command above is actually running and storing "something" into datasource_name, but it just isn't what I want.
Now to simplify this problem, I noticed I get the same tty warning when I simply try to su to oracle on the remote machine from the box where the bash script runs:
ssh root@5.6.7.8 "su - oracle"
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
If I do the following, I actually get into the sqlplus environment successfully with no issues:
ssh -q root@5.6.7.8 "su - oracle -c 'sqlplus ABC/XYZ@core'"
SQL*Plus: Release 9.2.0.4.0 - Production on Tue May 29 12:35:06 2012
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release
10.2.0.4.0 - 64bit Production With the Partitioning, Real Application Clusters, OLAP, Data Mining and Real Application Testing options
SQL>
If I understand why the problem above is occurring, it is possible that I can figure out how to get my script to work properly. Any suggestions would be greatly appreciated! Thank you.
Change the first line to:
ssh -t root@5.6.7.8 "su - oracle"
to get a tty to see if that would work for you.
Another thing you can do in your script is redirect stderr into your variable as well, if you would like to capture it too; that does not appear to be the case for you, though I have done so in the past in some cases. See the example below.
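For example, appending 2>&1 to the ssh stage of the pipeline from the question folds stderr (including the tty warning) into the captured value; everything else is unchanged:
datasource_name=`echo "select CREATE_DATE from PREPAID_SUBSCRIBER where MSISDN='12345678912';" | ssh -q 5.6.7.8 "su - oracle -c 'sqlplus -S ABC/XYZ@core'" 2>&1 | tail -2 | head -1`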
This is a sample script for MySQL, but it can be easily edited for Oracle:
#!/bin/bash
remote=oracle@5.6.7.8
ssh -q -t $remote <<EOF
bash <<EOFBASH
mysql <<ENDOFSQL>/tmp/out
show databases;
ENDOFSQL
EOFBASH
EOF
scp $remote:/tmp/out /tmp/out
ds=$(</tmp/out)
cat <<EOF
START OUTPUT
$ds
END OUTPUT
EOF
rm /tmp/out
Tested, works well. Instead of using su - oracle, try to ssh directly as the oracle user ;)
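For the Oracle edit, presumably you would only need to swap the mysql heredoc in the script above for sqlplus, e.g. reusing the credentials and query from the question:
sqlplus -S ABC/XYZ@core <<ENDOFSQL>/tmp/out
select CREATE_DATE from PREPAID_SUBSCRIBER where MSISDN='12345678912';
ENDOFSQL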
I have some SQL scripts that I'm trying to automate. In the past I have used SQL*Plus and called the sqlplus binary manually from a bash script.
However, I'm trying to figure out if there's a way to connect to the DB and call the script from inside the bash script, so that I can insert a date and make the queries run relative to a certain number of days in the past.
I'm slightly confused. You should be able to call sqlplus from within the bash script. This may be what you were doing with your first statement.
Try executing the following within your bash script:
#!/bin/bash
echo Start Executing SQL commands
sqlplus <user>/<password> @file-with-sql-1.sql
sqlplus <user>/<password> @file-with-sql-2.sql
If you want to be able to pass data into your scripts, you can do it via SQL*Plus by passing arguments into the script:
Contents of file-with-sql-1.sql
select * from users where username='&1';
Then change the bash script to call sqlplus passing in the value
#!/bin/bash
MY_USER=bob
sqlplus <user>/<password> @file-with-sql-1.sql $MY_USER
You can also use a "here document" to do the same thing:
VARIABLE=SOMEVALUE
sqlplus connectioninfo << HERE
start file1.sql
start file2.sql $VARIABLE
quit
HERE
Here is a simple way of running MySQL queries in the bash shell:
mysql -u [database_username] -p[database_password] -D [database_name] -e "SELECT * FROM [table_name]"
(Note there is no space between -p and the password; with a space, mysql prompts for the password interactively.)
Maybe you can pipe the SQL query to sqlplus. It works for mysql:
echo "SELECT * FROM table" | mysql --user=username database
I've used the jdbcsql project on Sourceforge.
On *nix systems, this will create a csv stream of results to standard out:
java -Djava.security.egd=file:///dev/urandom -jar jdbcsql.jar -d oracledb_SID -h $host -p 1521 -U some_username -m oracle -P "$PW" -f excel -s "," "$1"
Note that adding -Djava.security.egd=file:///dev/urandom increases performance greatly
Windows commands are similar: see http://jdbcsql.sourceforge.net/
If you do not want to install sqlplus on your server/machine, then the following command-line tool can be your friend. It is a simple Java application; Java 8 is all you need in order to execute this tool.
The tool can be used to run any SQL from the Linux bash or Windows command line.
Example:
java -jar sql-runner-0.2.0-with-dependencies.jar \
-j jdbc:oracle:thin:@//oracle-db:1521/ORCLPDB1.localdomain \
-U "SYS as SYSDBA" \
-P Oradoc_db1 \
"select 1 from dual"
Documentation is here.
You can download the binary file from here.
As Bash doesn't have built-in SQL database connectivity, you will need to use some sort of third-party tool.