What is the meaning of -- -j 8 in a cmake command?

This is similar to the question What does --target option mean in CMake?, but the answer there doesn't explain anything about "-- -j 8".
What does it actually do?

The -- tells cmake to pass everything that follows on to the native build tool, which here is probably make. For make, the -j option sets the number of simultaneous jobs to run:
-j [jobs], --jobs[=jobs]
Specifies the number of jobs (commands) to run simultaneously. If there is more than one -j option, the last one is effective. If the -j option is given without an argument, make will not limit the number of jobs that can run simultaneously.
This allows make to use multiple processes to run different build steps at the same time, likely reducing build time.

Normally, make will execute only one recipe at a time, waiting for it to finish before executing the next. However, the ‘-j’ or ‘--jobs’ option tells make to execute many recipes simultaneously.

When given to cmake --build directly (without the -- separator), the -j option tells cmake itself to run up to N separate jobs at the same time.
From the cmake build options documentation:
The maximum number of concurrent processes to use when building.
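To make the two forms concrete, here is a hedged sketch (the build directory and job count are placeholders):
$ cmake --build . -- -j 8         # everything after -- is handed to the native tool, i.e. make -j 8
$ cmake --build . -j 8            # CMake 3.12+: cmake forwards the parallelism setting itself
$ cmake --build . --parallel 8    # long form of the same option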

Related

JMeter - Execute SSH Commands in parallel

I need to simulate the below:
1. SSH (only once)
2. Execute a command on all the rows in a csv file at once.
The number of rows in the CSV file is dynamic; if there are 10, the command needs to be executed over all 10 rows in parallel.
I am not sure about using the SSH Command Sampler here, since the SSH connection and the command have to be entered in the same sampler. How do I separate these, i.e. SSH only once and then execute the commands in parallel? Which JMeter components should I use?
Note: Increasing the number of threads is not an efficient option; doing so creates many sessions, which in turn hangs the terminal. This approach works fine up to 10 users, and I am not sure if there is a limit on the number of sessions.
Why do you think that increasing the number of threads is not an efficient option?
I would suggest moving the "SSH (only once)" step to a setUp Thread Group and putting the "execute a command on all the rows in a csv file at once" bit under the normal Thread Group.
If the number of rows in the CSV file is dynamic, you can make the number of threads dynamic as well using the __groovy() function, like:
${__groovy(new File('/path/to/your/file.csv').readLines().size,)}
If you want to execute all 10 requests (or however many lines there are) at exactly the same moment, you can add a Synchronizing Timer.

Redis benchmark command - does it allow defining value contents and data types?

I am exploring the Redis benchmark tool and I looked at this web page: How fast is Redis?
I picked the command below as an example:
$ ./redis-benchmark -r 1000000 -n 2000000 -t get,set,lpush,lpop -q
I see that we can specify the type of operations, such as get, set, lpush etc. However, how do we know what datatypes are being used for values in these operations? Also, is there a way to specify your own data that can be consumed by the benchmark command?
how do we know what datatypes are being used for values in these operations?
By default, xxx is used as the value.
is there a way to specify your own data that can be consumed by the benchmark command?
You can use the command line argument -d to specify the length of the value, e.g. redis-benchmark -d 10 uses xxxxxxxxxx, i.e. 10 characters, as the value.
You can explicitly specify the command that you want to test, e.g. use redis-benchmark -n 10000 SET your-key your-value to run SET command 10000 times.
Run redis-benchmark --help to see other options.
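As a hedged illustration (key and value names are placeholders), the options above combine like this:
$ redis-benchmark -d 10 -n 100000 -t set -q                       # values are xxxxxxxxxx (10 characters)
$ redis-benchmark -n 10000 SET your-key your-value                # run an explicit command 10000 times
$ redis-benchmark -r 1000000 -n 10000 SET key:__rand_int__ val    # -r randomizes the __rand_int__ placeholder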

Redis mass insertions on remote server

I have a remote server running Redis, and I want to push a lot of data to it from a Java application. Until now I used Webdis to push one command at a time, which is not efficient, but I did not have any security issues because I could define which IPs were accepted as connections and which command authorizations applied, while Redis itself was not accepting requests from outside (protected mode).
I want to try Jedis (a Java API) and its pipelining support for faster insertion, but that means I have to open up my Redis server to accept requests from outside.
My question is this: is it possible to use Webdis in a similar way (pipelined mass insertion)? And if not, what security configuration do I need to use something like Jedis over the internet?
IMO the security setup should be transparent to the Redis driver. No driver or password protection will be as secure as protocols or technologies designed specifically for this.
The simplest way I'd handle the security is to let Redis listen on 127.0.0.1:<some port> and use an SSH tunnel to the machine. At least this way you can test the performance against your current scenario.
You can also use IPsec or OpenVPN afterwards to set up a private network that can communicate with the Redis server.
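A minimal sketch of the SSH-tunnel approach, assuming Redis listens on 127.0.0.1:6379 on the server and user@redis-host is a placeholder for your account there:
$ ssh -N -L 6379:127.0.0.1:6379 user@redis-host
Jedis (or any other client) then connects to localhost:6379 on your side, and the traffic is forwarded over the encrypted tunnel.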
This question is almost 4 years old so I hope its author has moved on by now, but in case someone else has the same issue I thought I might suggest a way to send data to Webdis more efficiently.
You can indeed make data ingest faster by batching your inserts, meaning you can use MSET to insert multiple keys in a single request (or HMSET for hashes, etc).
As an example, here's ApacheBench (ab) inserting one key 100,000 times using 100 clients:
$ ab -c 100 -n 100000 -k 'http://127.0.0.1:7379/SET/foo/bar'
[...]
Requests per second: 82235.15 [#/sec] (mean)
We're measuring 82,235 single-key inserts per second. Keep in mind that there's a lot more to HTTP benchmarking than just looking at averages (the latency distribution is still important, etc.) but this example is only about showing the difference that batching can make.
You can send commands to Webdis in one of three ways (documented here):
GET /COMMAND/arg0/.../argN
POST / with COMMAND/arg0/.../argN in the HTTP body (demonstrated below)
PUT /COMMAND/arg0/.../argN-1 with argN in the HTTP body
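For instance, the GET form can be exercised directly with curl (assuming Webdis on its default 127.0.0.1:7379); the reply comes back as JSON:
$ curl http://127.0.0.1:7379/SET/hello/world
$ curl http://127.0.0.1:7379/GET/hello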
If instead of inserting one key per request we create a file containing the MSET command to write 100 keys in a single request, we can significantly increase the write rate.
# first showing what the command looks like for 3 keys
$ echo -n 'MSET' ; for i in $(seq 1 3); do echo -n "/key-${i}/value-${i}"; done
MSET/key-1/value-1/key-2/value-2/key-3/value-3
# then saving the command to write 100 keys to a file:
$ (echo -n 'MSET' ; for i in $(seq 1 100); do echo -n "/key-${i}/value-${i}"; done) > batch-contents.txt
With this file, we can use ab to send this multi-insert file as a POST request (-p) to Webdis:
$ ab -c 100 -n 10000 -k -p ./batch-contents.txt -T 'application/x-www-form-urlencoded' 'http://127.0.0.1:7379/'
[...]
Requests per second: 18762.82 [#/sec] (mean)
This is showing 18,762 requests per second… with each request performing 100 inserts, for a total of 1,876,282 actual key inserts per second.
If you track the CPU usage of Redis while ab is running, you'll find that the MSET use case pegs it at 100% CPU while sending individual SET does not.
Once again keep in mind that this is a rough benchmark, just enough to show that there is a significant difference when you batch inserts. This is true regardless of whether Webdis is used, by the way: batching inserts from a client connecting directly to Redis should also be much faster than individual inserts.
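If ab isn't handy, the same batched insert can be sent once with curl as a sanity check (a hedged sketch; it assumes the batch-contents.txt file generated above and Webdis on 127.0.0.1:7379):
$ curl -s -d @batch-contents.txt http://127.0.0.1:7379/
The -d flag posts the file contents (MSET/key-1/value-1/...) as the request body, which Webdis executes as a single MSET.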
Note: I am the author of Webdis.

Calculating time of a job in HPC

I'm starting to use cloud resources.
In my project I need to run a job and then calculate the time between the beginning of the job's execution in the queue and the end of the job.
To put the job in the queue, I used the command:
qsub myjob
How can I do this?
The simplest (although not the most accurate) way is to get the report from your queue system. If you use PBS (which is my first guess, given your qsub command), you can insert these options in your script:
#PBS -m abe
#PBS -M your email
This sends you a notification at start (b), end (e), and abort (a). The end notification should include a resources_used.walltime entry with the wall time spent.
If you use another queue system, there should be a similar option in its manual.
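If the queue system's report isn't enough, a hedged fallback is to time the job from inside the script itself (my_program is a placeholder); note that this captures execution time only, not the time spent waiting in the queue:
# inside your PBS job script
start=$(date +%s)
./my_program                                   # your actual job
end=$(date +%s)
echo "wall time: $((end - start)) seconds"     # appears in the job's output file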

Are there any programs that will shrink the size of a sql script file?

I have a SQL script which is extremely large (about 700 megabytes). I am wondering if there is a good way to reduce the size of the script?
I know there are code minimizers for JavaScript and am looking for one to use with SQL scripts.
I am not trying to improve the performance of the SQL script; I am trying to make the file size smaller by removing excess whitespace and keeping name qualification down.
If I attempt to load the file in SQL Server Management Studio I get this error.
Not enough storage is available to process this command. (Exception from HRESULT: 0x80070008) (mscorlib)
What's in this 700 MB script?! I would hope that there are some similarities/repetitions that would allow you to shorten the file.
Just some guesses:
Instead of inserting a million records using Insert statements, use a bulk loading tool (see the bcp sketch below)
Instead of updating a number of individual records, try to batch updates to the same value into one (e.g. Update tab set col=1 where id in (..) instead of individual updates)
Long manipulations can be defined as a stored procedure (created before running the script), and the script would then only have to call the stored proc
Of course, splitting the script up into smaller portions and calling each one from a simple batch file would work too. But I'd be a little worried about performance (how long does the execution take?!) and would look for some faster ways.
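As a hedged sketch of the bulk-loading suggestion above (server, database, table, and file names are placeholders), a bcp import replaces millions of INSERT statements with one command:
$ bcp YourDb.dbo.YourTable in data.csv -S your_server -T -c -t,
Here -T uses Windows authentication, -c selects character mode, and -t, sets the comma as the field terminator.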
What about breaking your script into several small files, and calling those files from a single master script?
This link describes how to do it from a stored procedure.
Or you can do it from a batch file like this:
REM =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
REM Define widely-used variables up here so they can be changed in one place
REM Search for "sqlcmd.exe" and make sure this path is valid for you
REM =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
set sqlcmd="C:\Program Files\Microsoft SQL Server\100\Tools\Binn\sqlcmd.exe"
set uname="your_uname_here"
set pwd="your_pwd_here"
set database="your_db_name_here"
set server="your_server_name_here"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script1.sql"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script2.sql"
%sqlcmd% -S %server% -d %database% -U %uname% -P %pwd% -i "c:\script3.sql"
pause
I like the batch file approach myself, because it is easier to tinker with it, and you can schedule it as a windows job.
Make sure the .BAT file is in a folder with the appropriate security restrictions, since it contains your credentials in plain text.
gzip should do.
SQL is much harder to shrink; the field names, table names, and commands need to be what they are. Plus, you wouldn't want to just rewrite the commands as something shorter, because it could have implications on performance.
Depending on the DBMS that you use, it may allow short names for commands, and then there might be a converter.
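For reference, the gzip route really is a one-liner (the file name is a placeholder); repetitive SQL text usually compresses very well:
$ gzip -9 script.sql        # produces script.sql.gz
$ gunzip script.sql.gz      # restores the original when you need to run it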
(Answering this because it is the top item returned when I searched for "SQL script size")
I got the same error when trying to load a large script into Management Studio. In my case I was trying to downgrade a database from SQL 2008 R2 to SQL 2008 by using the SQL Server script generator, which created a 700 MB structure-and-data .sql file.
To get around it I used the command line to run the script instead:
C:>sqlcmd -S [SQLSERVER\INSTANCE] -i [FILELOCATION\FILENAME].sql
Hopefully this helps someone else.
Compressing the SQL file will give the best compression ratio.
Minifying the SQL text only saves a few bytes or kilobytes per megabyte, which is not worth it.
The better approach, I guess, is to create a small routine that unzips and reads the file; that gives the biggest benefit.
Today, filesize shouldn't be a problem. Dial-up connection? Floppy disks?
pg-minify can do it, and not just for PostgreSQL but for most SQL dialects, including MS SQL, MySQL, etc.