I want to create a custom command in CMake, but I need to do some arithmetic inside a string parameter of that command.
I have two variables set like this:
set(VERSION_MAJOR 1)
set(VERSION_MINOR 0)
I want to multiply VERSION_MAJOR by 10 and add VERSION_MINOR to it, but how can I write something like user_version = VERSION_MAJOR * 10 + VERSION_MINOR?
This command works fine.
add_custom_command(
...
COMMAND sqlite3 ${DB_FILE} "PRAGMA user_version=${VERSION_MAJOR}"
...
)
But I want to use it like this:
add_custom_command(
...
COMMAND sqlite3 ${DB_FILE} "PRAGMA user_version=${VERSION_MAJOR * 10 + VERSION_MINOR}"
...
)
cmake version: 3.5.1
You can do it with the CMake command math:
set(VERSION_MAJOR 1)
set(VERSION_MINOR 0)
# multiply VERSION_MAJOR by 10 and add VERSION_MINOR
# set MY_VERSION to the resulting value
math(EXPR MY_VERSION "${VERSION_MAJOR} * 10 + ${VERSION_MINOR}")
add_custom_command(
...
COMMAND sqlite3 ${DB_FILE} "PRAGMA user_version=${MY_VERSION}"
...
)
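As a quick sanity check, math(EXPR) can be exercised on its own before wiring it into the custom command (a minimal sketch; the message() call is only there to show the computed value at configure time):
set(VERSION_MAJOR 1)
set(VERSION_MINOR 0)
math(EXPR MY_VERSION "${VERSION_MAJOR} * 10 + ${VERSION_MINOR}")
message(STATUS "user_version = ${MY_VERSION}")   # prints: -- user_version = 10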
Applying this HighResolutionTimer module example, I ran into an issue when calling, from a 'B' module, a minimalist 'A' module that uses a HighResTimer.dll built under MSYS2 with gcc.
I got the following error output:
./TimerTest.lua
default arch : x86
arch updated : x64
 == testing require from TimerTest.lua
default arch : x86
arch updated : x64
execution time: 150.9933ms
 == end testing require from TimerTest.lua
C:\msys64\mingw32\bin\lua.exe: ./TimerTest.lua:33: attempt to index a boolean value (local 'foo')
stack traceback:
        ./TimerTest.lua:33: in main chunk
        [C]: in ?
You can see that the first call, the minimalist inner test in HighResTimer.lua (which adds the directory holding Timer.dll to package.cpath), works and correctly prints the number of elapsed milliseconds.
This is the code of the 'A' module:
#!/mingw64/bin/lua
------------------------------------
---HighResTimer.lua by Cody Duncan
---Wraps the High Resolution Timer Functions in
--- HighResTimer.so
------------------------------------
--
--
-- code to make script compatible within or without vim
--
--
---- init_packages {{{
basebase = 'foobar/'
function init_packages()
  base = basebase .. 'lua/packages/'
  arch = 'x86'
  print('default arch : ' .. arch)
  arch = os.getenv('MSYSTEM_CARCH') == 'x86_64' and 'x64' or 'x86'
  print('arch updated : ' .. arch)
  -- assuming that Timer.dll has a name distinct from the lua module HighResTimer.lua
  package.cpath = package.cpath .. ';' .. base .. 'HighResTimer/' .. arch .. '/?.dll'
end
init_packages()
---- }}}
local timer = require("Timer") --load up the module
timer.readHiResTimerFrequency()
---- api functions {{{
----call this before code that is being measured for execution time
function start()
  timer.storeTime();
end

----call this after code that is being measured for execution time
function stop()
  timer.storeTime();
end

----once the prior two functions have been called, call this to get the
----time elapsed between them in nanoseconds
function getNanosElapsed()
  return timer.getNanoElapsed();
end
-- }}}
---- inner unit test {{{
----
function innertest()
  start();
  for i = 0, 3000000 do io.write("") end --do essentially nothing 3million times.
  stop();
  ----divide nanoseconds by 1 million to get milliseconds
  executionTime = getNanosElapsed()/1000000;
  if inVim == true then
    print("execution time: ", executionTime, "ms\n");
  else
    io.write("execution time: ", executionTime, "ms\n");
  end
end
innertest()
-- }}}
-- vim: set ft=lua ff=dos fdm=marker ts=2 :expandtab:
This 'A' module runs fine on its own, displaying its inner test result in milliseconds. The problem comes when I call this 'A' Lua module from a 'B' Lua module with the following code:
#!/mingw32/bin/lua
------------------------------------
---TimerTest.lua by Cody Duncan
---
---HighResTimer.lua and Timer.so must
--- be in the same directory as
--- this script.
------------------------------------
--
---- init_packages {{{
basebase = 'foobar/'
function init_packages()
  base = basebase .. 'lua/packages/'
  arch = 'x86'
  print('default arch : ' .. arch)
  arch = os.getenv('MSYSTEM_CARCH') == 'x86_64' and 'x64' or 'x86'
  print('arch updated : ' .. arch)
  package.path = package.path .. ';' .. base .. 'HighResTimer/' .. arch .. '/?.lua'
end
init_packages()
---- }}}
print(' == testing require from TimerTest.lua')
require('HighResTimer')
print(' == end testing require from TimerTest.lua')
local foo = require("HighResTimer")
foo.start();
-- for i = 0, 3000000 do io.write("") end --do essentially nothing 3million times.
-- foo.stop();
--divide nanoseconds by 1 million to get milliseconds
-- executionTime = getNanosElapsed()/1000000;
-- io.write("execution time: ", executionTime, "ms\n");
-- vim: set ft=lua ff=dos fdm=marker ts=2 :expandtab:
The first require of the 'A' module HighResTimer.lua does not seem to cause trouble, but the call right after local foo = ... raises the error.
Notice that in the 'A' module I use package.cpath in a particular way to load the DLL, whereas in the 'B' module TimerTest.lua I use package.path.
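One likely cause, offered here as an assumption rather than something stated above: HighResTimer.lua only defines global functions and never returns a module table, and when a module's chunk returns nothing, require() caches and returns true. That boolean is exactly what foo ends up holding, hence "attempt to index a boolean value". A minimal sketch of the 'A' module returning a table (names taken from the code above):
-- HighResTimer.lua sketched as a module that returns a table
local timer = require("Timer")  -- the C module loaded from Timer.dll via package.cpath
timer.readHiResTimerFrequency()

local M = {}

function M.start() timer.storeTime() end
function M.stop() timer.storeTime() end
function M.getNanosElapsed() return timer.getNanoElapsed() end

return M
With that in place, local foo = require("HighResTimer") yields the table and foo.start() works; the earlier bare require('HighResTimer') in TimerTest.lua then becomes redundant.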
I am confused by a tcsh shell script issue. (It's for work, so I have no choice of shell; I'm stuck with it.)
The enableThingN items below are environment variables set by other things before this csh script runs under tcsh. They are not set within this script at all, only evaluated here.
Error message is:
enableThing1: Undefined variable.
Code is:
if ( ( $?enableThing1 && ($enableThing1 == 1) ) || \
( $?enableThing2 && ($enableThing2 == 1) ) || \
( $?enableThing3 && ($enableThing3 == 1) ) || \
( $?enableThing4 && ($enableThing4 == 1) ) ) then
set someScriptVar = FALSE
else
set someScriptVar = TRUE
endif
So, as I understand things, the first part of the big if condition checks whether enableThing1 is defined at all, using the $?enableThing1 magic. If it is defined, then move on and check whether its value is 1 or something else. If it is not defined, then skip the == 1 part of the check for that variable and move on to see whether enableThing2 is defined, and so on.
Since it seems like I am checking for existence, and intend to avoid checking the value if the variable is not defined at all, where have I gone wrong?
I have searched here on Stack Overflow and on Google at large, but the few results I found, such as the one below, did not get me to an answer:
https://stackoverflow.com/questions/16975968/what-does-var-mean-in-csh
An if statement that checks the value of a variable requires that the variable exist: csh substitutes every variable on the line before it evaluates the expression, so the $?enableThing1 test does not protect the $enableThing1 reference that follows it.
if ( ( $?enableThing1 && ($enableThing1 == 1) ) || \
# ^ this will fail if the variable is not defined.
So the if condition turns into
if ( ( 0 && don'tknowaboutthis ) || \
and it falls flat.
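One way to keep the original structure is to default any variable that is not defined before the combined test runs, so every $enableThingN reference is safe by then (a sketch only, assuming tcsh falls back to the environment for variables that are defined there):
if ( ! $?enableThing1 ) set enableThing1 = 0
if ( ! $?enableThing2 ) set enableThing2 = 0
if ( ! $?enableThing3 ) set enableThing3 = 0
if ( ! $?enableThing4 ) set enableThing4 = 0

if ( $enableThing1 == 1 || $enableThing2 == 1 || \
     $enableThing3 == 1 || $enableThing4 == 1 ) then
    set someScriptVar = FALSE
else
    set someScriptVar = TRUE
endif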
Assuming you don't want an if ladder, and you want it to stay easy to add more variables to the list being checked, you can try the following solution:
#!/bin/csh -f
set enableThings = ( enableThing1 enableThing2 enableThing3 enableThing4 ... )
# setting to false initially
set someScriptVar = FALSE
foreach enableThing ($enableThings)
    # since we can't use $'s inside $?, pull the variable out of the environment instead.
    set testEnableThing = `env | grep "^${enableThing}="`
    # this part checks whether it exists at all, and whether it is enabled
    if ( "$testEnableThing" != "" && "`echo $testEnableThing | cut -d= -f2`" == "1" ) then
        # ^ non-empty means the variable is defined    ^ cut takes the part after the =
        #   (d stands for delimiter)
        # for example, if it exists, testEnableThing would hold enableThing1=1;
        # cutting that gives the value of the variable, in our example 1
        # if it exists and is enabled, set your someScriptVar
        set someScriptVar = TRUE
        # you can put a break here since it's irrelevant to check
        # for other variables after this becomes true
        break
    endif
end
This works because we only ever test one variable, testEnableThing, which is always defined: it may be a blank string, but it exists, so the if statement (with the variable quoted) won't fall flat.
Hope this solves it for you.
I've recently found an old TrueCrypt volume file of mine, but after an hour of trying out different passwords I haven't found the right one. I know for a fact that I used a combination of old passwords, but it would take a very long time to try all combinations by hand. I've tried different programs (such as Crunch) to construct a wordlist, but all they do is generate combinations of every single entry in the .txt file.
So my question is: does anyone know of a program that could combine all the entries in the file, but only in pairs of two?
For example:
String 1 = hello
String 2 = bye
output =
hellobye
byehello
Under Windows, the following command will write all combinations of two passwords into a new file, using a plain text file with line-separated passwords as input.
for /F "tokens=*" %i in (passwords.txt) do #(
for /F "tokens=*" %j in (passwords.txt) do
#echo %i%j
)>> combinations.txt
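If you put this in a .bat file instead of typing it at the prompt, the loop variables need doubled percent signs; otherwise the command is unchanged:
rem same nested loop, batch-file form (%%i/%%j instead of %i/%j)
for /F "tokens=*" %%i in (passwords.txt) do @(
    for /F "tokens=*" %%j in (passwords.txt) do @echo %%i%%j
) >> combinations.txt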
Sample wordlist: cat list.txt
a
b
c
d
Script: cat list.py:
words = []
file = open('list.txt', 'r')
for word in file.readlines():
    words.append(word.replace('\n', ''))

# range stops at len(words) - 1 because the inner loop starts at i + 1,
# so the last word has no partner left to pair with
for i in range(len(words) - 1):
    # starting at i + 1 prevents pairing a word with itself ("wordword")
    for j in range(i + 1, len(words)):
        print words[i] + words[j]
        print words[j] + words[i]
Output: python list.py
ab
ba
ac
ca
ad
da
bc
cb
bd
db
cd
dc
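For reference, the same pairs can also be generated with the standard library's itertools; this is just a sketch that produces the output above in a different order:
import itertools

with open('list.txt') as f:
    words = [line.strip() for line in f if line.strip()]

# permutations(words, 2) yields every ordered pair of distinct entries,
# i.e. both 'ab' and 'ba', but never 'aa'
for first, second in itertools.permutations(words, 2):
    print(first + second)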
Is there a way to plot a function based on values from a text file?
I know how to define a function in gnuplot and then plot it but that is not what I need.
I have a table with constants for functions that are updated regularly. When this update happens, I want to be able to run a script that draws a figure with the new curve. Since there are quite a few figures to draw, I want to automate the procedure.
Here is an example table with constants:
location a b c
1 1 3 4
2
There are two ways I see to solve the problem but I do not know if and how they can be implemented.
I could use awk to produce the string f(x) = 1*x**2 + 3*x + 4, write it to a file, and somehow make gnuplot read this new file and plot it on a certain x range.
Or I could use awk inside gnuplot, something like f(x) = awk '/1/ {print "f(x)="$2 ...}', or use awk directly in the plot command.
In any case, I'm stuck and have not found a solution to this problem online. Do you have any suggestions?
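For what it's worth, the first idea can be sketched in two shell lines (parameters.txt is an assumed file name, and the x range is arbitrary):
awk 'NR==2 {print "f(x) = "$2"*x**2 + "$3"*x + "$4}' parameters.txt > f.gp
gnuplot -persist -e "load 'f.gp'; plot [0:10] f(x)"
NR==2 picks the first parameter row (line 1 is the header), so the generated file contains f(x) = 1*x**2 + 3*x + 4.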
Another possibility, to get a somewhat generic version of this, is the following.
Assume the parameters are stored in a file parameters.dat, with the first line containing the variable names and all other lines the parameter sets, like
location a b c
1 1 3 4
The script file looks like this:
file = 'parameters.dat'
par_names = system('head -1 '.file)
par_cnt = words(par_names)
# which parameter set to choose
par_line_num = 2
# select the respective string
par_line = system(sprintf('head -%d ', par_line_num).file.' | tail -1')
par_string = ''
do for [i=1:par_cnt] {
    eval(word(par_names, i).' = '.word(par_line, i))
}
f(x) = a*x**2 + b*x + c
plot f(x) title sprintf('location = %d', location)
This question (gnuplot store one number from data file into variable) had some hints for me in the first answer.
In my case I have a file which contains parameters for a parabola. I have saved the parameters in gnuplot variables. Then I plot the function containing the parameter variables for each timestep.
#!/usr/bin/gnuplot
datafile = "parabola.txt"
set terminal pngcairo size 1000,500
set xrange [-100:100]
set yrange [-100:100]
titletext(timepar, apar, cpar) = sprintf("In timestep %d we have parameter a = %f, parameter c = %f", timepar, apar, cpar)
do for [step=1:400] {
    set output sprintf("parabola%04d.png", step)
    # read parameters from file, where the first line is the header, thus the +1
    a=system("awk '{ if (NR == " . step . "+1) printf \"%f\", $1}' " . datafile)
    c=system("awk '{ if (NR == " . step . "+1) printf \"%f\", $2}' " . datafile)
    # convert parameters to numeric format
    a=a+0.
    c=c+0.
    set title titletext(step, a, c)
    plot c+a*x**2
}
This gives a series of png files called parabola0001.png, parabola0002.png, parabola0003.png, …, each showing a parabola with the parameters read from the file called parabola.txt. The title contains the parameters of the given time step.
To understand the gnuplot system() function you have to know that:
stuff inside double quotes is not parsed by gnuplot
the dot is for concatenating strings in gnuplot
the double quotes for the awk printf command have to be escaped, to hide them from the gnuplot parser
To test this gnuplot script, save it into a file with an arbitrary name, e.g. parabolaplot.gplot, and make it executable (chmod a+x parabolaplot.gplot). The parabola.txt file can be created with
awk 'BEGIN {for (i=1; i<=1000; i++) printf "%f\t%f\n", i/200, i/100}' > parabola.txt
awk '/1/ {print "plot "$2"*x**2+"$3"*x+"$4}' | gnuplot -persist
This will select the matching line from the parameter file (which has to be given to awk as input) and plot it.
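Spelled out with an input file (parameters.txt is an assumed name; the pattern anchors on the location value at the start of the line):
awk '/^1 / {print "set xrange [0:10]; plot "$2"*x**2 + "$3"*x + "$4}' parameters.txt | gnuplot -persist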
This was/is another question about how to extract specific values into variables with gnuplot (maybe it would be worth creating a Wiki entry about this topic).
There is no need to use awk; you can do this with gnuplot alone (hence platform-independent), even with gnuplot 4.6.0 (March 2012).
You can do a stats (check help stats) and assign the values to variables.
Data: SO15007620_Parameters.txt
location a b c
1 1 3 4
2 -1 2 3
3 2 1 -1
Script: (works with gnuplot 4.6.0, March 2012)
### read parameters from separate file into variables
reset
FILE = "SO15007620_Parameters.txt"
myLine = 1 # line index 0-based
stats FILE u (a=$2, b=$3, c=$4) every ::myLine::myLine nooutput
f(x) = a*x**2 + b*x + c
plot f(x) w l lc rgb "red" ti sprintf("f(x) = %gx^2 + %gx + %g", a,b,c)
### end of script
Result: a red curve with the title "f(x) = 1x^2 + 3x + 4".
We use grep, cut, sort, uniq, and join at the command line all the time to do data analysis. They work great, although there are shortcomings. For example, you have to give column numbers to each tool. We often have wide files (many columns) and a column header that gives column names. In fact, our files look a lot like SQL tables. I'm sure there is a driver (ODBC?) that will operate on delimited text files, and some query engine that will use that driver, so we could just use SQL queries on our text files. Since doing analysis is usually ad hoc, it would have to require minimal setup to query new files (just use the files I specify in this directory) rather than declaring particular tables in some config.
Practically speaking, what's the easiest? That is, the SQL engine and driver that is easiest to set up and use to apply against text files?
David Malcolm wrote a little tool named "squeal" (formerly "show"), which allows you to use SQL-like command-line syntax to parse text files of various formats, including CSV.
An example on squeal's home page:
$ squeal "count(*)", source from /var/log/messages* group by source order by "count(*)" desc
count(*)|source |
--------+--------------------+
1633 |kernel |
1324 |NetworkManager |
98 |ntpd |
70 |avahi-daemon |
63 |dhclient |
48 |setroubleshoot |
39 |dnsmasq |
29 |nm-system-settings |
27 |bluetoothd |
14 |/usr/sbin/gpm |
13 |acpid |
10 |init |
9 |pcscd |
9 |pulseaudio |
6 |gnome-keyring-ask |
6 |gnome-keyring-daemon|
6 |gnome-session |
6 |rsyslogd |
5 |rpc.statd |
4 |vpnc |
3 |gdm-session-worker |
2 |auditd |
2 |console-kit-daemon |
2 |libvirtd |
2 |rpcbind |
1 |nm-dispatcher.action|
1 |restorecond |
q - Run SQL directly on CSV or TSV files:
https://github.com/harelba/q
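A typical call looks like this (quoted from memory, so double-check the flags against q's README; messages.csv is a made-up file):
# -H: treat the first line as a header, so columns can be referenced by name
# -d: field delimiter, here a comma
q -H -d, "SELECT source, COUNT(*) FROM ./messages.csv GROUP BY source ORDER BY COUNT(*) DESC"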
Riffing off someone else's suggestion, here is a Python script for sqlite3. A little verbose, but it works.
I don't like having to completely copy the file to drop the header line, but I don't know how else to convince sqlite3's .import to skip it. I could create INSERT statements, but that seems just as bad if not worse.
Sample invocation:
$ sql.py --file foo --sql "select count(*) from data"
The code:
#!/usr/bin/env python
"""Run a SQL statement on a text file"""
import os
import sys
import getopt
import tempfile
import re

class Usage(Exception):
    def __init__(self, msg):
        self.msg = msg

def runCmd(cmd):
    if os.system(cmd):
        print "Error running " + cmd
        sys.exit(1)
        # TODO(dan): Return actual exit code

def usage():
    print >>sys.stderr, "Usage: sql.py --file file --sql sql"

def main(argv=None):
    if argv is None:
        argv = sys.argv
    try:
        try:
            opts, args = getopt.getopt(argv[1:], "h",
                                       ["help", "file=", "sql="])
        except getopt.error, msg:
            raise Usage(msg)
    except Usage, err:
        print >>sys.stderr, err.msg
        print >>sys.stderr, "for help use --help"
        return 2

    filename = None
    sql = None
    for o, a in opts:
        if o in ("-h", "--help"):
            usage()
            return 0
        elif o in ("--file"):
            filename = a
        elif o in ("--sql"):
            sql = a
        else:
            print "Found unexpected option " + o

    if not filename:
        print >>sys.stderr, "Must give --file"
        sys.exit(1)
    if not sql:
        print >>sys.stderr, "Must give --sql"
        sys.exit(1)

    # Get the first line of the file to make a CREATE statement
    #
    # Copy the rest of the lines into a new file (datafile) so that
    # sqlite3 can import data without header. If sqlite3 could skip
    # the first line with .import, this copy would be unnecessary.
    foo = open(filename)
    datafile = tempfile.NamedTemporaryFile()
    first = True
    for line in foo.readlines():
        if first:
            headers = line.rstrip().split()
            first = False
        else:
            print >>datafile, line,
    datafile.flush()
    #print datafile.name
    #runCmd("cat %s" % datafile.name)

    # Create columns with NUMERIC affinity so that if they are numbers,
    # SQL queries will treat them as such.
    create_statement = "CREATE TABLE data (" + ",".join(
        map(lambda x: "`%s` NUMERIC" % x, headers)) + ");"

    cmdfile = tempfile.NamedTemporaryFile()
    #print cmdfile.name
    print >>cmdfile, create_statement
    print >>cmdfile, ".separator ' '"
    print >>cmdfile, ".import '" + datafile.name + "' data"
    print >>cmdfile, sql + ";"
    cmdfile.flush()
    #runCmd("cat %s" % cmdfile.name)
    runCmd("cat %s | sqlite3" % cmdfile.name)

if __name__ == "__main__":
    sys.exit(main())
Maybe write a script that creates an SQLite instance (possibly in memory), imports your data from a file/stdin (accepting your data's format), runs a query, then exits?
Depending on the amount of data, performance could be acceptable.
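A minimal sketch of that idea in Python with the standard sqlite3 module (assumptions: a whitespace-delimited file whose first line holds the column names, every row with the same number of columns, and a made-up table name data):
#!/usr/bin/env python
"""Load a headered, whitespace-delimited text file into an in-memory
SQLite database and run one query against it."""
import sqlite3
import sys

def query_text_file(path, sql):
    with open(path) as f:
        headers = f.readline().split()
        rows = [line.split() for line in f if line.strip()]
    # in-memory database: nothing touches the disk, the instance dies on exit
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE data (%s)" %
                ", ".join('"%s" NUMERIC' % h for h in headers))
    con.executemany("INSERT INTO data VALUES (%s)" %
                    ", ".join("?" * len(headers)), rows)
    return con.execute(sql).fetchall()

if __name__ == "__main__":
    # e.g.: python sqltext.py foo.txt "SELECT COUNT(*) FROM data"
    for row in query_text_file(sys.argv[1], sys.argv[2]):
        print(row)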
MySQL has a CSV storage engine, which might do what you need, if your files are CSV files.
Otherwise, you can use mysqlimport to import text files into MySQL. You could create a wrapper around mysqlimport, which figures out columns etc. and creates the necessary table.
You might also be able to use DBD::AnyData, a Perl module which lets you access text files like a database.
That said, it sounds a lot like you should really look at using a database. Is it really easier keeping table-oriented data in text files?
I have used Microsoft LogParser to query CSV files several times, and it serves the purpose. It was surprising to see such a useful tool from M$, and free at that!
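For reference, a basic invocation looks something like this (from memory, against LogParser 2.2, so treat the exact flags as an assumption; data.csv is a made-up file):
LogParser -i:CSV "SELECT COUNT(*) FROM data.csv"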