I want to compute a table of SDP solutions. I create a bash script that calls an SDP solver (SDPA or CSDP) for different data sets:
problem1.dat-s
problem2.dat-s
...
Because I want to create a table of numbers, I don't want the whole output (iterations etc.). Is there a way to suppress these messages? Or, even better, a way to create one solution-set file from all the data sets?
Thanks, dalvo
It's been a while since this question was asked; maybe you have found an answer yourself by now. If not, try calling
csdp problem1.dat-s problem1.sol > NUL
csdp problem2.dat-s problem2.sol > NUL
...
This way you'll get your solutions written to a solution file. With CSDP you'll have one vector and two matrices per file. Reading these files, you can easily create any other set of solutions. The information written to stdout is useless if you're just looking for the solution, since you'll only find error values, status messages, and time measurements there. So redirecting stdout to NUL (on Windows; use /dev/null on Unix-like systems) will help you avoid this output.
I don't know how this would actually work with SDPA, but judging by the information found in the man pages, it should be the same there.
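If you want all results in one place, a small driver script can run the solver over every data set and collect the solutions into a single file. A minimal Python sketch, assuming csdp is on your PATH and that the first line of each .sol file holds the solution vector (check the solution-file format of your CSDP version; solutions.txt is a made-up name):

import glob
import subprocess

rows = []
for dat in sorted(glob.glob("problem*.dat-s")):
    sol = dat.replace(".dat-s", ".sol")
    # discard the iteration log on stdout, keep only the solution file
    subprocess.run(["csdp", dat, sol], stdout=subprocess.DEVNULL, check=True)
    with open(sol) as f:
        # assumption: the first line holds the solution vector;
        # adjust the parsing if you need the matrices as well
        vector = f.readline().split()
    rows.append((dat, vector))

with open("solutions.txt", "w") as out:
    for name, vector in rows:
        out.write(name + " " + " ".join(vector) + "\n")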
I often have a Snakemake rule like the following:
rule aggregate:
    input: expand("samples/{sample}/data.txt", sample=samples)
    script:
        "scripts/aggregate.py"
This gives aggregate.py the correct list of sample data files in snakemake.input, but it loses the association between samples and their files. I usually need the mapping sample -> sample file in aggregate.py, and to get it I either (A) recreate the list of files or (B) recreate the list of sample IDs in the same order as the files. Both are unsatisfying, because they duplicate data and require two places in the code to be kept in sync if either changes.
If, as in this example, there is only one variable being expanded, then adding it to params is OK (i.e. params: samples) and zipping that together with the inputs. But with more than one expanded variable, there is a real risk of listing the variables in different orders in the Snakefile and in aggregate.py. That causes a silent error where all the data is mislabeled.
Is there a canonical or recommended way to handle this?
I would rather rework the aggregate.py script and call it from the shell section. The script should not know that it is being called from Snakemake, and should get all relevant information from the command line. A clean interface between the caller and the script is crucial, and it will also help you rethink the task itself. A sketch of this approach follows.
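A minimal sketch of that approach, assuming samples is defined in the Snakefile; the output name aggregated.txt and the sample=path argument convention are made up for illustration:

rule aggregate:
    input:
        expand("samples/{sample}/data.txt", sample=samples)
    output:
        "aggregated.txt"
    params:
        pairs=[f"{s}=samples/{s}/data.txt" for s in samples]
    shell:
        "python scripts/aggregate.py {params.pairs} > {output}"

aggregate.py then receives explicit sample=path pairs, so the pairing can never get out of sync with the input order:

import sys

# each argument has the form "<sample>=<path>"
pairs = dict(arg.split("=", 1) for arg in sys.argv[1:])

for sample, path in sorted(pairs.items()):
    with open(path) as f:
        # placeholder aggregation: line count per sample
        print(sample, sum(1 for _ in f))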
When trying to filter by tag in the Jaeger UI, a small popup says the values should be given in logfmt.
I have been looking around for logfmt, but all I can find is the plain key=value format.
My questions are:
Is there a way to express something more sophisticated (starts_with, not equal, contains, etc.)?
I am trying to filter by URL using http.url="http://example.com?bla=bla&foo=bar". I am pretty sure the value exists, because I am copy-pasting it from my trace, but I get no results. Do I need to escape characters or do something else for this to work?
I did some research around logfmt as well. Based on the documentation of the original implementation and on the Python implementation of the parser (and its tests), I would say that it doesn't support anything more sophisticated (like starts_with, not equal, contains). This is because the output of the parser is a simple dictionary, with no regex matching involved in the values.
As for the second question: using the same Python parser mentioned above, I was able to double-check that your filter looks fine:
from logfmt import parse_line
parse_line('http.url="http://example.com?bla=bla&foo=bar"')
Output:
{'http.url': 'http://example.com?bla=bla&foo=bar'}
This makes me suspect an issue on the Jaeger side, but this is as far as I could go.
I plan on doing the Code Jam competition next year. The problem is (something I can't find an answer to anywhere): how do I set up my code to accept input and return output?
I'm just confused as to how I'm supposed to handle everything. Say, for example, that I have to add 1 to the input and make the result the output: how would I handle the input/output?
I plan on using Lua. Thanks if you can help. I think a code example would be best!
Go to http://www.go-hero.net/jam/11/solutions and choose the problem and the language. You'll find all the examples you need there, including Lua ones.
http://www.go-hero.net/jam/
On this page you can see source code for every contestant.
You should be able to answer your question from it.
You can either do I/O on files or on stdin/stdout. It doesn't matter. I've seen examples that used files and examples that used stdin/stdout.
In Python, say, you could write

import fileinput

# iterate over the lines of the file named on the command line (or stdin)
for line in fileinput.input():
    print(line.rstrip())
Then call the script from the shell like this
gorosort.py A-small-practice.in
To save the output to a file
gorosort.py A-small-practice.in > A-small-practice.out
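For the concrete "add 1" example from the question, here is a minimal Python sketch, assuming the usual Code Jam input layout (the first line holds the number of test cases T, then one integer per case; add_one.py is a made-up name):

import sys

def main():
    data = sys.stdin.read().split()
    t = int(data[0])
    for case in range(1, t + 1):
        n = int(data[case])
        # Code Jam expects one "Case #x: y" line per test case
        print("Case #%d: %d" % (case, n + 1))

if __name__ == "__main__":
    main()

Run it as python add_one.py < A-small-practice.in > A-small-practice.out.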
The following conversion
SELECT to_tsvector('english', 'Google.com');
returns this:
'google.com':1
Why doesn't the TSearch2 engine return something like this?
'google':2, 'com':1
Or how can I make the engine return the exploded string as written above?
I just need "Google.com" to be findable by "google".
Unfortunately, there is no quick and easy solution.
Denis is correct in that the parser is recognizing it as a hostname, which is why it doesn't break it up.
There are three other things you can do, off the top of my head:
1. You can disable the host parsing in the database. See the PostgreSQL documentation for details, e.g. something like:
ALTER TEXT SEARCH CONFIGURATION your_parser_config
    DROP MAPPING FOR url, url_path;
2. You can write your own custom dictionary.
3. You can pre-parse your data before it's inserted into the database in some manner (for example, splitting all domains apart before they go into the database); see the sketch after this list.
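For option 3, the splitting can happen on the application side before the INSERT. A minimal Python sketch (explode_domain is a made-up helper; the idea is just to append the word parts so that "google" also matches "google.com"):

import re

def explode_domain(text):
    # append the parts of dotted tokens, so "Google.com"
    # becomes "Google.com Google com" before indexing
    parts = [p for p in re.split(r"\W+", text) if p]
    return text + " " + " ".join(parts)

print(explode_domain("Google.com"))  # Google.com Google com

The returned string is what you would then feed to to_tsvector.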
I had a similar issue to you last year and opted for solution (2), above.
My solution was to write a custom dictionary that splits words up on non-word characters. A custom dictionary is a lot easier and quicker to write than a new parser. You still have to write C, though :)
The dictionary I wrote would return something like 'www.facebook.com':4, 'com':3, 'facebook':2, 'www':1 for the 'www.facebook.com' domain (we had a unique-ish scenario, hence the four results instead of three).
The trouble with a custom dictionary is that you will no longer get stemming (i.e. www.books.com will come out as www, books and com, with "books" left unstemmed). I believe there is some work (which may have been completed) to allow chaining of dictionaries, which would solve this problem.
First off, in case you're not aware: tsearch2 is deprecated in favor of the built-in full-text search functionality:
http://www.postgresql.org/docs/9/static/textsearch.html
As for your actual question, google.com gets recognized as a host by the parser:
http://www.postgresql.org/docs/9.0/static/textsearch-parsers.html
If you don't want this to occur, you'll need to pre-process your text accordingly (or use a custom parser).
First, I admit that everything I will ask is about our homework, but I assure you I am not asking without having struggled for at least two hours.
Description: We are supposed to add a field called max_cpu_percent to the task_struct data type and modify the process scheduling algorithm so that processes cannot use a higher percentage of the CPU.
For example, if I set the max_cpu_percent field to 20 for the firefox process, firefox will not be able to use more than 20% of the CPU.
We wrote a system call to set the max_cpu_percent field. Now we need to check whether the system call works, but we could not read the value of the max_cpu_percent field from a user-space program.
Can we do this, and how?
We tried /proc/<pid>/ etc.; can we get the value using that facility?
By the way, we may add further questions here if we get stuck on something else.
Thanks all
Solution:
The reason was that we had not modified the code that writes the output for /proc queries.
There are some functions in fs/proc/array.c; we modified the relevant function so that it also prints the newly added field's value. The kernel is compiling now; we'll see the result in about an hour =)
It worked...
(If you simply extended getrlimit/setrlimit, then you'd be done by now…)
There's already a mechanism where similar parts of task_struct are exposed: /proc/$PID/stat (and /proc/$PID/task/$TID/stat). Look for the functions proc_tgid_stat and proc_tid_stat. You can add new fields to the ends of these files.
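Once the field is exposed there, a quick user-space check is straightforward. A minimal Python sketch, assuming your patch appends max_cpu_percent as the last field of /proc/$PID/stat (that placement is an assumption about your modification, not stock kernel behavior):

import sys

pid = sys.argv[1]
with open("/proc/%s/stat" % pid) as f:
    stat = f.read()

# the comm field is wrapped in parentheses and may contain spaces,
# so split only after the last ')'
fields = stat.rsplit(")", 1)[1].split()

# assumption: the kernel patch appended max_cpu_percent as the final field
print("max_cpu_percent =", fields[-1])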