PsychoPy website does not give a fix for output file doubling data - file-io

Whenever I get an output file, I get a doubling of the data.
Here is some of the code...
# For each record in keyPress, a line is created in the file
keyPress = []
keyPress.append(event.waitKeys(keyList=['s', 'd'], timeStamped=clock))
for key in keyPress:
    for l, t in key:
        f.write(str(images[index]) + "\t iteration \t" + str(k + 1) + "\t" + l + "\t" + str(t) + "\n")
f.close()

There are a few things that are unclear here, and I haven't managed to reproduce the issue, but I'll give my shot at an answer anyway. First, event.waitKeys returns just one response, so it is really not necessary to loop over it. So I'd just do
l, t = event.waitKeys(keyList=['s','d'],timeStamped=clock)[0]
... which is much nicer. So a full reproducible solution would be this:
# Set things up
from psychopy import visual, event, core
win = visual.Window()
clock = core.Clock()
f = open('log.tsv', 'a')
# Record responses for a few trials and save
for trial in range(5):
    # [0] extracts the first (and only) element, i.e. the (key, rt) tuple, which is then unpacked into l and t.
    l, t = event.waitKeys(keyList=['s', 'd'], timeStamped=clock)[0]
    f.write('trial' + str(trial) + '\tkey' + l + '\tRT' + str(t) + '\n')
f.close()
Instead of creating your log files manually like this, consider using the csv module or PsychoPy's own data.TrialHandler. Usually it's nice to represent each trial as a dict and save responses together with the properties of that trial; the csv module has a DictWriter class for exactly that.
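As a rough sketch of that approach (assuming Python 3; the fieldnames and the log filename are just examples, and the window/clock setup is the same as above):

import csv
from psychopy import visual, event, core

win = visual.Window()
clock = core.Clock()

# Each trial is written as a dict, so the columns always line up with the header.
with open('log_dictwriter.tsv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['trial', 'key', 'rt'], delimiter='\t')
    writer.writeheader()
    for trial in range(5):
        key, rt = event.waitKeys(keyList=['s', 'd'], timeStamped=clock)[0]
        writer.writerow({'trial': trial, 'key': key, 'rt': rt})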

Skip csv header row using boto3 in lambda and copy_from in psycopg2

I'm loading a CSV into memory from S3 and then I need to insert it into Postgres. I think the problem is that I'm not using the right call for the S3 object or something, as I don't appear to be able to skip the header line. On my local machine I would just load the file from the directory:
cur = DBCONN.cursor()
for filename in absolute_file_paths('/path/to/file/csv.log'):
    print('Importing: ' + filename)
    with open(filename, 'r') as log:
        next(log)  # Skip the header row.
        cur.copy_from(log, 'vesta', sep='\t')
        DBCONN.commit()
I have the code below in Lambda, which I would like to work kind of like the above, but it's different with S3. What is the correct way to have the below work like the above? Or perhaps - what IS the correct way to do this?
s3 = boto3.client('s3')
#Load the file from s3 into memory
obj = s3.get_object(Bucket=bucket, Key=key)
contents = obj['Body']
next(contents, None) # Skip the header row - this does not seem to work
cur = DBCONN.cursor()
cur.copy_from(contents, 'my_table', sep='\t')
DBCONN.commit()
Seemingly, my problem had something to do with an incredibly wide CSV file (I have over 200 columns), and somehow that messed up the next() function so that it did not give the next row. So I will say that IF your file is not that wide, then the code I placed in the question should work. Below, however, is how I got it to work: basically by reading the file into memory, skipping the header row, and then writing the rest back to an in-memory file. This honestly seems a little like overkill, so I'd be happy if someone could provide something more efficient, but seeing as how I spent the last eight hours on this, I'm just happy to have SOMETHING that works.
s3 = boto3.client('s3')
...
def remove_header(contents):
    # Reformat the file, removing the header row
    data = csv.reader(io.StringIO(contents), delimiter='\t')  # read the data in
    mem_file = io.StringIO()  # create an in-memory file object
    next(data)  # skip the header row
    writer = csv.writer(mem_file, delimiter='\t')  # set up the csv writer
    writer.writerows(data)  # write the data to the in-memory file
    mem_file.seek(0)  # go back to the beginning of the memory stream
    return mem_file
...
# Load the file from S3 into memory
obj = s3.get_object(Bucket=bucket, Key=key)
contents = obj['Body'].read().decode('utf-8')
mem_file = remove_header(contents)

# Insert into Postgres
try:
    cur = DBCONN.cursor()
    cur.copy_from(mem_file, 'my_table', sep='\t')
    DBCONN.commit()
except BaseException as e:
    DBCONN.rollback()
    raise e
Or, if you want to do it with pandas:
def remove_header_pandas(contents):
    df = pd.read_csv(io.StringIO(contents), sep='\t')
    mem_file = io.StringIO()
    df.to_csv(mem_file, sep='\t', header=False, index=False)  # remove the header
    mem_file.seek(0)
    return mem_file
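As an aside, since copy_from just needs a file-like object with read() and readline(), a lighter-weight sketch of the same idea (reusing contents, DBCONN and my_table from above, and untested against a file as wide as the one described) is to wrap the decoded string in io.StringIO and skip the header there:

import io

body = io.StringIO(contents)  # a proper text line iterator, unlike the raw S3 Body
next(body)  # skip the header row
cur = DBCONN.cursor()
cur.copy_from(body, 'my_table', sep='\t')
DBCONN.commit()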

How to export CPLEX's solution?

I have a file quadratic_obj.lp with the following content:
Minimize
obj: a + b + [ a^2 + 4 a * b + 7 b^2 ]/2
Subject To
c1: a + b >= 10
End
In an interactive CPLEX session, I read in the file using read and optimize using optimize. Then I can display the solution using
display solution variables -
which gives me
Variable Name           Solution Value
a                            10.000000
b                             0.000000
Is there a way to pipeline this output? So in an ideal world there would be something like:
display solution variables - -> myoutput.csv
I used write, but the file type options there are not what I'm looking for. E.g. sol is returned as XML, which I would have to parse again.
Is there a way to just export the variables and their values to e.g. a tab- or comma-separated file?
There is no automatic way to do this from the interactive. If you do something like the following, it gets you close:
./cplex -c "read quadratic_obj.lp" "opt" "set logfile tmp.log" "display solution variables -" "quit"
This will put the output into a file named tmp.log, but there is still some extra stuff in there that you'd need to post-process with a script. See the CPLEX documentation (for version 12.6.3) for more information on this technique.
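For that post-processing step, a hypothetical sketch (it assumes the solution lines in tmp.log look like the two-column display output shown above, i.e. a variable name followed by a numeric value, and writes to the myoutput.csv name from the question):

import csv
import re

with open('tmp.log') as log, open('myoutput.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['variable', 'value'])
    for line in log:
        # keep only lines that look like "<name> <number>"
        match = re.match(r'^\s*(\S+)\s+(-?\d+\.\d+)\s*$', line)
        if match:
            writer.writerow([match.group(1), match.group(2)])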
Another alternative would be to use the APIs. Then you have complete control over the output. For example, using the Python API, you could do something like the following:
import cplex

cpx = cplex.Cplex()
cpx.read('quadratic_obj.lp')
cpx.solve()
# Check the solution status here via cpx.solution.get_status()
for name, value in zip(cpx.variables.get_names(),
                       cpx.solution.get_values()):
    print(name, value)
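And if a comma-separated file is wanted directly, the same loop could feed the csv module; a small sketch (solution.csv is just an example name, cpx is the object from above, and Python 3 is assumed):

import csv

with open('solution.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['variable', 'value'])
    for name, value in zip(cpx.variables.get_names(),
                           cpx.solution.get_values()):
        writer.writerow([name, value])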
You can do that within CPLEX with OPL:
dvar float+ a;
dvar float+ b;

minimize a + b + (a*a + 4*a*b + 7*b*b)/2;
subject to
{
    c1: a + b >= 10;
}

execute
{
    var f = new IloOplOutputFile("res.csv");
    f.writeln(a);
    f.writeln(b);
    f.close();
}
and this will create a CSV file named res.csv.

How to process various tasks like video acquisition in parallel in MATLAB?

I want to acquire image data from a stereo camera simultaneously, or in parallel, save it somewhere, and read the data when needed.
Currently I am doing:
for i = 1:100
    start([vid1 vid2]);
    imageData1 = getdata(vid1, 1);
    imageData2 = getdata(vid2, 1);
    % do several calculations
    ...
end
With this, the cameras are working serially and it is very slow. How can I make the two cameras work at the same time?
Please help.
P.S.: I also tried parfor, but it does not help.
No Parallel Computing Toolbox required!
The following solution can generally solve problems like yours:
First, the videos: I just use some vectors as "data" and save them to .mat files; these stand in for your two video files:
% Create some fake "videos"
fakevideo1 = [1 ; 1 ; 1];
save('fakevideo1','fakevideo1');
fakevideo2 = [2 ; 2 ; 2];
save('fakevideo2','fakevideo2');
The basic trick is to create a function which launches another instance of MATLAB:
function [ ] = parallelinstance( fakevideo_number )
    % create the command
    % -sd (set directory), pwd (current directory), -r (run function) ...
    % finally "&" to indicate background computation
    command = strcat('matlab -sd', {' '}, pwd, {' '}, '-r "processvideo(', num2str(fakevideo_number), ')" -nodesktop -nosplash &');
    % call the command
    system( command{1} );
end
Most important is the use of & at the end of the terminal command!
Within this function another function is called where the actual video processing is done:
function [] = processvideo( fakevideo_number )
    % create the file and variable names
    filename = strcat('fakevideo', num2str(fakevideo_number), '.mat');
    varname = strcat('fakevideo', num2str(fakevideo_number));
    % load the video into the workspace or whatever
    load(filename);
    A = eval(varname);
    % do what has to be done
    results = A*2;
    % save the results to a file, grandmother's mailbox, etc.
    save([varname 'processed'], 'results');
    % just to show that both processes run in parallel
    pause(5)
    exit
end
Finally call the two processes in your main script:
% function call with number of video: parallelinstance(fakevideo_number)
parallelinstance(1);
parallelinstance(2);
My code is completely executable, so just play around a bit. I tried to keep it simple.
Afterwards you will find two .mat files with the processed video "data" in your working directory.
Be sure to adjust the string fakevideo to the name root of all your video files.

Jython - importing a text file to assign global variables

I am using Jython and wish to import a text file that contains many configuration values such as:
QManager = MYQM
ProdDBName = MYDATABASE
etc.
... and then I am reading the file line by line.
What I am unable to figure out is this: as I read each line, I assign whatever is before the = sign to a local loop variable named MYVAR and whatever is after the = sign to a local loop variable named MYVAL. How do I ensure that, once the loop finishes, I have a bunch of global variables such as QManager, ProdDBName, etc.?
I've been working on this for days - I really hope someone can help.
Many thanks,
Bret.
See other question: Properties file in python (similar to Java Properties)
Automatically setting global variables is not a good idea in my opinion. I would prefer a global ConfigParser object or dictionary. If your config file is similar to a Windows .ini file, then you can read it and set some global variables with something like:
def read_conf():
    global QManager
    import ConfigParser
    conf = ConfigParser.ConfigParser()
    conf.read('my.conf')
    QManager = conf.get('QM', 'QManager')
    print('Conf option QManager: [%s]' % (QManager))
(this assumes you have a [QM] section in your my.conf config file)
If you want to parse the config file without the help of ConfigParser or a similar module, then try:
my_options = {}
f = open('my.conf')
for line in f:
    if '=' in line:
        k, v = line.split('=', 1)
        k = k.strip()
        v = v.strip()
        print('debug [%s]:[%s]' % (k, v))
        my_options[k] = v
f.close()

print('-' * 20)

# this will show the value just read
print('Option QManager: [%s]' % (my_options['QManager']))

# this will fail with a KeyError exception;
# you must be aware of non-existing values or values
# where the case differs
print('Option qmanager: [%s]' % (my_options['qmanager']))
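That said, if you really do want module-level globals named after each config key (which, as noted above, I would avoid), one possible sketch is to push the dict into globals() after the loop; the keys would need to be valid Python identifiers for this to be useful:

# promote every parsed option to a module-level global variable
globals().update(my_options)

# QManager and ProdDBName now exist as globals, per the config shown in the question
print('Global QManager: [%s]' % (QManager))
print('Global ProdDBName: [%s]' % (ProdDBName))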

R problem with apply + rbind

I cannot seem to get the following to work:
directory <- "./"
files.15x16 <- c("15x16-70d.out", "15x16-71d.out")
data.15x16<-rbind( lapply( as.array(paste(directory, files.15x16, sep="")), FUN=read.csv, sep=" ", header=F) )
What it should be doing is pretty straightforward - I have a directory name, some file names, and actual files of data. I paste the directory and file names together, read the data in from the files, and then rbind them all together into a single chunk of data.
Except the result of the lapply has the data in [[]] - i.e., accessing it occurs via a[[1]], a[[2]], etc., which rbind doesn't seem to accept.
Suggestions?
Use do.call:
data.15x16 <- do.call(rbind, lapply(paste(directory, files.15x16, sep=""),
                                    FUN=read.csv, sep=" ", header=F))
You also don't need the as.array - it does not really do anything here.