How to acquire DualEELS spectra by DM script? - hardware

I would like to acquire both low-loss and high-loss EELS spectra simultaneously in DualEELS mode via DM script. However, the command for acquiring an EELS spectrum, EELSAcquireSpectrum(), obtains only a single EELS spectrum.
Is there an appropriate scripting command for DualEELS acquisition?
My system is GMS 2.x, but please let me know even if such a command is only available in GMS 3.x.

GMS 3.2 (possibly also GMS 2.3)
I am not aware of any specific command for DualEELS. As a rough workaround: when you start the acquisition via EELSInvokeCaptureButton() or EELSInvokeViewButton(), the mode you have set in the UI is followed. You then need to grab the two front-most images by script.
This is a rough example script:
// Start the acquisition with whatever mode (e.g. DualEELS) is currently set in the UI
EELSInvokeCaptureButton()

image low, high

// Wait until the acquisition has finished
while ( EELSAcquisitionIsActive() )
{
    Result( " \n waiting..." )
    sleep( 0.1 )
}

// Grab the two front-most image documents and label them
high := GetImageDocument( 0 ).ImageDocumentGetImage( 0 )
low := GetImageDocument( 1 ).ImageDocumentGetImage( 0 )

low.ImageSetName( low.ImageGetName() + " - l" )
high.ImageSetName( high.ImageGetName() + " - h" )

Related

How to find rejected files due to errors in Apache Beam Java SDK

I have N files of the same type to be processed, and I pass a wildcard input pattern (C:\\users\\*\\*).
How do I find the file name and the record that were rejected while uploading to BigQuery in Java?
I guess BQ writes to the temp location path that you pass to your pipeline and not to a local path [honestly not sure about this].
In my case, with Python, I used to pass a GCS bucket as the temp location, and when an error occurs, the command-line logs usually show the name of the log file that contains the rejected records.
I then use the gsutil cp command to copy it to my local computer and read it.
BigQuery I/O (Java and Python SDKs) supports the deadletter pattern: https://beam.apache.org/documentation/patterns/bigqueryio/.
Java
result
    .getFailedInsertsWithErr()
    .apply(
        MapElements.into(TypeDescriptors.strings())
            .via(
                x -> {
                    System.out.println(" The table was " + x.getTable());
                    System.out.println(" The row was " + x.getRow());
                    System.out.println(" The error was " + x.getError());
                    return "";
                }));
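For context, the result above is the WriteResult returned by a BigQueryIO write. A minimal, hedged sketch of how it might be obtained is shown below; the table spec is a placeholder and rows is assumed to be a PCollection<TableRow> built earlier in your pipeline. Note that withExtendedErrorInfo() is what enables getFailedInsertsWithErr(), and failed-insert collection applies to streaming inserts.
// Assumed imports from the Beam Java SDK GCP I/O module:
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.InsertRetryPolicy;
import org.apache.beam.sdk.io.gcp.bigquery.WriteResult;

// `rows` is assumed to be a PCollection<TableRow> built earlier in the pipeline;
// the table spec below is a placeholder.
WriteResult result =
    rows.apply(
        "WriteToBQ",
        BigQueryIO.writeTableRows()
            .to("my-project:my_dataset.my_table")
            .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)  // failed inserts are reported for streaming inserts
            .withExtendedErrorInfo()                                 // required for getFailedInsertsWithErr()
            .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors())
            .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));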
Python
errors = (
    result['FailedRows']
    | 'PrintErrors' >>
      beam.FlatMap(lambda err: print("Error Found {}".format(err))))

IDL batch processing: fully automatic input selection

I need to process MODIS ocean level 2 data, and I obtained an external plugin for ENVI (https://github.com/dawhite/EPOC/releases). Now I want to batch process hundreds of images, for which I modified the code as shown below. The code runs fine, but I have to select the input file every time. Can anyone please help me make the program fully automatic? Thanks a lot for your help!
Pro OCL2convert
  dir = 'C:\MODIS\'
  CD, dir
  ; batch processing of level 2 ocean chlorophyll data
  files = file_search('*.L2_LAC_OC.x.hdf', count=numfiles)
  ; this command will search for all files in the directory which end with
  ; the specified pattern
  counter = 0
  ; this is a counter that tells IDL which file is being read - starts at 0
  While (counter LT numfiles) Do begin
    ; this loop runs until the counter equals
    ; the number of files matching the pattern
    name = files(counter)
    openr, 1, name
    proj = envi_proj_create(/utm, zone=40, datum='WGS-84')
    ps = [1000.0d, 1000.0d]
    no_bowtie = 0     ; same as not setting the keyword
    no_msg = 1        ; same as setting the keyword
    ; OUTPUT CHOICES
    ; 0 -> standard product only
    ; 1 -> georeferenced product only
    ; 2 -> standard and georeferenced products
    output_choice = 2
    ; RETURNED VALUES
    ; r_fid -> ENVI FID for the standard product, if requested
    ; georef_fid -> ENVI FID for the georeferenced product, if requested
    convert_oc_l2_data, fname=fname, output_path=output_path, $
      proj=proj, ps=ps, output_choice=output_choice, r_fid=r_fid, $
      georef_fid=georef_fid, no_bowtie=no_bowtie, no_msg=no_msg
    print, 'done!'
    close, 1
    counter = counter + 1
  Endwhile
End
Not knowing what convert_oc_l2_data does (there is no public documentation for it), I would say that the problem might be that the output_path keyword variable is never defined anywhere in the rest of your program.
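If undefined keyword variables are indeed the issue, note that fname is also never assigned in the posted code, so the routine presumably has nothing to open and may fall back to asking for a file, which would explain having to select the input every time. A hedged IDL sketch of assigning both inside the loop (the output directory below is just a placeholder) could look like:
name = files(counter)
fname = name                      ; pass the current file explicitly
output_path = 'C:\MODIS\output\'  ; placeholder - set to your actual output directory
convert_oc_l2_data, fname=fname, output_path=output_path, $
  proj=proj, ps=ps, output_choice=output_choice, r_fid=r_fid, $
  georef_fid=georef_fid, no_bowtie=no_bowtie, no_msg=no_msg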

redis accumulate & publish a set of operations

Is it possible to instruct Redis to accumulate a set of operations and then issue a "publish all" command to publish the entire set of operations (in linear order)?
So you'd somehow set a marker (startpublish?) and a cache would accumulate all operations (hdel, hset) received from that point on.
Finally you'd issue a command (publishall?) and Redis would then broadcast the commands in the linear order they were received.
IMPORTANT NOTE: I need to perform the set operations programmatically in Node.js, via a Redis Sentinel client (package redis-sentinel-client).
You can queue multiple commands to Redis using the MULTI and EXEC commands.
So essentially what you end up with is something like this:
redis > multi
redis > set foo bar
redis > set alpha beta
redis > exec
What you get back is an array, in the same order in which you executed the commands. So index 0 of the resulting array will contain the error and/or result of the command set foo bar.
An example of the multi command can be found here: http://redis.io/commands/multi
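Since you mention doing this programmatically from Node.js: as far as I know, redis-sentinel-client exposes a node_redis-compatible client, so a hedged sketch (key and field names are just examples; check the package's README for the exact createClient signature) would look like this:
// `client` is assumed to be the node_redis-compatible client returned by
// redis-sentinel-client's createClient(...) for your master.
var multi = client.multi();

// queue the operations; nothing is executed yet
multi.hset('myhash', 'field1', 'value1');
multi.hdel('myhash', 'field2');

// EXEC runs everything atomically, in the order queued
multi.exec(function (err, replies) {
    // replies[0] is the result of the HSET, replies[1] of the HDEL
    console.log(err, replies);
});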

How to process various tasks like video acquisition in parallel in MATLAB?

I want to acquire image data from a stereo camera simultaneously, or in parallel, save it somewhere, and read the data back when needed.
Currently I am doing
for i = 1:100
    start([vid1 vid2]);
    imageData1 = getdata(vid1,1);
    imageData2 = getdata(vid2,1);
    % do several calculations
    % ...
end
With this, the cameras work serially and it is very slow. How can I make the two cameras acquire at the same time?
P.S.: I also tried parfor, but it does not help.
No Parallel Computing Toolbox required!
The following approach can generally solve problems like yours.
First the videos: I just use some vectors as "data" and save them to .mat files; these stand in for your two video files:
% Create some fake "videos"
fakevideo1 = [1 ; 1 ; 1];
save('fakevideo1','fakevideo1');
fakevideo2 = [2 ; 2 ; 2];
save('fakevideo2','fakevideo2');
The basic trick is to create a function which launches another MATLAB instance in the background:
function [ ] = parallelinstance( fakevideo_number )
    % build the command string
    % -sd (set directory), pwd (current directory), -r (run function) ...
    % finally "&" to indicate background computation
    command = strcat('matlab -sd',{' '},pwd,{' '},'-r "processvideo(',num2str(fakevideo_number),')" -nodesktop -nosplash &');
    % call command
    system( command{1} );
end
Most important is the use of & at the end of the terminal command!
Within this function another function is called where the actual video processing is done:
function [] = processvideo( fakevideo_number )
    % create file and variable name
    filename = strcat('fakevideo',num2str(fakevideo_number),'.mat');
    varname = strcat('fakevideo',num2str(fakevideo_number));
    % load video to workspace or whatever
    load(filename);
    A = eval(varname);
    % do what has to be done
    results = A*2;
    % save results to workspace, file, grandmothers mailbox, etc.
    save([varname 'processed'],'results');
    % just to show that both processes run in parallel
    pause(5)
    exit
end
Finally call the two processes in your main script:
% function call with number of video: parallelinstance(fakevideo_number)
parallelinstance(1);
parallelinstance(2);
The code is completely executable, so just play around a bit. I tried to keep it simple.
Afterwards you will find two .mat files with the processed video "data" in your current directory.
Remember to adjust the string fakevideo to the name root of your actual video files.

Executing a script from inside code in VxWorks 6.7

In VxWorks 5.5.1 you could run a script using the execute command. In VxWorks 6.7 the execute command is no longer supported. Does anyone know if there is a replacement? I am specifically asking about calling it from inside code, not from the command line.
Through much research it appears that there are a few ways to accomplish this, but none is exactly the same as the old execute command. As I stated in the comment below, it turns out that execute was never an official API call.
1) shellCmdExec can be used, but it must be called from inside the shell task.
2) The solution we chose to employ: call the script from within our startup script.
3) And a hack:
fd = open("/y/startup.go", 0, 0)   /* open the script you want to execute */
v = shellFromNameGet("tShell0")    /* get the shell ID */
/* Use shellInOutGet beforehand to save off the shell's standard input */
shellInOutSet (v, fd, -1, -1)      /* set the shell's standard input to the file */
/* Restore the standard input (saved with shellInOutGet) once the shell is done
   with the script; for example, have the script increment a variable when it
   is finished. */
close(fd)
There's a solution in the VxWorks Kernel Programmer's Guide 6.7; the problem is that it did not work for me, but it could help you:
shellGenericInit ("INTERPRETER=Cmd", 0, NULL, &shellTaskName, FALSE, FALSE,
                  fdScript, STD_OUT, STD_ERR);
do
    taskDelay (sysClkRateGet ());
while (taskNameToId (shellTaskName) != ERROR);
close (fdScript);
Check Section 15.2.15 of the document.
You can do it at the serial driver layer. Try the following code; it shows how to inject text into the shell's input.
For example:
pass_to_sio("memShow; ifconfig"); in your C code, or
-> sp pass_to_sio, "memShow; ifconfig" from the shell.
If you want to run a script file:
pass_to_sio("< test.scr"); in your C code, or
-> sp pass_to_sio, "< test.scr" from the shell.
void pass_to_sio(char *input)
{
    int old_priority;
    NS16550_CHAN *pChan = &ns16550Chan[0];   /* this line depends on your BSP */

    taskPriorityGet(taskIdSelf(), &old_priority);
    taskPrioritySet(taskIdSelf(), 250);      /* task priority must be lower than tShell0 */

    /* feed each character into the shell's receive path */
    while (input != NULL && *input != '\0')
    {
        (*pChan->putRcvChar) (pChan->putRcvArg, *input);
        input++;
    }
    (*pChan->putRcvChar) (pChan->putRcvArg, '\r');   /* finish with a carriage return */

    taskPrioritySet(taskIdSelf(), old_priority);
}