Scilab: index in variable name in a loop

I would like to read some images with Scilab, and I use the function imread like this:
im01=imread('kodim01t.jpg');
im02=imread('kodim02t.jpg');
im03=imread('kodim03t.jpg');
im04=imread('kodim04t.jpg');
im05=imread('kodim05t.jpg');
im06=imread('kodim06t.jpg');
im07=imread('kodim07t.jpg');
im08=imread('kodim08t.jpg');
im09=imread('kodim09t.jpg');
im10=imread('kodim10t.jpg');
I would like to know if there is a way to do something like the following, in order to optimize the code:
for i = 1:5
im&i=imread('kodim0&i.jpg');
end
Thanks in advance.

I see two possible solutions: using execstr, or using some kind of list/matrix.
Execstr
First create a string of the command to execute with msprintf, and then execute it with execstr. Note that in the msprintf conversion the right number of leading zeros is inserted by the %02d format specifier.
for i = 1:5
    cmd = msprintf('im%d=imread(''kodim%02dt.jpg'');', i, i);
    execstr(cmd);
end
List/Matrix
This is probably the more intuitive option, using an indexable container such as a list.
// This list could be generated using msprintf, as in the example above
file_names_list = list("kodim01t.jpg", "kodim02t.jpg", "kodim03t.jpg");
// Create an empty list to hold the opened images
opened_images = list();
for i = 1:length(file_names_list)
    // Open the image and append it to the end of the list
    opened_images($+1) = imread(file_names_list(i));
end

Related

Looking for a way to get a detailed description of ZTERM

I am currently trying to program a function module which should, in theory, output a custom table like T052, but with an additional field Z_TEXTLONG, which explains the details of the chosen ZTERM, akin to the text in FI_F4_ZTERM's popup. Here's what I tried:
LOOP AT T_ZBEDS ASSIGNING FIELD-SYMBOL(<line>).
  CALL FUNCTION 'FI_F4_ZTERM'
    EXPORTING
      I_KOART       = 'K'
      I_ZTERM       = <line>-zterm
      I_XSHOW       = ''
      I_ZTYPE       = ''
      I_NO_POPUP    = 'X'
    IMPORTING
      E_ZTERM       = v_text
    EXCEPTIONS
      NOTHING_FOUND = 1
      OTHERS        = 2.

  WRITE v_text TO <line>-Z_TEXTLONG.
ENDLOOP.
From what I gathered, this does not work because FI_F4_ZTERM writes the list it returns into E_ZTERM rather than a single value, which is what I would need. I am a bit lost as to what I should do next. I tried looking into how exactly FI_F4_ZTERM generates these texts, or where it calls them from, but I was not successful. Currently I am trying to get this text from V_T052, but that does not work either. I would be thankful for any suggestions.
Try the function module FI_TEXT_ZTERM; it has a single import parameter, I_T052.
Why not just call FI_F4_ZTERM with I_NO_POPUP = 'X', importing ET_ZTERM = lt_zterm? That gives you the list of payment terms and their texts. Then, in the loop, read the text from that table of payment terms:
LOOP AT T_ZBEDS ASSIGNING FIELD-SYMBOL(<line>).
  READ TABLE lt_zterm ASSIGNING FIELD-SYMBOL(<term>)
    WITH KEY zterm = <line>-zterm.
  " ...copy the text field of <term> into <line>-Z_TEXTLONG here...
ENDLOOP.

Using ssplit options for CoreNLP

According to the documentation, I can use options such as ssplit.isOneSentence for parsing my document into sentences. How exactly do I do this though, given a StanfordCoreNLP object?
Here's my code -
Properties props = new Properties();
props.setProperty("annotators", "tokenize, ssplit, pos, lemma, ner, depparse");
StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
Annotation document = new Annotation(doc);
pipeline.annotate(document);
List<CoreMap> sentences = document.get(SentencesAnnotation.class);
At what point do I add this option and where?
Something like this?
pipeline.ssplit.boundaryTokenRegex = '"'
I'd also like to know how to use it for the specific option boundaryTokenRegex
EDIT:
I think this seems more appropriate:
props.put("ssplit.boundaryTokenRegex", "\"");
But I still have to verify.
The way to make sentences end at any instance of a " is this:
props.setProperty("ssplit.boundaryMultiTokenRegex", "/''/");
or
props.setProperty("ssplit.boundaryMultiTokenRegex", "/\"/");
depending on how the quote is stored in the tokens (CoreNLP normalizes it to the former).
And if you want both opening and closing quotes:
props.setProperty("ssplit.boundaryMultiTokenRegex", "/''|``/");

Talend - Dynamic Column Name (Enterprise version)

Can anyone help me solve this case?
I have many files to process; two of them are shown in the screenshot below, along with my expected output.
I use this transformation on Talend: tFileList---tInputExcel---tUnpivotRow---tMap---tPostgresqlOutput
The output is different from my expected output. This is a screenshot of the output.
Can anyone help me reach my expected output, which is shown in the first picture above?
This will be pretty hard. You'd have to handle the input as a text file, and whenever you find a "store" value in the first column you'd update your types with the values from that row.
Here's how I'd start:
Basically, the tJavaFlex "begin" part would contain:
String col1Type;
String col2Type;
// ... one declaration per column ...
String colNType;
main part:
if (input_row.col0.equalsIgnoreCase("store")) {
    col1Type = input_row.col1;
    col2Type = input_row.col2;
    colNType = input_row.colN;
    continue; /* so this record will be ignored by the rest of the components! */
}
output_row.col1Type = col1Type;
output_row.col1Value = Integer.valueOf(input_row.col1);
/* because we have text and need numbers */
I think using propagate results will save you from writing out all the other fields.
And from here it would be very simple, as you end up with key, type/value, type/value, type/value results.
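To make that logic concrete outside of Talend, here is a small stand-alone Java sketch of the same idea (the sample rows and column layout are made up): a "store" row refreshes the remembered types and is not emitted, and every other row is written out as type/value pairs.

import java.util.Arrays;
import java.util.List;

public class StoreTypeValueSketch {
    public static void main(String[] args) {
        // Hypothetical input: a "store" header row followed by data rows.
        List<String[]> rows = Arrays.asList(
                new String[] {"store", "typeA", "typeB"},
                new String[] {"row1", "10", "20"},
                new String[] {"row2", "30", "40"});

        String[] currentTypes = null;
        for (String[] row : rows) {
            if (row[0].equalsIgnoreCase("store")) {
                // Remember the type names and skip this record.
                currentTypes = Arrays.copyOfRange(row, 1, row.length);
                continue;
            }
            // Emit one type/value pair per remaining column,
            // converting the text values to numbers.
            for (int i = 1; i < row.length; i++) {
                System.out.println(row[0] + " | " + currentTypes[i - 1]
                        + " | " + Integer.valueOf(row[i]));
            }
        }
    }
}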

SQL: Use a predefined list in the where clause

Here is an example of what I am trying to do:
def famlist = selection.getUnique('Family_code')
... Where """...
and testedWaferPass.family_code in $famlist
"""...
famlist is a list of objects.
'selection' will change every run, so the list is always changing.
I want my SQL query to return only the rows whose family_code is found in the list that I have created.
I realize it is supposed to look like: in ('foo','bar')
But no matter what I do, my list will not come out like that. So do I have to turn my list into a string?
('\${famlist.join("', '")}')
I've tried the above, but it wasn't working for me. Just thought I would throw that in there. Would love some suggestions. Thanks.
I am willing to bet there is a Groovier way to implement this than shown below, but this works. Here's the important part of my sample script. nameList originally contains the string names. You need to quote each entry in the list, then strip the [ and ] from the toString result. I tried passing the values as a prepared statement, but for that you need to dynamically create the string of ? placeholders for each element in the list. This quick hack doesn't use a prepared statement.
def nameList = ['Reports', 'Customer', 'Associates']
def nameListString = nameList.collect{ "'${it}'" }.toString().substring(1)
nameListString = nameListString.substring(0, nameListString.length() - 1)
String stmt = "select * from action_group_i18n where name in ($nameListString)"
db.eachRow(stmt) { row ->
    println "$row.action_group_id, $row.language, $row.name"
}
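For the prepared-statement route mentioned above, the trick is simply to generate one ? placeholder per list element and then bind the values one by one. Here is a minimal sketch of that idea in plain JDBC (table and column names reused from the snippet above); groovy.sql.Sql can likewise be given the generated SQL together with the list of parameters.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class InListQuery {

    // Runs "select * from action_group_i18n where name in (?, ?, ...)"
    // with one placeholder per entry of nameList and returns the names found.
    static List<String> selectByNames(Connection con, List<String> nameList) throws SQLException {
        String placeholders = String.join(", ", Collections.nCopies(nameList.size(), "?"));
        String sql = "select * from action_group_i18n where name in (" + placeholders + ")";

        List<String> found = new ArrayList<>();
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            // JDBC parameter indexes start at 1.
            for (int i = 0; i < nameList.size(); i++) {
                ps.setString(i + 1, nameList.get(i));
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    found.add(rs.getString("name"));
                }
            }
        }
        return found;
    }
}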
Hope this helps!

Conditions in Pig storage

Say I have an input file containing maps.
sample.txt
[1#"anything",2#"something",3#"anotherthing"]
[2#"kish"]
[3#"mad"]
[4#"sun"]
[1#"moon"]
[1#"world"]
Since there are no values with the specified key, I do not want to save the result to a file. Is there any conditional statement that I can include with the STORE ... INTO statement? Please help me through this; the following is the Pig script.
A = LOAD 'sample.txt';
B = FOREACH A GENERATE $0#'5' AS temp;
C = FILTER B BY temp is not null;
-- It actually generates an empty part-r-X file
-- Is there any conditional statement I can include so that if C is empty, it is not stored?
STORE C INTO '/user/logs/output';
Thanks
Am I going wrong somewhere? Please correct me if I am wrong.
From Chapter 9 of Programming Pig,
Pig Latin is a dataflow language. Unlike general purpose programming languages, it does not include control flow constructs like if and for.
Thus, it is impossible to do this using just Pig.
I'm inclined to say you could achieve this using a combination of a custom StoreFunc and a custom OutputFormat, but that seems like it would be too much added overhead.
One way to solve this would be to just delete the output file if no records are written. This is not too difficult using embedded Pig. For example, using Python embedding:
from org.apache.pig.scripting import Pig
P = Pig.compile("""
A = load 'sample.txt';
B = foreach A generate $0#'5' AS temp;
C = filter B by temp is not null;
store C into 'output/foo/bar';
""")
bound = P.bind()
stats = bound.runSingle()
if not stats.isSuccessful():
    raise RuntimeError(stats.getErrorMessage())

result = stats.result('C')
if result.getNumberRecords() < 1:
    print 'Removing empty output directory'
    Pig.fs('rmr ' + result.getLocation())