GAMS: changing names of files to include in a loop

I want to include multiple csv files and refer to them in a loop. All my files have systematic names: file1.csv, file2.csv, etc.
How do I create a loop including all the files?
I am looking for something like this:
set j /1*10/
loop(j,
$include "filej.csv")
How do I go about it?

One way to do this is with the Put Writing Facility.
* set elements enumerate the names of the csv files
set fn / '1.csv' * '10.csv' /;
* external file that will receive the generated $include lines
file output / output.rsp /;
put output /;
* write one $include line per file name; fn.tl prints the element label
loop(fn,
  put '$include file' fn.tl /);
This will produce a file output.rsp with contents:
$include file1.csv
$include file2.csv
...
$include file10.csv
Then $include output.rsp in the relevant .gms file.
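Since put writes output.rsp at execution time while $include is processed at compile time, the include belongs in a .gms file that is compiled after the generating run. A minimal sketch (GAMS allows includes to nest; the statements surrounding the include depend on what the csv files contain):
* pull in the generated list of $include lines
$include output.rsp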

Related

Comparing a .TTL file to a CSV file and extracting "similar" results into a new file

I have a large CSV file filled with millions of lines, each of which has the following format:
/resource/example
Now I also have a .TTL file in which lines may contain exactly the same text. I want to extract every line from that .TTL file containing the same text as my current CSV file into a new CSV file.
I think this is possible using grep, but that is a Linux command and I am very, very inexperienced with it. Is it possible to do this on Windows? I could write a Python script that compares the two files, but since both files contain millions of lines I think that would take days to execute. Could anyone point me in the right direction on how to do this?
Thanks in advance! :)
Edit:
Example line from .TTL file:
<nl.dbpedia.org/resource/Algoritme>; <purl.org/dc/terms/subject>; <nl.dbpedia.org/resource/Categorie:Algoritme>; .
Example line from current CSV file:
/resource/algoritme
So with these two example lines it should export the line from the .TTL file into a new CSV file.
Using GNU awk. First read the CSV and hash it into array a. Then compare each entry in a against each record of the TTL file:
$ awk 'BEGIN { IGNORECASE = 1 }      # ignoring the case
NR==FNR { a[$1]; next }              # hash csv entries into array a
{
    for(i in a)                      # each entry in a
        if($0 ~ i) {                 # check against every record of ttl
            print                    # if match, output matched ttl record
            next                     # and skip to next ttl record
        }
}' file.csv file.ttl
<nl.dbpedia.org/resource/Algoritme>; <purl.org/dc/terms/subject>; <nl.dbpedia.org/resource/Categorie:Algoritme>; .
Depending on the sizes of the files this might be slow; it could probably be made faster, but not based on the info offered in the OP.
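Since the question mentions grep: treating each CSV line as a literal pattern with grep's fixed-string mode is usually much faster than the regex loop above. A sketch, assuming the CSV holds one pattern per line and that grep is available on Windows via Git Bash, Cygwin, or WSL:
# -F treats each CSV line as a literal string, -f reads patterns from a file, -i ignores case
grep -i -F -f file.csv file.ttl > matches.csv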

Apache pig load multiple files

I have the following folder structure, with all content adhering to the same schema:
/project/20160101/part-v121
/project/20160105/part-v121
/project/20160102/part-v121
/project/20170104/part-v121
I have implemented a Pig script which uses JsonLoader to load and process individual files. However, I need to make it generic enough to read all the files under the dated folders.
Right now I have managed to extract the file paths using the following:
hdfs dfs -ls hdfs://local:8080/project/20* > /tmp/ei.txt
cat /tmp/ei.txt | awk '{print $NF}' | grep part > /tmp/res.txt
Now I need to know how to pass this list to the Pig script so that my program runs on all the files.
We can use a glob (wildcard) path in the LOAD statement.
In your case the statement below should help; let me know if you face any issues.
A = LOAD 'hdfs://local:8080/project/20160102/*' USING JsonLoader();
Assuming .pig_schema (produced by JsonStorage) in the input directory.
Ref : https://pig.apache.org/docs/r0.10.0/func.html#jsonloadstore
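To cover every dated folder in one go, the wildcard can also sit in the directory component. A sketch, assuming the same host/port and layout as above and that a usable schema is available to JsonLoader for each matched directory:
-- Hadoop expands the glob before Pig loads the matching part files
A = LOAD 'hdfs://local:8080/project/20*/part-v121' USING JsonLoader();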

OpenVMS - Add STRING to Filename via DCL

I have a number of files created by a program on our selling system that are produced in a format like the following:
CRY_SKI_14_EDI.LIS
CRY_SUM_14_EDI.LIS
THO_SKI_14_EDI.LIS
THO_LAK_14_EDI.LIS
CRY_SKI_IE_14_EDI.LIS
The number of these files differs depending on how our product is split across different brandings. Is it possible to rename them all so that they read like the following:
CRY_SKI_14_EDI_DEMO.LIS
CRY_SUM_14_EDI_DEMO.LIS
THO_SKI_14_EDI_DEMO.LIS
THO_LAK_14_EDI_DEMO.LIS
CRY_SKI_IE_14_EDI_DEMO.LIS
I need the files to be correctly named prior to FTP, because a hardcoded file name could potentially not exist (if the brand is not on sale), which would terminate the FTP and prevent the files that follow it from being transmitted to our FTP server.
The OpenVMS rename command is more handy (imho) than the Windows or Unix variants, because it can bulk-change chunks of the full file name, such as the 'name', 'type', or (sub)directory.
For example:
$ rename *.dat *.old
That's great, but it will not change text within a chunk (component), such as the name part requested here.
For that, the classic DCL approach is a quick loop, either parsing directory output (Yuck!) or using F$SEARCH. For example:
$loop:
$ file = f$search("*EDI.LIS")
$ if file .eqs. "" then exit
$ name = f$parse(file,,,"name","syntax_only") ! grab name component from full name
$ rename/log 'file' 'name'_demo ! rename 'fills in the blanks'
$ goto loop
Personally I use PERL one-liners for this kind of work.
(and I test with -le using 'print' instead of 'rename' first. :-)
$ perl -e "for (<*edi.lis>) { $old = $_; s/_edi/_edi_demo/; rename $old,$_}"
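The dry run mentioned above might look something like the following sketch (untested; qq() avoids having to double the quotes inside DCL's double-quoted string):
$ perl -le "for (<*edi.lis>) { $old = $_; s/_edi/_edi_demo/; print qq($old --> $_) }"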
Enjoy!
Hein

How do I use a for loop to get file names and then use them?

I have a folder with the files a.txt, b.txt, and c.txt in it, and I want to use a for-loop to get the names of these files, store them in a variable, and then add them to another text file. The text files are located in the same folder as my bat file. Is it possible to do this? If so, please show me; these for-loops confuse the crap out of me...
What I have now:
@echo on
SET name=hey
echo >text.txt
for %F in (*.*) do (set name=#fname
echo name >> text.txt)
@Echo OFF
(For %%# in ("*.txt") do (
    Set "FileName=%%~n#"
    Call Echo %%FILENAME%%
))>"MyFilenames.txt"
Pause&Exit
NOTE 1: File names are stored in the "FileName" variable, but that is not really necessary; you can write the file name directly to the text file.
NOTE 2: If you want a recursive loop through files, use the /R switch of the For command.
NOTE 3: If you also want the file extension, change "%%~n#" to "%%~nx#".
UPDATE:
An alternative script:
@Echo OFF
For %%# in (*.txt) do (Echo %%~n#>> "MyFiles.txt")
Pause&Exit

rename file in bourne shell

I am trying to write a Bourne-shell script that takes a directory as a parameter, looks for images named ixxx.a, and renames them to ixxx_a.img, where "xxx" is the sequence number (for example, image files would be named i001.a, i002.a, i003.a, ...).
Here is what I tried:
mv $1/f[0-9][0-9][0-9].a $1/f[0-9][0-9][0-9]_a.img
but it says that the DEST is not a directory.
Any help would be much appreciated. Thanks.
for i in $1/f[0-9][0-9][0-9].a; do
    mv $i ${i%.a}_a.img
done
However, this does not handle blank spaces in the file/folder names. In that case you'd have to use a while loop so you get one file name per line (see below). There are probably dozens of other ways, including rename.
find "$1" -maxdepth 1 -type f -name "f[0-9][0-9][0-9].a" | while read i; do
    mv "$i" "${i%.a}_a.img"
done
Edit: Perhaps I should explain what I did there. It's called string substitution, and these are the main use cases for a variable var:
# Get first two characters
${var:0:2}
# Remove shortest rear-anchored pattern - this one would give the directory name of a file, for example
${var%/*}
# Remove longest rear-anchored pattern
${var%%/*}
# Remove shortest front-anchored pattern - this in particular removes a leading slash
${var#/}
# Remove longest front-anchored pattern - this would remove all but the base name of a file given its path
${var##*/}
# Replace a by b
${var//a/b}
For more see the man page.
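A quick worked example of the expansions used above (the path is made up for illustration):
#!/bin/sh
# illustrative only: show what each expansion produces for one value
var=/some/dir/i001.a
echo "${var%/*}"        # /some/dir             (strip shortest trailing /* - the directory)
echo "${var##*/}"       # i001.a                (strip longest leading */ - the base name)
echo "${var%.a}_a.img"  # /some/dir/i001_a.img  (the rename target built in the loop above)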