To grep contents from a CSV/text file using an AutoHotkey (AHK) script - scripting

Can anyone please help me write an AHK script based on the requirement below?
Requirement:
I have a CSV/TXT file in my Windows environment which contains 20,000+ records in the format below.
When I run the script, it should show an InputBox prompting for an instance name.
Example: if I enter Instance4, it should display the result ServerName4 in a MsgBox.
Sample Format:
ServerName1,ServerIP,Instance1,Type
ServerName2,ServerIP,Instance2,Type
ServerName3,ServerIP,Instance3,Type
ServerName4,ServerIP,Instance4,Type
ServerName5,ServerIP,Instance5,Type
.
.
.
Also, as the CSV/TXT file contains a large number of records, please also consider the best way to avoid delay in fetching the results.

Please post your code, or at least show what you've already done.
You can use a parsing loop with CSV as the delimiter, and make a variable for each 'Instance' whose value is that of the current row's 'ServerName'.
The steps are to first FileRead the data from the file, then Loop, Parse it like so:
Loop, Parse, data, `n, `r  ; outer loop: one row per iteration
{
    row := A_LoopField
    Loop, Parse, row, CSV  ; inner loop: one column per iteration
    {
        ; A_LoopField // current column's value
        ; A_Index    // current column's number
        ; Make a variable named with the value of column 3 (Instance) and give it the value of column 1 (ServerName)
    }
}
After that, you can make a loop (with Goto or a plain Loop) that repeatedly shows an InputBox, followed by a MsgBox command that prints out the needed variable by dereferencing the entered name, like so:
MsgBox % %input%
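Putting the pieces together, here is a minimal sketch of the whole approach in AHK v1 syntax; the file name servers.csv and the variable names are assumptions, not from the original question:
#NoEnv
FileRead, data, servers.csv                 ; read the whole file into memory once
Loop, Parse, data, `n, `r                   ; split the file into rows
{
    if (A_LoopField = "")
        continue
    row := A_LoopField
    Loop, Parse, row, CSV                   ; split the row into columns
    {
        if (A_Index = 1)
            serverName := A_LoopField       ; column 1: ServerName
        else if (A_Index = 3)
            %A_LoopField% := serverName     ; column 3: Instance -> variable named after it
    }
}
Loop
{
    InputBox, input, Server lookup, Enter an instance name:
    if (ErrorLevel)                         ; Cancel pressed
        break
    MsgBox % %input%                        ; e.g. shows ServerName4 for Instance4
}
Because FileRead loads the file once and every instance becomes a variable, each lookup after that is effectively instant even with 20,000+ rows; an associative array (e.g. servers[instance] := serverName) would work equally well and avoids clashing with other variable names.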


Load CSV file in PIG

In Pig, when we load a CSV file using a LOAD statement without mentioning a schema and with the default PigStorage('\t'), what happens? Will the load work fine and can we dump the data? Or will it throw an error since the file has ',' and the PigStorage delimiter is '\t'? Please advise.
When you load a CSV file without defining a schema using PigStorage('\t'), since there are no tabs in each line of the input file, the whole line will be treated as a single field of one tuple. You will not be able to access the individual words in the line.
Example:
Input file:
john,smith,nyu,NY
jim,young,osu,OH
robert,cernera,mu,NJ
a = LOAD 'input' USING PigStorage('\t');
dump a;
OUTPUT:
(john,smith,nyu,NY)
(jim,young,osu,OH)
(robert,cernera,mu,NJ)
b = foreach a generate $0, $1, $2;
dump b;
(john,smith,nyu,NY,,)
(jim,young,osu,OH,,)
(robert,cernera,mu,NJ,,)
Ideally, b should have been:
(john,smith,nyu)
(jim,young,osu)
(robert,cernera,mu)
if the delimiter was a comma. But since the delimiter was a tab and a tab does not exist in the input records, the whole line was treated as one field. Pig does not complain if a field is null; it just outputs nothing when there is a null. Hence you see only the commas when you dump b.
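For comparison, a minimal sketch of loading the same input with a comma as the delimiter (the field names in the schema are just illustrative):
a = LOAD 'input' USING PigStorage(',') AS (firstname:chararray, lastname:chararray, school:chararray, state:chararray);
b = foreach a generate firstname, lastname, school;
dump b;
OUTPUT:
(john,smith,nyu)
(jim,young,osu)
(robert,cernera,mu)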
Hope that was useful.

non-advancing WRITE...but not intentionally?

I have a brief snippet of code from a Fortran 95 program that should, in theory, spit out my results into some text files. It would be convenient, for readability if nothing else, for the data to be written in columns (so, one column for variable X, one for Y, etc.). In the first set of WRITE commands below (i.e., those associated with the first OPEN command), the idea is to have a text identifier for the user to read, followed by a numeric value. In the second write command, I just dump out four columns of data, each specific to a given variable.
open(unit=10,file='outs_sum.txt',status='replace')
do i=1,1
write(10,'(a12,f5.4)') 'Min. sigma: ',sigma_low
end do
do i=1,1
write(10,'(a12,f5.4)') 'Max. sigma: ',sigma_high
end do
write(10,'(a11,f5.4)') 'Sigma inc: ',0.005
write(10,*) '# of sigmas: ',ii
write(10,'(a9,f5.1)') 'Min. DL: ',dl_low
write(10,'(a9,f5.1)') 'Max. DL: ',dl_high
write(10,*) 'DL inc: ',1
write(10,*) '# of DLs: ',i
write(10,*) 'Total rows: ',(i*ii)
close(unit=10)
open(unit=10,file='outs.txt',status='replace')
do i=1,dls
do ii=1,sigmas
write(10,*) gtow_out(i,ii),ctsig_out(i,ii),sigout(i,ii),dlout(i,ii)
end do
end do
close(unit=10)
However, what happens on the output side is this: the latter WRITE does exactly what I'd expect and spits out the data in column form...but the former insists on writing everything to the same row. At least when I open things in Notepad. If I use GVIM, it looks as it should.
Why does the first set of WRITE commands write to the same row, and how can I force it to insert a line break after each command instead? Alternatively, is Notepad just showing me something that isn't really there?

Using a database column value in the output file name in Pentaho Kettle

I wish to use a database column value in the output file name.
Example:
select max(id) from process;
Suppose the result of the above query is 111 -- I wish to use this value in the output file name as shown below.
Output file name: file_111
How can I achieve this in Pentaho Kettle?
Please advise.
Depending on the type of file you want to create, you can simply create a column in your stream that contains the file name and then use the 'Accept file name from field' function that some output steps provide. The Text file output step, for example, has this function; the XML output step unfortunately doesn't.
To create the file name itself you can, for example, use the JavaScript step, or use the Concat fields step together with the Add constants step.
Please follow the steps below:
Step 1: Table input: select max(id) as max_id from process;
Step 2: Modified Java Script Value: put the code below in this step, e.g.:
var dummy = 'C:/Users/Venkatesh/Desktop/file_' + max_id;
In the same step, at the bottom, add a field with Fieldname dummy, Type String, and Replace value 'Fieldname' or 'Rename to' set to N.
Step 3: Text file output: select Add filenames to result and set the File name field to dummy.
Finally, execute and see the result.

Unable to specify DB2 import parameters on Bluemix?

I subscribed to a free SQLDB service from Bluemix and tried to import data from a CSV file into this database instance.
For certain columns I have pure "space" as data, and some columns are to be filled with a default value. I can import this data with the following command on my local DB2:
db2 'import from MY_DATA.csv of del modified by usedefaults keepblanks timestampformat="MM/DD/YYYY HH:MM:SS" skipcount 1 insert into MY_TABLE'
On Bluemix, I can only assign the date / time / timestamp format and skip the 1st row. How can I add the "modified by usedefaults keepblanks" part on Bluemix to complete the import?
Also, when the import fails, I only receive the following message:
BaseException message: [Routine "SYSPROC.ADMIN_CMD" execution has completed, but at least one error, "SQL0911", was encountered during the execution. More information is available.. SQLCODE=20397, SQLSTATE=01H52, DRIVER=3.66.46]
Where can I get the detailed error log that I can see on my local DB, such as:
SQL3125W The character data in row "2" and column "32" was truncated because
the data is longer than the target database column.
SQL3148W A row from the input file was not inserted into the table. SQLCODE
"-181" was returned.
SQL0181N The string representation of a datetime value is out of range.
SQLSTATE=22007
SQL3185W The previous error occurred while processing data from row "2" of
the input file.
SQL3110N The utility has completed processing. "2" rows were read from the
input file.
SQL3221W ...Begin COMMIT WORK. Input Record Count = "2".
SQL3222W ...COMMIT of any database changes was successful.
SQL3149N "2" rows were processed from the input file. "0" rows were
successfully inserted into the table. "1" rows were rejected.
Number of rows read = 2
Number of rows skipped = 1
Number of rows inserted = 0
Number of rows updated = 0
Number of rows rejected = 1
Number of rows committed = 2
In the same quick load page (load complete in step 4), there should be a link to view the logs for this load. Hopefully it'll reveal more details about the error message.
Also note that keepblanks is applicable to DEL (delimited ASCII) files only. It is not applicable to ASC (non-delimited ASCII) files.
http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.sql.rtn.doc/doc/r0023577.html?cp=SSEPGG_10.5.0%2F3-6-1-3-0-0-12&lang=en

ksh loop putting results of file into variable

I'm not sure of the best way to handle this; I'm guessing it's using a while loop. I have a .txt file with a set of numbers (these numbers can change based on another script that runs).
ex:
0
36
41
53
60
Each number is on its own line. For each number I want to get that number and execute a script using it. So in this example I would call a script to stop database 0, after that completes call a script to stop database 36, and so on until it's complete with all numbers in the list.
1) Is a while loop the best way to handle this?
2) I'm having trouble trying to determine what the [[condition]] needs to be to get each number one at a time. Where can I find some additional help on this?
while [[ condition ]] ; do
command1
done
For testing purposes, the file that contains all the numbers is test.txt. The script that will execute is a Python script: "amgr.py stop (number from test.txt)".
Here's a simpler way:
cat test.txt | xargs -n 1 amgr.py stop
This will take each line of your file and pass it as an extra parameter to amgr.py, one invocation per line:
amgr.py stop 0
amgr.py stop 36
and so on..
I ended up using this method to get the results I was looking for.
while read -r line ; do
    amgr.py stop "$line"
done < test.txt