How to print a variable loaded and assigned from a file - printf

I am just learning machine learning using Octave. I want to load the data from a file, assign it to a variable, and then print the data to the console. The data in the data.txt file is a matrix with several rows and two columns.
data = load('data.txt');
x = data(:, 1);
y = data(:, 2);
printf x;
printf y;
After executing the code, only the letters x and y show up on the console, which is not what I expected. I just want to check the data loaded from the file. How do I print it? Am I using the wrong command?

Just type data in the console and press Enter. This will print the contents of the data variable with all its elements. Do NOT add a semicolon at the end of the line; that would suppress the output.
The explicit way of doing this is the display command:
display(data)
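Assuming x and y were loaded as in the question, here is a sketch of a couple of ways to print the actual values:

```octave
% "printf x" uses Octave's command syntax: the literal string 'x'
% becomes the format string, so only the letter x is printed.
disp(x)                     % print all values of x
printf('%f %f\n', [x y]')   % formatted: one "x y" pair per line
```

The transpose in `[x y]'` matters: printf consumes the matrix column by column, so transposing makes each row of the original data come out as one line.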

Related

Load CSV file in PIG

In Pig, when we load a CSV file using a LOAD statement without specifying a schema and with the default PigStorage('\t'), what happens? Will the LOAD work fine, and can we dump the data? Or will it throw an error, since the file uses ',' while PigStorage expects '\t'? Please advise.
When you load a CSV file without defining a schema using PigStorage('\t'), since there are no tabs in any line of the input file, each whole line is treated as a tuple with a single field. You will not be able to access the individual words in the line.
Example:
Input file:
john,smith,nyu,NY
jim,young,osu,OH
robert,cernera,mu,NJ
a = LOAD 'input' USING PigStorage('\t');
dump a;
OUTPUT:
(john,smith,nyu,NY)
(jim,young,osu,OH)
(robert,cernera,mu,NJ)
b = foreach a generate $0, $1, $2;
dump b;
(john,smith,nyu,NY,,)
(jim,young,osu,OH,,)
(robert,cernera,mu,NJ,,)
Ideally, b should have been:
(john,smith,nyu)
(jim,young,osu)
(robert,cernera,mu)
if the delimiter were a comma. But since the delimiter was a tab, and a tab does not exist in the input records, the whole line was treated as one field. Pig does not complain if a field is null; it just outputs nothing when there is a null. Hence you see only the commas when you dump b.
Hope that was useful.
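For comparison, loading the same input with a comma delimiter and a schema (the field names here are just illustrative) gives the ideal output shown above:

```pig
a = LOAD 'input' USING PigStorage(',')
        AS (first:chararray, last:chararray, school:chararray, state:chararray);
b = FOREACH a GENERATE first, last, school;
DUMP b;
-- (john,smith,nyu)
-- (jim,young,osu)
-- (robert,cernera,mu)
```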

How to use the envi_setup_head function?

I don't understand the envi_setup_head. Could anyone help me write it in IDL code format?
I have maps that were produced in IDL and I need to process them in ENVI. I don't know how to save the images in a folder and be able to open them in ENVI. Does anyone know how to do it?
To create an ENVI header for an image file, you can try something like the IDL procedure below. It creates a small image file and uses envi_setup_head to create an ENVI header file. Essentially all you have to do is provide it with the number of samples, lines, data-type, etc., and you are good to go.
pro enviHeaderTest
  compile_opt idl2

  ; Create the data and write to a file.
  ns = 100
  nl = 100
  data = dist(ns, nl)
  fname = 'mydatafile.dat'
  openw, lun, fname, /GET_LUN
  writeu, lun, data
  free_lun, lun   ; close the file and release the LUN

  ; Open a headless ENVI.
  nv = envi(/HEADLESS)

  ; Create some map info for the raster.
  mc = [0, 0, 0, 0]         ; Tie point: [x pixel, y pixel, x map, y map]
  ps = [1D/3600, 1D/3600]   ; Pixel size
  mapInfo = envi_map_info_create(/GEOGRAPHIC, MC=mc, PS=ps)

  ; Create the header.
  envi_setup_head, FNAME=fname, $   ; file name
    NS=ns, $                        ; number of samples
    NL=nl, $                        ; number of lines
    NB=1, $                         ; number of bands
    DATA_TYPE=4, $                  ; IDL data type (float in this case)
    INTERLEAVE=0, $                 ; BSQ
    MAP_INFO=mapInfo, $
    /WRITE

  ; Close ENVI.
  nv.close
end
Then, you can read the image into ENVI, either from the File->Open menu, or via the IDL command line like so:
IDL> nv = envi()
ENVI> view = nv.getview()
ENVI> raster = nv.openraster('mydatafile.dat')
ENVI> layer = view.createlayer(raster)

How to store a result to a variable in HP OpenVMS DCL?

I want to save the output of a program to a variable.
I use the following approach, but it fails:
$ PIPE RUN TEST | DEFINE/JOB VALUE #SYS$PIPE
$ x = f$logical("VALUE")
I got an error:
%DCL-W-MAXPARM, too many parameters - reenter command with fewer parameters
 \WORLD\
Reference:
How to assign the output of a program to a variable in a DCL com script on VMS?
The usual way to do this is to write the output to a file, read it back from the file, and put it into a DCL symbol (or logical). Although not obvious, you can do this with the PIPE command as well:
$ pipe r 2words
hello world
$ pipe r 2words |(read sys$pipe line ; line=""""+line+"""" ; def/job value &line )
$ sh log value
"VALUE" = "hello world" (LNM$JOB_85AB4440)
$
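The file-based approach mentioned above would look roughly like this (a sketch; TEST and the temporary file name are placeholders):

```dcl
$ DEFINE/USER SYS$OUTPUT OUT.TMP   ! redirect output of the next command only
$ RUN TEST
$ OPEN/READ IN OUT.TMP
$ READ IN LINE                     ! read the first (only) line of output
$ CLOSE IN
$ DELETE OUT.TMP;*
$ X = LINE
$ SHOW SYMBOL X
```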
If you are able to change the program, add some code to it to write the required values into symbols or logicals (see the LIB$ routines).
If you can modify the program, using LIB$SET_SYMBOL in the program defines a DCL symbol (what you are calling a variable) for DCL. That's the cleanest way to do this. If it really needs to be a logical, then there are system calls that define logicals.
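A minimal sketch of the LIB$SET_SYMBOL approach in C (assumes an OpenVMS build environment; the symbol name and value are illustrative, and error checking is omitted):

```c
#include <descrip.h>        /* $DESCRIPTOR for VMS string descriptors */
#include <lib$routines.h>   /* lib$set_symbol prototype */

int main(void)
{
    $DESCRIPTOR(sym_name, "VALUE");
    $DESCRIPTOR(sym_value, "hello world");

    /* Define the DCL symbol VALUE for the invoking process;
       passing 0 for the table indicator uses the local symbol table. */
    lib$set_symbol(&sym_name, &sym_value, 0);
    return 1;
}
```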

How to access the data that was assigned to the variable before in Tcl script?

I am pretty new to Tcl and have been writing snippets to improve the automation of the process flow in our work. I want to compare the value of a variable to its previous value so that the code knows it's a new flow. The problem is: how do I store the old value of a variable? More precisely, how can I store the value of a variable that was assigned during a previous flow? (Is it even possible?)
The following is what our workflow looks like:
Start compilation
A) Start phase1 and run flow.tcl script twice
B) Start phase2 and run flow.tcl script twice
...
End compilation
Here in this example, the variable is assigned a new value every time it is run in a different phase. But since I am unable to store the value of the variable to compare against, I am stuck trying different options, in vain. This might be totally impossible, but as far as I know Tcl can handle almost everything.
Edit: simple solution found. Have the data written to text files and read back in again. Thanks.
You can save variables in an array and load the variables back into Tcl. The command "array get" serializes the data and "array set" puts it back into an array.
flow.tcl
#!/usr/bin/tclsh

proc load_data {data_file array_name} {
    upvar $array_name data
    if {[file exists $data_file]} {
        set fp [open $data_file r]
        array set data [read $fp]
        close $fp
    }
}

proc save_data {data_file array_name} {
    upvar $array_name data
    set fp [open $data_file w]
    puts $fp [array get data]
    close $fp
}

set now [clock seconds]

# Set defaults. If you need new keys in your data file you can add them here.
set data(count) 0
set data(last_timestamp) $now

# Load existing data over default values. If a key doesn't exist the default is used.
load_data "flow.dat" data

# Use the saved data to find elapsed time.
set elapsed [expr {$now - $data(last_timestamp)}]
set count $data(count)

# Save new data.
set data(last_timestamp) $now
set data(count) [incr count]
save_data "flow.dat" data

puts "It's been $elapsed seconds since last run. You have run this $count times."
output
% ./flow.tcl
It's been 0 seconds since last run. You have run this 1 times.
flow.dat
% cat flow.dat
count 1 last_timestamp 1427142892

How to run the same syntax on multiple SPSS files

I have 24 SPSS files in .sav format in a single folder. All these files have the same structure. I want to run the same syntax on all of them. Is it possible to write code in SPSS for this?
You can use the SPSSINC PROCESS FILES user-submitted command to do this, or write your own macro. First, let's create some very simple fake data to work with.
*FILE HANDLE save /NAME = "Your Handle Here!".
*Creating some fake data.
DATA LIST FREE / X Y.
BEGIN DATA
1 2
3 4
END DATA.
DATASET NAME Test.
SAVE OUTFILE = "save\X1.sav".
SAVE OUTFILE = "save\X2.sav".
SAVE OUTFILE = "save\X3.sav".
EXECUTE.
*Creating a syntax file to call.
DO IF $casenum = 1.
PRINT OUTFILE = "save\TestProcess_SHOWN.sps" /"FREQ X Y.".
END IF.
EXECUTE.
Now we can use the SPSSINC PROCESS FILES command to specify the sav files in the folder and apply the TestProcess_SHOWN.sps syntax to each of those files.
*Now example calling the syntax.
SPSSINC PROCESS FILES INPUTDATA="save\X*.sav"
SYNTAX="save\TestProcess_SHOWN.sps"
OUTPUTDATADIR="save" CONTINUEONERROR=YES
VIEWERFILE= "save\Results.spv" CLOSEDATA=NO
MACRONAME="!JOB"
/MACRODEFS ITEMS.
Another (less advanced) way is to use the INSERT command. To do so, repeatedly GET each .sav file, run the syntax with INSERT, and save the file. Probably something like this:
get 'file1.sav'.
insert file='syntax.sps'.
save outf='file1_v2.sav'.
dataset close all.
get 'file2.sav'.
insert file='syntax.sps'.
save outf='file2_v2.sav'.
etc etc.
Good luck!
If the syntax you need to run is completely independent of the files, then you can either use INSERT FILE = 'Syntax.sps' or put the code in a macro, e.g.
Define !Syntax ()
* Put Syntax here
!EndDefine.
You can then run either of these 'manually':
get file = 'file1.sav'.
insert file='syntax.sps'.
save outfile ='file1_v2.sav'.
Or
get file = 'file1.sav'.
!Syntax.
save outfile ='file1_v2.sav'.
Or, if the files follow a reasonably strict naming structure, you can embed either of the above in a simple bit of Python:
Begin Program.
import spss
for i in range(1, 24 + 1):
    syntax = "get file = 'file" + str(i) + ".sav'.\n"
    syntax += "insert file='syntax.sps'.\n"
    syntax += "save outfile = 'file" + str(i) + "_v2.sav'.\n"
    print syntax
    spss.Submit(syntax)
End Program.