I'm new to Octave/Matlab and I want to plot a 3D graph.
I was able to do so using a predefined formula, like this:
x=1:.1:5;
y=1:.1:5;
[xx,yy] = meshgrid(x,y);
z = sin(xx)+sin(yy);
mesh(x,y,z);
But now the question is how to do the same thing getting the data from a CSV file (for example). I know I can use the csvread function, but the big question is how to format the CSV to contain such data.
An example of producing the same graph as above, but this time taking the data from Excel/CSV, would be appreciated. Thanks!
Done! I was finally able to do it!
Here's how I did it:
1) I created a file in Excel with the X values in cells A2:A42 and the Y values in cells B1:AP1 (so they form a rectangle).
2) Then in the cells in the middle I put the formula I want (e.g. =SIN($A2)+SIN(B$1), anchored so the X reference stays in column A and the Y reference stays in row 1 when the formula is filled across the rectangle).
3) I saved the file as CSV (but space-separated!) and manually edited it to look like this, which is the way QtOctave saves matrix files (in Matlab it might be different). For example (note that in the real file each column is preceded by an extra space):
# Created by Octave 3.2.4, Thu Jan 12 19:32:05 2012 ART <diego#notebook2>
# name: z
# type: matrix
# rows: 3
# columns: 3
1 2 3
4 5 6
7 8 9
(If you're not sure how to do this, do what I did: create a simple matrix and export it, to see what the exported file looks like!)
4) Octave has a function under Data -> Load matrix from file which loads this kind of file. Alternatively, run this command (varname is the name of the resulting variable):
load("-text", "file-where-the-data-is", "varname")
5) Create the graph (ex is the name of the matrix I've just imported):
x=1:.1:5;
y=1:.1:5;
mesh(x,y,ex)
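For comparison, here is a minimal sketch of the same round trip in Python (assuming numpy and matplotlib are installed; the file name surface.csv is made up for the example). The point is that the CSV needs to contain nothing but the z matrix, one row per line:

import numpy as np
import matplotlib.pyplot as plt

# Same grid as the Octave example: 1 to 5 in steps of 0.1.
x = np.arange(1, 5.1, 0.1)
y = np.arange(1, 5.1, 0.1)
xx, yy = np.meshgrid(x, y)
z = np.sin(xx) + np.sin(yy)

# Round-trip the z matrix through a plain comma-separated file.
np.savetxt("surface.csv", z, delimiter=",")
z_loaded = np.loadtxt("surface.csv", delimiter=",")

ax = plt.figure().add_subplot(projection="3d")
ax.plot_surface(xx, yy, z_loaded)
plt.show()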
I have 7 variables that I am plotting on a Tableau radar chart, following this tutorial: https://www.pluralsight.com/guides/tableau-playbook-radar-chart. I have done exactly the same calculations, except that I have 7 axes instead of 6, hence 8 points in my path, and I divide the Path equation by 7 instead of 6. I have also created the appropriate dummy variable in the Excel file, by duplicating the first variable and renaming it, so the line has a point to connect back to. But the last line segment still does not connect.
Path field:
case [Variable]
when "ipsl_risicoperceptie" then 1
when "ipsl_onveiligheids-gevoelens" then 2
when "ipsl_vermijding" then 3
when "Ipersonenoverlast" then 4
when "Iverloedering" then 5
when "HICindex" then 6
when "HVCindex" then 7
else 8
end
Radian field:
IF [Path] = 8 THEN PI()/2
ELSE PI()/2 - ([Path]-1)*2*PI()/7
END
The rest of the fields are exactly the same as in the tutorial. What am I missing? (A quick numeric check of the Radian field is sketched below.)
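As a sanity check (my own reconstruction, not part of the workbook), the same Radian formula evaluated in Python shows Path 8 landing on the same angle as Path 1, so the dummy point should coincide with the first vertex:

import math

# Reconstruction of the Tableau Radian field for paths 1..8 (8 = dummy).
def radian(path):
    if path == 8:
        return math.pi / 2
    return math.pi / 2 - (path - 1) * 2 * math.pi / 7

for p in range(1, 9):
    print(p, round(radian(p), 4))
# Paths 1 and 8 both print 1.5708, so the polygon should close; if it
# does not, look at the dummy rows or the table calculation's
# partitioning rather than the angle math.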
I use set.seed() in an Rmd file to generate random numbers, but when I knit the document I get different random numbers. Here is a screenshot of the Rmd and PDF documents side by side.
In R 3.6.0 the internal algorithm used by sample() has changed. The default for a new session is
> set.seed(2345)
> sample(1:10, 5)
[1] 3 7 10 2 4
which is what you get in the PDF file. One can manually change to the old "Rounding" method, though:
> set.seed(2345, sample.kind="Rounding")
Warning message:
In set.seed(2345, sample.kind = "Rounding") :
non-uniform 'Rounding' sampler used
> sample(1:10, 5)
[1] 2 10 6 1 3
You have at some point made this change in your R session, as can be seen from the output of sessionInfo(). You can either change this back with RNGkind(sample.kind="Rejection") or by starting a new R session.
BTW, in general please include code samples as text, not as images.
I have a database text file.
It is a large text file, about 387,480 KB. It contains table names, table headers, and values. I need to split this file into multiple files, each containing one table's creation and insertion statements, with the table name as the file name.
Can anyone help me?
I don't see how Excel would open a ~380 MB file. You could try to load it into Access and do the split using VBA; however, importing a file that large may bloat the database enough to push Access past its 2 GB limit, and then it's all over. SQL Server would handle this kind of job. Alternatively, you could use Python or R to do the work for you.
### Python:
import pandas as pd

# Read the CSV in fixed-size chunks and write each chunk to its own file.
# chunksize=3 is just for demonstration; use something like 100000 in practice.
for i, chunk in enumerate(pd.read_csv('C:/your_path/main.csv', chunksize=3)):
    chunk.to_csv('chunk{}.csv'.format(i), index=False)
### R
setwd("C:/your_path/")
mydata <- read.csv("annualsinglefile.csv")

# Five chunks of 30 rows each (assumes 150 rows), assigned at random:
# chunks <- split(mydata, sample(rep(1:5, each = 30)))

# The first 100000 rows:
First_chunk <- mydata[1:100000, ]

# Or any other row range, e.g. rows 100 down to 71 (the last 30 rows,
# in reverse order, if your data had 100 rows):
# Second_chunk <- mydata[100:71, ]

# Write the chunks out as CSV files:
write.csv(First_chunk, file = "First_chunk.csv", quote = FALSE, row.names = FALSE)
# write.csv(Second_chunk, file = "Second_chunk.csv", quote = FALSE, row.names = FALSE)
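Note that the pandas/R snippets above split by row count, not by table. If the dump marks the start of each table with a line like CREATE TABLE <name> (an assumption; adjust the pattern to whatever your file actually uses), a Python sketch that streams the file once and opens a new output file per table could look like this:

import re

out = None
with open("C:/your_path/textfile.txt", encoding="utf-8") as src:
    for line in src:
        # Start a new output file whenever a table definition begins.
        m = re.match(r"\s*CREATE\s+TABLE\s+(\w+)", line, re.IGNORECASE)
        if m:
            if out:
                out.close()
            out = open(m.group(1) + ".sql", "w", encoding="utf-8")
        if out:
            out.write(line)
if out:
    out.close()

Because the file is read line by line, memory use stays flat regardless of the ~380 MB size.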
I have developed a model in NetLogo and I want to automate the model runs.
Basically, what I want to do is read the inputs from an Excel, CSV, or .txt file, have NetLogo change the inputs in the model accordingly, run the model for, say, 100 ticks, and store the required output from the 100th tick either in the same file the input was read from or in a different file. Something like this:
Trial  Input1  Input2  Output
1      10      20
2      20      20
3      10      30
...
100    20      100
The variables Input1 and Input2 are in the interface, either as sliders or input boxes.
Use the BehaviorSpace feature in NetLogo. It's available under the Tools menu, and below is the documentation on the topic.
https://ccl.northwestern.edu/netlogo/docs/behaviorspace.html
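If you want to launch the experiment without opening the GUI, the same manual also documents a headless mode; the invocation is roughly the following (exact jar location and script name depend on your NetLogo version and installation):

java -Xmx1024m -cp NetLogo.jar org.nlogo.headless.Main --model my-model.nlogo --experiment experiment1 --table results.csv

The --table option writes the BehaviorSpace results as CSV, which you can then post-process however you like.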
I am trying to obtain a list of all the DB tables, to give me visibility into which tables I may need to JOIN when running SQL scripts.
For example, in TCL when I run "LIST.DICT" it prompts "Name of File:" for input. I then enter "PRODUCT" and it returns a list of all available fields.
However, where can I get a list of all my available tables, i.e. the options I can enter after "Name of File:"?
Here is what I am trying to achieve. In the screenshot below, I would like to run a SQL script that gives me the latest log file activity: Date - Time - Description. I would like the script to return '8/13/14 08:40am BR: 3;BuyPkg'.
Thank you in advance for your help.
From TCL within the database account containing your database files, type: LISTF
Sample output:
FILES in your vocabulary 03:21:38pm 29 Jun 2015 Page 1
Filename........................... Pathname...................... Type Modulo
File - Contains all logical device names
DICT &DEVICE& /u1/uv/D_&DEVICE& 2 1
DATA &DEVICE& /u1/uv/&DEVICE& 2 3
File - Used by MAKE.MAP.FILE
DICT &MAP& /u1/uv/D_&MAP& 2 1
DATA &MAP& /u1/uv/&MAP& 6 7
File - Contains all parts of Distributed Files
DICT &PARTFILES& /u1/uv/D_&PARTFILES& 2 1
DATA &PARTFILES& /u1/uv/&PARTFILES& 18 7
DICT &PH& D_&PH& 3 1
DATA &PH& &PH& 1
DICT &SAVEDLISTS& D_&SAVEDLISTS& 3 1
DATA &SAVEDLISTS& &SAVEDLISTS& 1
File - Used by uniVerse to access the current directory.
DICT &UFD& /u1/uv/D_UFD 2 1
DATA &UFD& . 19 1
DICT &XML& D_&XML& 18 3
DATA &XML& &XML& 19 1
Firstly, UniVerse has no log file activity date and time. However, you can still obtain a table's modified/accessed date from the file system.
To do this:
1) Write a subroutine that accepts the path of the table and returns a date or a time, e.g. SUBROUTINE GET.FILE.MOD.DATE(DAT.MOD, S.FILE.PATH). Inside the subroutine you can use EXECUTE to run a shell command such as istat to obtain this information on Unix. Beware that a dynamic file has Data and Overflow parts under a directory; compare the dates obtained and return only the latest one.
2) Globally catalog the subroutine.
3) Create an I-descriptor in VOC, e.g. I.FILE.MOD.DATE, with SUBR("*GET.FILE.MOD.DATE",F2) in the field definition and "D/MDY2" as the conversion code.
4) Create another I-descriptor for the time, e.g. I.FILE.MOD.TIME.
Finally, you can
LIST VOC I.FILE.MOD.DATE I.FILE.MOD.TIME DESC WITH TYPE LIKE "F..."
or, alternatively, in SQL:
SELECT I.FILE.MOD.DATE, I.FILE.MOD.TIME, VOC.DESC FROM VOC WHERE TYPE LIKE "F%";
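For reference, the date the subroutine digs out via istat is simply the file's modification time as recorded by the operating system; the equivalent lookup, sketched in Python with a made-up path, is:

import os
import datetime

path = "/u1/uv/PRODUCT"  # hypothetical data-file path (the F2 value from VOC)
mtime = os.stat(path).st_mtime
print(datetime.datetime.fromtimestamp(mtime))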