I have a CSV file like the one below. I need to check whether the rows contain more fields than there are header columns. Example:
name,age,profession
"a","24","teacher","cake"
"b",31,"Doctor",""
"c",27,"Engineer","tea"
If I try to read it using
print(pd.read_csv('test.csv'))
it will print as below.
name age profession
a 24 teacher cake
b 31 Doctor NaN
c 27 Engineer tea
But this is wrong. It happens because the header has fewer columns than the data rows, so I need to identify this scenario as an invalid CSV format. What is the best way to test for this, other than reading the file as strings and checking the length of each row?
An important point is that the columns can differ from file to file; there are no mandatory columns that must be present.
You can try passing header=None to .read_csv. Then pandas will raise a ParserError if the number of fields in a row doesn't match the rest of the file. For example:
import pandas as pd

try:
    df = pd.read_csv("your_file.csv", header=None)
except pd.errors.ParserError:
    print("File Invalid")
I have a text file that is fixed length, with 8 fields making up each line. It is not a delimited file, but each field is a fixed length. For example:
field aaa starts in column 1 for 10 bytes, field bbb starts in column 11 for 6 bytes, field ccc starts in column 17 for 20 bytes, etc...
How can I read a line and identify each column? I need to split the input into multiple output files, but the new output will consist of a new order of the individual fields.
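One way to do this in Python is to slice each line at the known column offsets and write the fields back out in whatever order you need. In the minimal sketch below, the widths for aaa, bbb and ccc come from the description above; the remaining fields, the file names and the output order are only placeholders. pandas.read_fwf is an alternative if you would rather work with a dataframe.

# Field layout: the names and widths for aaa/bbb/ccc are from the question;
# the rest of the 8 fields would be declared the same way.
FIELDS = {
    "aaa": slice(0, 10),   # columns 1-10
    "bbb": slice(10, 16),  # columns 11-16
    "ccc": slice(16, 36),  # columns 17-36
    # ... remaining fields ...
}

OUTPUT_ORDER = ["ccc", "aaa", "bbb"]  # hypothetical new order of fields

with open("input.txt") as src, open("output.txt", "w") as dst:
    for line in src:
        # cut the line into named fields, then write them in the new order
        record = {name: line[span] for name, span in FIELDS.items()}
        dst.write("".join(record[name] for name in OUTPUT_ORDER) + "\n")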
I am trying to read a CSV file and my code is as follows:
param=csvRead("C:\Users\USER\Dropbox\VOA-BK code\assets\Iris.csv",",","%i",'double',[],[],[1 2 3 4]); //reads number of clusters and features
data=csvRead("C:\Users\USER\Dropbox\VOA-BK code\assets\Iris.csv",",","%f",'double',[],[],[3 1 19 4]); //reads the values
numft=param(1,1);//save number of features
numcl=param(2,1);//save number of clusters
data_pts=0;
data_pts = max(size(data, "r"));//count the number of rows
disp(data(numft-3:data_pts,:));//print all data points (I added -3, otherwise it displays only 15 rows)
disp(numft);//print number of features
disp(data_pts);//print number of data rows
disp(param);
endfunction
Below are the values that I am trying to read:
features,4,,
clusters,3,,
5.1,3.5,1.4,0.2
4.9,3,1.4,0.2
4.7,3.2,1.3,0.2
4.6,3.1,1.5,0.2
5,3.6,1.4,0.2
7,3.2,4.7,1.4
6.4,3.2,4.5,1.5
6.9,3.1,4.9,1.5
5.5,2.3,4,1.3
6.5,2.8,4.6,1.5
5.7,2.8,4.5,1.3
6.3,3.3,6,2.5
5.8,2.7,5.1,1.9
7.1,3,5.9,2.1
6.3,2.9,5.6,1.8
6.5,3,5.8,2.2
7.6,3,6.6,2.1
I do not know why the code only displays 15 rows instead of 17. The only time it displays the correct matrix is when I put -3 after numft, but then the number of columns would be 1. I am so confused. Is there a better way to read the values?
In the csvRead call in the first line of your script, the boundaries of the region to read are incorrect. The range argument is [first_row first_col last_row last_col], and the two parameter values (4 features and 3 clusters) sit in rows 1 and 2 of column 2, so it should be corrected like this:
param=csvRead("C:\Users\USER\Dropbox\VOA-BK code\assets\Iris.csv",",","%i",'double',[],[],[1 2 2 2]);
Please help revise the title and the post if needed, thanks.
In short, I would like to first group lines by the unique value in the first field and count the occurrences of a specific value in another field within each group. If the count doesn't meet a self-defined threshold, the lines in that group should be ignored.
Specifically, with input
111,1,P,1
111,1,P,1
111,1,P,0
111,1,M,1
222,1,M,1
222,1,M,0
333,1,P,0
333,1,P,1
444,1,M,1
444,1,M,1
444,0,M,0
555,1,P,1
666,1,P,0
the desired output should be
111,1,P,1
111,1,P,1
111,1,P,0
111,1,M,1
333,1,P,0
333,1,P,1
555,1,P,1
666,1,P,0
meaning that "because the values 222 and 444 in the first field don't have at least one P (the threshold can be any desired value) in the third field, the lines corresponding to 222 and 444 are ignored."
Furthermore, this should be done without editing the original file and has to be combined with the solved issue Split CSV to Multiple Files Containing a Set Number of Unique Field Values. By doing this, a few lines will not end up in the resulting split files.
I believe this one-liner does what you want:
$ awk -F, '{a[$1,++c[$1]]=$0}$3=="P"{p[$1]}END{for(i in c)if(i in p)for(j=1;j<=c[i];++j)print a[i,j]}' file
111,1,P,1
111,1,P,1
111,1,P,0
111,1,M,1
333,1,P,0
333,1,P,1
555,1,P,1
666,1,P,0
Array a keeps track of all the lines in the file, grouping them by the first field together with a per-group count c which we use later. If the third field contains a P, a key is set in the p array.
After processing the entire file, loop through all the values of the first field. If a key has been set in p for the value, then print the lines from a.
You mention a threshold number of entries in your question. If by that, you mean that there must be N occurrences of "P" in order for the lines to be printed, you could change {p[$1]} to {++p[$1]}, then change if(i in p) to if(p[i]>=N) in the END block.
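In case it helps to see the same grouping-and-threshold logic spelled out step by step, here is a rough Python equivalent; the file name and the threshold value are placeholders:

from collections import defaultdict

THRESHOLD = 1  # minimum number of "P" rows a group needs to be kept

groups = defaultdict(list)   # lines per first-field value, in input order
p_counts = defaultdict(int)  # how many rows of each group have "P" in field 3

with open("file") as fh:
    for raw in fh:
        line = raw.rstrip("\n")
        fields = line.split(",")
        groups[fields[0]].append(line)
        if fields[2] == "P":
            p_counts[fields[0]] += 1

# Print only the groups whose "P" count meets the threshold.
for key, lines in groups.items():
    if p_counts[key] >= THRESHOLD:
        print("\n".join(lines))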
I have a RINEX file, shown here (an image showing the first part of the RINEX file):
http://imageshack.us/photo/my-images/593/65961409.jpg
The data (AOPR Rinex file) is downloaded from the site after entering a year and a day.
http://www.naic.edu/aisr/GPSTEC/gpstec.html
I want to open this file as a matrix in MATLAB for further processing. After the end of the header at line 42, the time information is on line 43, and then the data starts. But the time information appears again after some rows (say at line 64) and should be discarded. The header should also be discarded. Also, the last column wraps onto the line below the first column as a second row, and should be moved back to the last column. In total there are 55700 rows. Kindly help me with this.
I suspect the last column appearing on the line below it is just an artifact of how large the window of your text reader is...
For the rest, I think a trial-and-error loop is in order here:
fid = fopen('test.txt','r');
C = {};
while ~feof(fid)
    % read lines with dictated format.
    D = textscan(fid, '%d %d %d %d');
    % this will fail on headerlines, empty lines, etc.
    if isempty(D{1})
        % in those cases, advance the file pointer by one line
        fgetl(fid);
    else
        % if that's not the case, save the lines thus read
        C = [C;D]; %#ok
    end
end
fclose(fid);
% Post-process: concatenate all sub-arrays into one
C = arrayfun(@(ii) cat(1, C{:,ii}), 1:size(C,2), 'UniformOutput', false);
This works, at least with my test.txt:
header
random
garbage
1 2 3 4
4 5 6 7
4 6 7 8
more random garbage
2 5 6 7
5 6 7 8
8 6 3 7
Dear Rody, I don't have any MATLAB background and am just a beginner. It is actually a RINEX file, with 2780 epochs and 6 observables with 30 satellite values. Decoding it in MATLAB is tough; that is the problem. You can read a sample code at
http://web.ics.purdue.edu/~tdauterm/EAS591/Lab7/read_rinexo.m
But the problem is that there are six observables and only 5 in the m-file, which are also not in the correct order. I need C1 P2 L1 L2 S1 S2, but the code at the link gives L1 L2 C1 P1 P2. :( Can you just correct that? Then it would be a great help.