'MapDataset' object is not subscriptable and TFRecords parsing - tensorflow2.0

I have a list of TFRecord files storing a variable X (a variable number of rows and 268 columns) and Y (the same number of elements as X). When I try to parse them in order to pass them directly to the keras fit() method, I get a 'MapDataset' object is not subscriptable error.
I am really struggling to understand why this error happens, given that I can loop over the MapDataset (meaning the data is there). I have searched a lot but have not managed to find a solution!
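A minimal sketch of the kind of pipeline described above (the feature spec, file name, and parse function here are hypothetical placeholders, not the actual code):
import tensorflow as tf

# Hypothetical feature spec: X holds a variable number of rows with 268
# columns each, Y holds one value per row.
feature_spec = {
    "X": tf.io.VarLenFeature(tf.float32),
    "Y": tf.io.VarLenFeature(tf.float32),
}

def parse_example(serialized):
    parsed = tf.io.parse_single_example(serialized, feature_spec)
    x = tf.reshape(tf.sparse.to_dense(parsed["X"]), (-1, 268))
    y = tf.sparse.to_dense(parsed["Y"])
    return x, y

dataset = tf.data.TFRecordDataset(["train.tfrecord"]).map(parse_example)

# Iterating over dataset works (the data is there), but indexing such as
# dataset[0] raises "'MapDataset' object is not subscriptable"; a tf.data
# dataset is meant to be batched and handed to fit() as a whole, e.g.
# model.fit(dataset.padded_batch(32), epochs=10)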

Related

Writing csv file with CSV.jl throws StackOverflowError message

CSV.write() returns ERROR: LoadError: StackOverflowError, further down claiming --- the last 2 lines are repeated 39990 more times ---. This is nonsense, since the data frame being written is acknowledged to be of dimension 129x6 with no empty fields. I fear the function is counting rows beyond the dimensions of the data frame and shouldn't be doing so. How do I make this function work?
using CSV, DataFrames
file = DataFrame(
    a = 1:50,
    b = 1:50,
    c = Vector{Any}(missing, 50),
    d = Vector{Any}(missing, 50))
CSV.write("a.csv", file)
file = CSV.read("a.csv", DataFrame)
c, d = [], []
for i in 1:nrow(file)
    push!(c, file.a[i] * file.b[i])
    push!(d, file.a[i] * file.b[i] / file.a[i])
end
file[!, :c] = c
file[!, :d] = d
# I found no straight forward way to fill empty columns based on the values of filled columns in a for loop that wouldn't generate error messages
CSV.write("a.csv", DataFrame) # Error in question here
The StackOverflowError is indeed due to the line
CSV.write("a.csv",DataFrame)
which tries to write the data type itself to a file instead of the actual dataframe variable. (The reason that leads to this particular error is an implementation detail that's described here.)
# I found no straight forward way to fill empty columns based on the values of filled columns in a for loop that wouldn't generate error messages
You don't need a for loop for this, just
file.c = file.a .* file.b
file.d = file.a .* file.b ./ file.a
will do the job. (Though I don't understand the point of the file.d calculation - maybe it is just a dummy calculation chosen for illustration.)

TfidfTransformer.fit_transform( dataframe ) fails

I am trying to build a TF/IDF transformer (maps sets of words into count vectors) based on a Pandas series, in the following code:
tf_idf_transformer = TfidfTransformer()
return tf_idf_transformer.fit_transform( excerpts )
This fails with the following message:
ValueError: could not convert string to float: "I'm trying to work out, in general terms..."
Now, "excerpts" is a Pandas Series consisting of a bunch of text strings excerpted from StackOverflow posts, but when I look at the dtype of excerpts,
it says object. So, I reason that the problem might be that something is inferring the type of that Series to be float. So, I tried several ways to make the Series have dtype str:
I tried forcing the column types for the dataframe that includes "excerpts" to be str, but when I look at the dtype of the resulting Series, it is still object.
I tried casting the entire dataframe that includes "excerpts" to dtype str using Pandas.DataFrame.astype(), but the "excerpts" column stubbornly keeps dtype object.
These may be red herrings; the real problem is with fit_transform. Can anyone suggest some way whereby I can see which entries in "excerpts" are causing problems or, alternatively, simply ignore them (leaving out their contribution to the TF/IDF)?
I see the problem. I thought that tf_idf_transformer.fit_transform takes as its source argument an array-like of text strings. Instead, I now understand that it takes a matrix of token counts (for example, the sparse count matrix produced by CountVectorizer). The correct usage is more like:
count_vect = CountVectorizer()
excerpts_token_counts = count_vect.fit_transform(excerpts)
tf_idf_transformer = TfidfTransformer()
return tf_idf_transformer.fit_transform(excerpts_token_counts)
Sorry for my confusion (I should have looked at "Sample pipeline for text feature extraction and evaluation" in the TfidfTransformer documentation for sklearn).
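As an aside, sklearn's TfidfVectorizer combines both steps and accepts raw strings directly; a minimal sketch, assuming excerpts is the same Series of raw text:
from sklearn.feature_extraction.text import TfidfVectorizer

# TfidfVectorizer = CountVectorizer + TfidfTransformer in one step,
# so it can be fed the raw text strings directly.
vectorizer = TfidfVectorizer()
tfidf_matrix = vectorizer.fit_transform(excerpts)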

Octave: quadprog index issue?

I am trying to run several files of code for an assignment, solving an optimization problem with the "quadprog" function from the "optim" package.
quadprog is supposed to solve the optimization problem in a certain format and takes inputs H, f, A, b, Aeq, Beq, lb, ub.
The issue I am having involves my f, which is a column vector of constants. To clarify, f looks like c*[1,1,1,1,1,1] where c is a constant. quadprog seems to run my code just fine for certain values of c, but gives me the error:
error: index (_,49): but object has size 2x2
error: called from
quadprog at line 351 column 32
for other values of c. So, for example, 1/3 works, but 1/2 doesn't. Does anyone have any experience with this?
Sorry for not providing a working example. My code runs over several files and I seem to only be having problems with a specific value set that is very big. Thanks!
You should try the native Octave function qp.
You mention f is c*[1,1,1,1,1,1], but if c is a scalar, that is not a column vector. It seems very odd that a scalar value might produce a dimension error...

Reading Fortran binary file in Python

I'm having trouble reading an unformatted F77 binary file in Python.
I've tried the scipy.io.FortranFile method and the numpy.fromfile method, both to no avail. I have also read the file in IDL, which works, so I have a benchmark for what the data should look like. I'm hoping that someone can point out a silly mistake on my part -- there's nothing better than having an idiot moment and then washing your hands of it...
The data, bcube1, have dimensions 101x101x101x3 and are of r*8 type. There are 3090903 entries in total. They are written using the following statement (not my code, copied from source).
open (unit=21, file=bendnm, status='new'
. ,form='unformatted')
write (21) bcube1
close (unit=21)
I can successfully read it in IDL using the following (also not my code, copied from colleague):
bcube=dblarr(101,101,101,3)
openr,lun,'bcube.0000000',/get_lun,/f77_unformatted,/swap_if_little_endian
readu,lun,bcube
free_lun,lun
The returned data (bcube) is double precision, with dimensions 101x101x101x3, so the header information for the file is aware of its dimensions (not flattened).
Now I try to get the same effect using Python, but no luck. I've tried the following methods.
In [30]: f = scipy.io.FortranFile('bcube.0000000', header_dtype='uint32')
In [31]: b = f.read_record(dtype='float64')
which returns the error Size obtained (3092150529) is not a multiple of the dtypes given (8). Changing the dtype changes the size obtained but it remains indivisible by 8.
Alternately, using fromfile results in no errors but returns one more value than is in the array (a footer perhaps?), and the individual array values are wildly wrong (they should all be of order unity).
In [38]: f = np.fromfile('bcube.0000000')
In [39]: f.shape
Out[39]: (3090904,)
In [42]: f
Out[42]: array([ -3.09179121e-030, 4.97284231e-020, -1.06514594e+299, ...,
8.97359707e-029, 6.79921640e-316, -1.79102266e-037])
I've tried using byteswap to see if this makes the floating point values more reasonable but it does not.
It seems to me that the np.fromfile method is very close to working but there must be something wrong with the way it's reading the header information. Can anyone suggest how I can figure out what should be in the header file that allows IDL to know about the array dimensions and datatype? Is there a way to pass header information to fromfile so that it knows how to treat the leading entry?
I played a bit around with it, and I think I have an idea.
How Fortran stores unformatted data is not standardized, so you have to play around with it a bit, but you need three pieces of information:
1. The format of the data. You suggest that is 64-bit reals, or 'f8' in Python.
2. The type of the header. That is an unsigned integer, but you need its length in bytes; if unsure, try 4. The header usually stores the length of the record in bytes and is repeated at the end. Then again, it is not standardized, so no guarantees. (A quick way to check this empirically is sketched just after this list.)
3. The endianness, little or big. Technically this applies to both header and values, but I assume they're the same. Python defaults to little endian, so if that were the correct setting for your data, I think you would already have solved it.
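One way to pin down points 2 and 3 empirically is to look at the leading record marker directly; a rough sketch (for 101x101x101x3 doubles the expected record length is 24,727,224 bytes):
import numpy as np

# Read the first few bytes and try the plausible interpretations of the
# record marker; whichever one prints 24727224 reveals both the header
# length (4 or 8 bytes) and the byte order.
with open('bcube.0000000', 'rb') as fh:
    head = fh.read(8)
print(np.frombuffer(head[:4], dtype='<u4'), np.frombuffer(head[:4], dtype='>u4'))
print(np.frombuffer(head, dtype='<u8'), np.frombuffer(head, dtype='>u8'))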
When you open the file with scipy.io.FortranFile, you need to give the data type of the header. So if the data is stored big-endian and you have a 4-byte unsigned integer header, you need this:
from scipy.io import FortranFile
ff = FortranFile('data.dat', 'r', '>u4')
When you read the data, you need the data type of the values. Again assuming big-endian, you want type >f8:
vals = ff.read_reals('>f8')
Look here for a description of the syntax of the data type.
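Putting these pieces together for the file in the question might look like the sketch below (big-endian and a 4-byte header are assumptions you may need to adjust):
from scipy.io import FortranFile
import numpy as np

# Assumed: big-endian, 4-byte record markers, one record of 101*101*101*3 doubles.
ff = FortranFile('bcube.0000000', 'r', '>u4')
flat = ff.read_reals('>f8')
ff.close()

# Fortran arrays are column-major, so reshape with order='F'.
bcube = flat.reshape((101, 101, 101, 3), order='F')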
If you have control over the program that writes the data, I strongly suggest you write them into data streams, which can be more easily read by Python.
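For completeness, if the writer used stream access (access='stream', no record markers), the file is just raw doubles and the read becomes trivial; a sketch with a hypothetical file name and an assumed byte order:
import numpy as np

# No record markers to skip: the file is just 101*101*101*3 raw doubles.
flat = np.fromfile('bcube_stream.dat', dtype='<f8')
bcube = flat.reshape((101, 101, 101, 3), order='F')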
Fortran has record demarcations which are poorly documented, even in binary files.
So every write to an unformatted file:
integer*4 Test1
real*4 Matrix(3,3)
open(78, form='unformatted')
write(78) Test1
write(78) Matrix
close(78)
should ultimately be padded with np.int32 values on either side. (I've seen references that these give the record length, but I haven't verified that personally.)
The above could be read in Python via numpy as:
import numpy as np

input_file = open(file_location, 'rb')
datum = np.dtype([('P1', np.int32), ('Test1', np.int32), ('P2', np.int32),
                  ('P3', np.int32), ('MatrixT', (np.float32, (3, 3))),
                  ('P4', np.int32)])
data = np.fromfile(input_file, datum)
This should fully populate the data array with the individual data sets of the format above. Do note that numpy expects data to be packed in C format (row major) while Fortran data is column major. For square matrix shapes like the one above, this means getting the data out of the matrix requires a transpose before use. For non-square matrices, you will need to reshape and transpose:
Matrix = np.transpose(data[0]['MatrixT'])
Transposing your 4-D data structure will need to be done carefully. You might look into SciPy for automated ways to do so; the SciPy package seems to have Fortran-related utilities which I have not fully explored.

VTK: The data array in the element may be too short

I'm trying to visualize some data in vtr format. For this purpose I've created a couple of npy files with this library, then converted these files with PyEVTK into the vtr format (like in the lowlevel.py example). But when I try to visualize this data in ParaView, an error appears:
ERROR: In /var/tmp/portage/sci-visualization/paraview-4.0.1-r1/work/ParaView-v4.0.1-source/VTK/IO/XML/vtkXMLDataReader.cxx, line 510
vtkXMLRectilinearGridReader (0x36bb080): Cannot read point data array "Pressure" from PointData in piece 0. The data array in the element may be too short.
Can anybody explain what exactly this error message means, and what's wrong with my visualization data?
Solved:
I made a stupid mistake - the data size in the header was different from the actual data size, and this was the cause of the error.
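A simple sanity check that would have caught this is to compare the array size against the declared grid extent before writing; a sketch with hypothetical names and dimensions:
import numpy as np

# The number of point-data values must match the number of grid points
# implied by the extent declared in the .vtr header; example values only.
pressure = np.load('pressure.npy')
nx, ny, nz = 101, 101, 101   # assumed numbers of points along each axis
assert pressure.size == nx * ny * nz, "declared extent and data size disagree"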
This error may come from the XML header declaration, which may not contain all the data needed. You may be missing the header_type attribute, which specifies the size of the integer written between each set of data.
<VTKFile type="UnstructuredGrid" version="0.1" byte_order="BigEndian" header_type="UInt64">