Get order (column/row-major) for a given numpy ndarray

I have a numpy array that is created by a third-party library, namely a proj_data object. Now I am trying to debug my code and I would like to get the "order" parameter of the proj_data array.
However, there is no attribute that would tell me which order specifier was used when the array was created.
I need to know this because the array is in turn used to transfer data to the GPU, and the wrong order probably produces an error. Ideally I would get the answer not by code analysis but by some kind of reflection on the proj_data object.
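A minimal sketch of one way to inspect this at runtime, using numpy's flags attribute (the name proj_data is taken from the question; the np.zeros call is only a stand-in for the third-party array):

import numpy as np

# Stand-in for the third-party array; in practice proj_data already exists.
proj_data = np.zeros((4, 3), order='F')

# ndarray.flags reports the actual memory layout rather than the original
# order= argument, which is usually what matters for a GPU transfer.
print(proj_data.flags['C_CONTIGUOUS'])   # False: not row-major
print(proj_data.flags['F_CONTIGUOUS'])   # True: column-major layout
print(np.isfortran(proj_data))           # True for Fortran-ordered arrays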

Pycaret data_func usage

What is the proper setup to use data_func?
I have attempted to use a list comprehension, which is not allowed because lists are not callable.
I have attempted to use a pandas DataFrame generator, but generator objects are not picklable.
What is the proper setup for the data_func parameter?
https://pycaret.readthedocs.io/en/latest/api/classification.html#pycaret.classification.setup
I expected the data_func parameter to accept a dataframe generator object or a list of dataframes. Is either acceptable, or what is the proper use?
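The setup documentation linked above describes data_func as a function that generates the dataframe-like input, so a zero-argument callable appears to be the expected shape. A minimal sketch, assuming a hypothetical train.csv file and a target column named label:

import pandas as pd
from pycaret.classification import setup

# data_func should be a zero-argument callable that returns the dataframe;
# pycaret calls it on demand instead of pickling the data itself.
def load_data() -> pd.DataFrame:
    return pd.read_csv("train.csv")   # hypothetical file

exp = setup(data_func=load_data, target="label")   # hypothetical target column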

Why does 'indexOf' not find the specified arrayOfLong?

I have an ArrayList of LongArrays, each of size 2. I am trying to use the built-in 'indexOf' method to find the index of a specific array of longs, and I can't figure out why the code says the array of longs I'm searching for isn't found. See the screenshot below of a debugging session where I try to evaluate finding 'longArrayOf(0L,5L)' in the ArrayList. In my mind, longArrayOf(0L,5L) is clearly the first element of the cardHierarchy array. Can the 'indexOf' function not be used for finding arrays? Can anyone suggest an alternative method?
indexOf uses Object.equals() when you pass arrays, which compares by reference, and the reference of the LongArray passed to indexOf is different from the one already present in cardHierarchy.
Change it to
cardHierarchy.indexOfFirst { it.contentEquals(longArrayOf(0L, 5L)) }
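A small sketch of the difference (the contents of cardHierarchy are assumed to match the post):

fun main() {
    val cardHierarchy = arrayListOf(longArrayOf(0L, 5L), longArrayOf(1L, 7L))

    // indexOf relies on equals(), which compares arrays by reference,
    // so a freshly created longArrayOf(0L, 5L) is never "found".
    println(cardHierarchy.indexOf(longArrayOf(0L, 5L)))                            // -1

    // indexOfFirst with contentEquals compares the arrays element by element.
    println(cardHierarchy.indexOfFirst { it.contentEquals(longArrayOf(0L, 5L)) })  // 0
}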

Accord.net Codification can't handle non-strings

I am trying to use the Accord.net library to build a test harness for several of the machine learning algorithms that the library supports.
One of the issues I have run into is that when I try to codify my string data, the Codification class does not seem capable of dealing with any DataTable columns that are not strings, despite the documentation saying otherwise.
Codification codebook = new Codification(fulldata, AllAttributeNames);
I call that line where fulldata is a DataTable, and I have tried including columns of both Int32 type and Double type, but the Codification class throws an error saying it is unable to convert them to type String.
"System.InvalidCastException: 'Unable to cast object of type 'System.Double' to type 'System.String'.'"
EDIT: It turns out this error is because the Codification system can only handle alternate data types if it is encoding the entire table. I suppose I can see the logic here, although I would prefer a better error, or that the method was a little smarter.
I now have another issue that has cropped up related to this. After changing my code to this:
Codification codebook = new Codification(fulldata);
I then call learning.Learn(inputs, outputs) on my algorithm and want to use the newly trained model. So the next step would be to take a batch of test data, make sure it matches the codebook's encoding, and send it through the algorithm. Unfortunately, when I try to use
int[][] testinput = codebook.Transform(testData, inputColumnNameArray);
it blows up, claiming it could not find a mapping to transform. It does this in reference to an integer column that the codebook (correctly) did not map to new values. So now it seems the Transform method is not capable of handling non-string columns, and I have not found an overload of it that can, even though the documentation indicates it should be able to handle this.
Does anyone know how to get around this issue without manually building the entire int[][] testinput array one value at a time?
Turns out I was able to answer my own question eventually.
The Codification class has two ways of using it, as near as I can tell. The constructor that takes a list of column names, as well as the Transform methods, both lack intelligence in dealing with non-string data types; perhaps these methods are going away in the future.
The constructor that just takes a datatable by itself, as well as the Apply method, are both capable of handling data types other than strings. Once I switched to using these two methods my errors went away.
Codification codebook = new Codification(fulldata);
int[][] testinput = codebook.Apply(testData, inputColumnNameArray);
The confusion for me lay in the example code seemingly switching between these two methods at random, yet using the Apply method only when processing the training data and the Transform method when encoding test data.
I am not sure why they chose to do this in the documentation example code, but it definitely took me a long time to figure out what was going on enough to stop having this particular issue.

Octave: converting dataframe to cell array

Given an Octave dataframe object created as
c = cell(m,n);
%populate c...
pkg load dataframe
df = dataframe(c);
(see https://octave.sourceforge.io/dataframe/overview.html),
Is it possible to access the underlying cell array?
Is there a conversion mechanism back to a cell array?
Is it possible to save df to CSV?
Yes. A dataframe object, like any object, can be converted back into a struct.
Once you have the resulting struct, look for the fields x_name to get the column names, and x_data to get the data in the form of a cell array, i.e.
struct(df).x_data
As for conversion to csv, the dataframe package does not seem to provide any relevant methods as far as I can tell (in particular the package does not provide an overloaded @dataframe/csvwrite method). Therefore, I'd just extract the information as above, and go about writing it into a csv file from there.
If you're not dealing with strictly numerical data, you might want to have a look at the cell2csv / csv2cell methods from the io package (since the built-in csvwrite function is strictly for numerical data).
And if that doesn't do exactly what you want, I'd probably just go for creating a csv file manually via custom fprintf statements.
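A short sketch of the round trip described above (the cell contents and output filename are hypothetical; cell2csv comes from the io package):

pkg load dataframe
pkg load io                      % provides cell2csv / csv2cell

c = {1, "a"; 2, "b"};            % hypothetical cell contents
df = dataframe(c);

s = struct(df);                  % any object can be cast back to a struct
disp(s.x_name);                  % column names
disp(s.x_data);                  % underlying data

cell2csv("df_out.csv", c);       % write the original cell array to CSV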
PS. You can generally see what methods a package provides via pkg describe -verbose dataframe, or the methods for a particular class via methods(dataframe) (or even methods(df)). Also, if you ever wanted to access the documentation for an overloaded method, e.g. say the summary method, then this is the syntax for doing so: help @dataframe/summary

Gnuradio number of output items

I'm trying to display "number of items on the output stream" in the flowgraph.
Is there a way to access the function: block__nitems_written(unsigned int which_output) from the flowgraph?
So far I have tried "from gnuradio import gr" and then using gr.block__nitems_written(0) as the value of a variable. The error I get is:
module object has no attribute block__nitems_written.
I think I am not calling the function properly. Any help will be appreciated!
You're confusing things! That's not a property of a flow graph.
Each block has its own number of items that it's written to its output ports.
Hence, it's a method of gr.block, which you can only call with a block instance, i.e. typically as self.nitems_written(0) within a block's work method.
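For context, a minimal sketch of an embedded Python block that reports its own write count (the block name and signal types are arbitrary):

import numpy as np
from gnuradio import gr

class count_probe(gr.sync_block):
    """Pass-through block that logs nitems_written on every work call."""

    def __init__(self):
        gr.sync_block.__init__(self,
                               name="count_probe",
                               in_sig=[np.complex64],
                               out_sig=[np.complex64])

    def work(self, input_items, output_items):
        output_items[0][:] = input_items[0]
        # nitems_written is an instance method of gr.block, so it needs a
        # block instance (self); it cannot be called on the gr module.
        print("items written so far:", self.nitems_written(0))
        return len(output_items[0])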