I'm using MATLAB (Mapping Toolbox) to create a large number of lines between different countries. Since there are so many lines, I'm trying to do this using object-oriented programming.
The problem is that I've created a lot of objects (lines) from the class 'Transline', but when I try to export the whole set as a shapefile using the 'shapewrite' command, MATLAB reports that the input is invalid because 'shapewrite' expects an argument of type 'struct' rather than 'Transline' (the class of these objects). Is there any way I can keep the object-oriented design and still export the set of lines as a shapefile?
Thank you.
I would think your best option is to simply call struct(myObjs) on your object before passing it to shapewrite. If the resulting structure is not in the correct format, you can overload the struct method in your class, e.g.:
methods
    function myStructOfObj = struct(obj)
        % create the correct structure here
    end
end
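For concreteness, here is a minimal sketch of what such an overload might look like. It assumes your Transline class stores its geometry and attributes in properties named X, Y and Name (those names are illustrative, not taken from your code) and that shapewrite is given a line mapstruct:

methods
    function s = struct(objArray)
        % Build a struct array with the fields shapewrite expects,
        % one element per Transline object.
        for k = numel(objArray):-1:1       % iterate backwards so s is preallocated
            s(k).Geometry = 'Line';
            s(k).X = objArray(k).X;        % assumed coordinate property
            s(k).Y = objArray(k).Y;        % assumed coordinate property
            s(k).Name = objArray(k).Name;  % assumed attribute property
        end
    end
end

You would then export with something like shapewrite(struct(myLines), 'translines.shp').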
I am trying to use the Accord.net library to build test methods for several of the machine learning algorithms that library supports.
One of the issues I have run into is that when I am trying to codify my string data, the Codification class does not seem capable of dealing with any datatable columns that are not strings, despite the documentation saying otherwise.
Codification codebook = new Codification(fulldata, AllAttributeNames);
I call that line where fulldata is a datatable. I have tried including columns of both Int32 and Double type, and the Codification class has thrown an error saying it is unable to convert them to type String:
"System.InvalidCastException: 'Unable to cast object of type 'System.Double' to type 'System.String'.'"
EDIT: It turns out this error is because the Codification system can only handle alternate data types if it is encoding the entire table. I suppose I can see the logic here, although I would prefer a better error, or that the method was a little smarter.
I now have another issue that has cropped up related to this. After changing my code to this:
Codification codebook = new Codification(fulldata);
I then call learning.Learn(inputs, outputs) to train my algorithm and want to use the newly trained algorithm. So the next step would be to take a bunch of test data, make sure it matches the codebook's encoding, and send it through the algorithm. Unfortunately, when I try to use
int[][] testinput = codebook.Transform(testData, inputColumnNameArray);
It blows up claiming it could not find a mapping to transform. It does this in reference to an Integer column that the codebook correctly did not map to new values. So now it seems this Transform method is not capable of handling non-string columns, and I have not found an overload of it that can, even though the documentation indicates it should be able to handle this.
Does anyone know how to get around this issue without manually building the entire int[][] testinput array one value at a time?
Turns out I was able to answer my own question eventually.
As near as I can tell, there are two ways of using the Codification class. The constructor that takes a list of column names, as well as the Transform methods, both lack intelligence in dealing with non-string data types; perhaps these methods are going away in the future.
The constructor that just takes a datatable by itself, as well as the Apply method, are both capable of handling data types other than strings. Once I switched to using these two methods my errors went away.
Codification codebook = new Codification(fulldata);
int[][] testinput = codebook.Apply(testData, inputColumnNameArray);
The confusion for me lay in the example code seemingly using these two methods at random, with the Apply method used only when processing the training data and the Transform method used when encoding test data.
I am not sure why they chose to do this in the documentation example code, but it definitely took me a long time to figure out what was going on enough to stop having this particular issue.
I'm using reflection to serialize an object. Getting the values as objects is a real murder on performance due to late-binding penalties. CType / DirectCast can get rid of most of it, but I can't feed a type variable into them, so currently I'm using a switch-case block on the type variable to select the correct DirectCast.
It came to my attention that CTypeDynamic exists and takes type variables but the return type is Object so... it converts an object into an object, cool. That got me wondering, what is the purpose of this function?
The CTypeDynamic function looks for dynamic information and performs the cast/conversion appropriately. This is different from the CType operator, which looks for static information at compile time or relies on the types being IConvertible.
This function examines the object at runtime including looking for Shared (aka static) custom operators. As always, if you know the type then use CType, but if you need dynamic casting then you need to use CTypeDynamic.
More information here: http://blogs.msmvps.com/bill/2010/01/24/ctypedynamic/
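As a rough illustration (the Temperature type and its conversion operator below are made up for this example), CTypeDynamic can discover a user-defined conversion at run time even when all you have is an Object and a Type:

Public Class Temperature
    Public Property Degrees As Double
    ' A Shared (static) user-defined conversion operator.
    Public Shared Widening Operator CType(t As Temperature) As Double
        Return t.Degrees
    End Operator
End Class

' ...
Dim boxed As Object = New Temperature With {.Degrees = 21.5}

' CType from Object relies on IConvertible and would throw here,
' but CTypeDynamic finds the Shared operator at run time.
Dim result As Object = CTypeDynamic(boxed, GetType(Double)) ' boxes a Double
Dim d As Double = CTypeDynamic(Of Double)(boxed)            ' generic overload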
When I was writing some general-purpose utility code, I found that it's good to have both an in-place mutator and a new-object-creating member function for the same piece of functionality.
For example, a class that represents a file system path may have "normalize" functionality. The path object may either mutate itself into its normalized form or return a new, normalized path object.
class path {
    ...
    void normalize_itself();
    path get_new_normalized_path();
    ...
};
I've tried several conventions for this, but most of them are not satisfactory:
'normalize!' for the in-place function, as in Ruby - good, but most other languages don't allow special characters in identifiers.
'normalize_ip' for the in-place function - since most of my usages are in-place, I think it's too ugly.
'get_normalized' for the non-in-place function - acceptable, but it can be confused with a simple getter for a member.
'normalized' for the non-in-place function - sometimes not uniform, and easily confused with its in-place counterpart.
writing the non-in-place function as a free function - loses the IDE's IntelliSense assistance, and sometimes has visibility issues.
I'd like to find a good, practical convention to distinguish the two functions.
I think normalize for your mutator and grab_normalized for your object creator would work.
I have the following in my constants file:
typedef enum
{
    AnimalTypeBear,
    AnimalTypeCamel,
    AnimalTypeCow,
    AnimalTypeCount
} AnimalType;
If I declare an AnimalType variable somewhere in my code like the following and set it to AnimalTypeBear:
AnimalType animalType = 0;
Is there a way to somehow derive the string "Bear" from that animalType variable, or in general to access the string of its corresponding constant (in this case AnimalTypeBear)?
Enums are constant expressions, like #define. At compile time, enum values are "translated" into the code as constants (while #define is evaluated before compilation, by the preprocessor). So basically it is not possible to reference the enum's name as a string in this way.
As suggested by others, you can use a string array.
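For example, a minimal sketch of that approach (the array name is illustrative; it just has to stay in the same order as the enum):

static NSString * const AnimalTypeNames[AnimalTypeCount] = {
    @"Bear",
    @"Camel",
    @"Cow"
};

// ...
AnimalType animalType = AnimalTypeBear;
NSString *name = AnimalTypeNames[animalType];   // @"Bear"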
You cannot do this without writing code in (Objective-)C. If you want to be able to use actual enumeration literals as strings in your code, or during I/O, with language support, then you need to use a language whose enumerations carry their literals, such as Pascal or Ada.
If you are keen to have this and don't mind the work, as long as it is reusable, then you need to learn about reading the symbol table structures from a binary and make sure that this information is not stripped from your application. The debugger can show the correct literals, and if you use Xcode's "Product > Generate Output > Assembly File" menu item you'll see the literals are in there as strings. It will be a lot of work for you, but it would be reusable once done.
After that, give up and write some code - a simple static array of labels and an index operation. Yes, it's a maintenance headache if you ever change your enumeration.
Alternatively, you can write some different code, say in Ruby... Xcode supports adding your own file "types" and running scripts to (pre-)process them. So you could define, say, a file type ".enum" and use a Ruby script to convert it into a C enumeration definition plus code to provide the strings. Apple has examples of using Ruby to pre-process files in this way. Once you have your script, Xcode will do the rest: on each compilation it will run your script to convert your ".enum" into ".m" (or ".c") and then compile the result. This approach is usually best for files which contain only one thing, e.g. localised string file processing; you don't usually write your enum declarations in their own files.
I would like to save my objects as XML so that other applications can read from and write to the data files -- something that is very difficult with Matlab's binary mat files.
The underlying problem I'm running into is that Matlab's equivalent of reflection (which I've used to do similar things in .NET) is not very functional with respect to private properties. Matlab's struct(object) function offers a hack in terms of writing XML from an object because while I can't do
x = myInstance.myPrivateProperty;
...I can do
props = struct(myInstance);
x = props.myPrivateProperty;
So, I can create a pure (contains no objects) struct from any object using the code below, and then it's trivial to write an XML file using a pure struct.
But, is there any way to reverse the process? That is, to create an object instance using the data saved by the code below (that data contains a list of all non-Dependent, non-Constant, non-Transient properties of the class instance, and the name of the class)? I was thinking about having all my objects inherit from a class called XmlSerializable which would accept a struct as a single argument in the constructor and then assign all the values contained in the struct to the correspondingly-named properties. But, this doesn't work because if MyClass inherits XmlSerializable, the code in XmlSerializable isn't allowed to set the private properties of MyClass (related to How can I write generalized functions to manipulate private properties?). This would be no problem in .NET (See Is it possible to set private property via reflection?), but I'm having trouble figuring it out in Matlab.
This code creates a struct that contains all of the state information for the object(s) passed in, yet contains no object instances. The resulting struct can be trivially written to XML:
function s = toPureStruct(thing)
    if isstruct(thing)
        s = collapseObjects(thing);
        s.classname = 'struct';
    elseif isobject(thing)
        s.classname = class(thing);
        warning off MATLAB:structOnObject;
        allprops = struct(thing);
        warning on MATLAB:structOnObject
        mc = metaclass(thing);
        for i = 1:length(mc.PropertyList)
            p = mc.PropertyList(i);
            if strcmp(p.Name, 'classname')
                error('toStruct:PropertyNameCollision', 'Objects used in toStruct may not have a property named ''classname''');
            end
            if ~(p.Dependent || p.Constant || p.Transient)
                if isobject(allprops.(p.Name))
                    s.(p.Name) = toPureStruct(allprops.(p.Name));
                elseif isstruct(allprops.(p.Name))
                    s.(p.Name) = collapseObjects(allprops.(p.Name));
                else
                    s.(p.Name) = allprops.(p.Name);
                end
            end
        end
    else
        error(['Conversion to pure struct from ' class(thing) ' is not possible.']);
    end
end

function s = collapseObjects(s)
    fnames = fields(s);
    for i = 1:length(fnames)
        f = s.(fnames{i});
        if isobject(f)
            s.(fnames{i}) = toPureStruct(f);
        elseif isstruct(f)
            s.(fnames{i}) = collapseObjects(f);
        end
    end
end
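Typical usage, assuming a hypothetical writeStructToXml helper (not shown) that serialises a plain struct to an XML file:

s = toPureStruct(myInstance);            % myInstance can be any object or struct
writeStructToXml(s, 'myInstance.xml');   % hypothetical struct-to-XML writer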
EDIT: One of the other "applications" that I would like to be able to read the saved files is a version control system (to track changes in parameters in configurations defined by Matlab objects), so any viable solution must be capable of producing human-intelligible text. The toPureStruct method above does this once the struct is converted to XML.
You might be able to sidestep this issue by using the new v7.3 MAT file format for your saved objects. Unlike the older MAT file formats, v7.3 is a variant of HDF5, and there is HDF5 support and libraries in other languages. This could be a lot less work, and you'd probably get better performance too, since HDF5 is going to have a more efficient representation of numeric arrays than naive XML will.
It's not the default format; you can enable it using the -v7.3 switch to the save function.
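For example (the variable and file names here are just placeholders):

save('myObjects.mat', 'myInstance', '-v7.3');   % writes an HDF5-based MAT file
h5disp('myObjects.mat')                         % inspect the HDF5 layout from MATLAB, or use any HDF5 library elsewhere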
To the best of my knowledge, what I want to do is impossible in Matlab 2011b. It might be possible, per @Andrew Janke's answer, to do something similar using Matlab's load command on binary HDF5 files that can be read and modified by other programs. But this adds an enormous amount of complexity, as Matlab's HDF5 representation of even the simplest class is extremely opaque. For instance, if I create a SimpleClass classdef in Matlab with two standard properties (prop1 and prop2), the HDF5 binary generated with the -v7.3 switch is 7k, the expanded XML is 21k, and the text "prop1" and "prop2" do not appear anywhere. What I really want to create from that SimpleClass is this:
<root>
  <classname>SimpleClass</classname>
  <prop1>123</prop1>
  <prop2>456</prop2>
</root>
I do not think it is possible to produce the above text from class properties in a generalized fashion in Matlab, even though it would be possible, for instance, in .NET or Java.