How to detect objects in ML.NET (C#) using a YOLOv3 model (with two outputs)?

I have a pre-trained tiny YOLOv3 model and I want to use it in C# in order to be able to detect objects.
I came across the following working sample code, but the tutorial is made for a tiny YOLOv2 model with these properties:
while my pre-trained model has these properties:
So there is an incompatibility not only in the names, but also in the number of outputs and in the input/output parameters. Since ML is not my area, I am having difficulty migrating the code to support this new model.
What I did so far was:
renaming all occurrences of the input parameter image to the value 000_net
renaming all occurrences of the output parameter grid to the value 016_convolutional
changing the values in this section
from:
public const int ROW_COUNT = 13;
public const int COL_COUNT = 13;
public const int CHANNEL_COUNT = 125;
public const int BOXES_PER_CELL = 5;
public const int BOX_INFO_FEATURE_COUNT = 5;
public const int CLASS_COUNT = 20;
public const float CELL_WIDTH = 32;
public const float CELL_HEIGHT = 32;
into (based on the comments from the person who provided me the model):
public const int ROW_COUNT = 13;
public const int COL_COUNT = 13;
public const int CHANNEL_COUNT = 18;
public const int BOXES_PER_CELL = 3;
public const int BOX_INFO_FEATURE_COUNT = 5;
public const int CLASS_COUNT = 1;
public const float CELL_WIDTH = 32;
public const float CELL_HEIGHT = 32;
I also replaced the class names with the one class that the model is trained for. (For what it's worth, these values are self-consistent: CHANNEL_COUNT = BOXES_PER_CELL * (BOX_INFO_FEATURE_COUNT + CLASS_COUNT) = 3 * (5 + 1) = 18.)
After all my changes the application does not throw errors, but it shows warnings and the objects are not detected as they should be. Here is the output that I got:
The warnings say:
...
2020-08-25 14:46:40.5959296 [W:onnxruntime:, graph.cc:863 onnxruntime::Graph::Graph] Initializer 022_convolutional_bn_bias appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
2020-08-25 14:46:40.5970795 [W:onnxruntime:, graph.cc:863 onnxruntime::Graph::Graph] Initializer 022_convolutional_bn_mean appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
2020-08-25 14:46:40.5979695 [W:onnxruntime:, graph.cc:863 onnxruntime::Graph::Graph] Initializer 022_convolutional_bn_var appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
2020-08-25 14:46:40.5988356 [W:onnxruntime:, graph.cc:863 onnxruntime::Graph::Graph] Initializer 022_convolutional_conv_weights appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
2020-08-25 14:46:40.5996638 [W:onnxruntime:, graph.cc:863 onnxruntime::Graph::Graph] Initializer 023_convolutional_conv_bias appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.
2020-08-25 14:46:40.6006995 [W:onnxruntime:, graph.cc:863 onnxruntime::Graph::Graph] Initializer 023_convolutional_conv_weights appears in graph inputs and will not be treated as constant value/weight. This may prevent some of the graph optimizations, like const folding. Move it out of graph inputs if there is no need to override it, by either re-generating the model with latest exporter/converter or with the tool onnxruntime/tools/python/remove_initializer_from_input.py.

Related

How do I map a texture onto an Entity in a Minecraft Plugin

I'm trying to write a plugin to brand cattle and thought it would be pretty easy, but I'm stuck looking for the information that would help me do this.
Where can I find information that will help me map a texture (from a PNG, for example) onto an Entity? While there's information about built-in textures for Players etc., I haven't found a resource that would help me understand how I could get something to render on the side of an Entity.
I'm guessing that I'd use something like the following calls...
Minecraft.getMinecraft().renderEngine.bindTexture(new ResourceLocation("tc:textures/gui/my-icon.png"));
Minecraft.getMinecraft().ingameGUI.drawTexturedModalRect(etc);
I'm not certain how I'd work them into the drawing of a cow or a horse.
This isn't possible while using Bukkit, since Bukkit is server-side and can't change textures. There's one exception, though: Servers can send players resource packs. However, there does not appear to be a way to create a unique texture based off of any data, so you'd have to make all cows look the same. It wouldn't really do what you want. (Players are another exception, but protocol-wise, their skins are arbitrary anyways).
However, if you want to use Minecraft Forge, this is far more manageable. You can subclass the entity and change some of the rendering code. Perhaps you can also have an item of some sort (a branding iron, maybe) to convert existing cows into branded cows (they would still spawn as normal cows). I'm not too much of a Forge dev, but something like this should work (though I haven't tested it). This is more of an outline; things like converting the entity and creating an item I'll leave to you.
Here's a basic outline for an entity that tracks a texture between the server and the client:
import net.minecraft.entity.passive.EntityCow;
import net.minecraft.util.ResourceLocation;
import net.minecraft.world.World;

public class EntityBrandedCow extends EntityCow {

    // EntityCow has no default constructor, so pass the World through
    public EntityBrandedCow(World world) {
        super(world);
    }

    @Override
    protected void entityInit() {
        super.entityInit();
        // Data watcher lets you track data between the server and client
        // without handling packets yourself
        // http://wiki.vg/Entities
        this.dataWatcher.addObject(14, "minecraft:textures/entity/cow/cow.png");
    }

    public void setTexture(ResourceLocation texture) {
        this.dataWatcher.updateObject(14, texture.toString());
    }

    public ResourceLocation getTexture() {
        return new ResourceLocation(this.dataWatcher.getWatchableObjectString(14));
    }
}
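You'll also need to register the entity itself with Forge somewhere in your mod setup; something like the following should work for a 1.8-era mod (the entity name, the mod-local ID and the tracking parameters below are just placeholder values, and "this" stands for your mod instance):
import net.minecraftforge.fml.common.registry.EntityRegistry;

// e.g. during your mod's init handler:
// 0 = mod-local entity ID, 64 = tracking range, 1 = update frequency, true = send velocity updates
EntityRegistry.registerModEntity(EntityBrandedCow.class, "BrandedCow", 0, this, 64, 1, true);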
You'll need to register a custom renderer for your new entity. This would go in the client proxy.
RenderingRegistry.registerEntityRenderingHandler(EntityBrandedCow.class, new RenderBrandedCow(Minecraft.getMinecraft().getRenderManager(), new ModelCow(), .7f));
And here's a renderer you can use:
import net.minecraft.util.ResourceLocation;
import net.minecraft.client.model.ModelBase;
import net.minecraft.entity.Entity;
import net.minecraft.client.renderer.entity.RenderLiving;
import net.minecraft.client.renderer.entity.RenderManager;

public class RenderBrandedCow extends RenderLiving {

    public RenderBrandedCow(RenderManager manager, ModelBase model, float shadowSize) {
        super(manager, model, shadowSize);
    }

    @Override
    protected ResourceLocation getEntityTexture(Entity entity) {
        return ((EntityBrandedCow) entity).getTexture();
    }
}
That renderer only changes the texture and doesn't actually overlay anything. This, among other things, means that texture packs won't change branded cows without creating additional textures. An alternative would be to create a second layer. (This is based on the way sheep wool works - see net.minecraft.client.renderer.entity.layers.LayerSheepWool and net.minecraft.client.renderer.entity.RenderSheep.) You can change the renderer to this:
import net.minecraft.client.model.ModelBase;
import net.minecraft.client.renderer.entity.RenderCow;
import net.minecraft.client.renderer.entity.RenderManager;

public class RenderBrandedCow extends RenderCow {

    public RenderBrandedCow(RenderManager manager, ModelBase model, float shadowSize) {
        super(manager, model, shadowSize);
        this.addLayer(new LayerCowBrand(this));
    }
}
And here's the start of some kind of layer rendering code. This won't work on its own; you'll need to write a ModelBrand (see ModelSheep1 for the basis of that).
import net.minecraft.client.renderer.entity.layers.LayerRenderer;
import net.minecraft.entity.EntityLivingBase;

public class LayerCowBrand implements LayerRenderer {
    private final RenderBrandedCow renderer;
    private final ModelBrand model = new ModelBrand();

    public LayerCowBrand(RenderBrandedCow renderer) {
        this.renderer = renderer;
    }

    public void doRenderLayer(EntityBrandedCow entity, float p_177162_2_, float p_177162_3_, float p_177162_4_, float p_177162_5_, float p_177162_6_, float p_177162_7_, float p_177162_8_) {
        // It's common to write a second method with the right parameters...
        // I don't know off my hand what the parameters here are.
        this.renderer.bindTexture(entity.getTexture());
        this.model.setModelAttributes(this.renderer.getMainModel());
        this.model.setLivingAnimations(entity, p_177162_2_, p_177162_3_, p_177162_4_);
        this.model.render(entity, p_177162_2_, p_177162_3_, p_177162_5_, p_177162_6_, p_177162_7_, p_177162_8_);
    }

    public boolean shouldCombineTextures() {
        // I don't know
        return true;
    }

    public void doRenderLayer(EntityLivingBase p_177141_1_, float p_177141_2_, float p_177141_3_, float p_177141_4_, float p_177141_5_, float p_177141_6_, float p_177141_7_, float p_177141_8_) {
        // This is the actual render method that implements the interface.
        this.doRenderLayer((EntityBrandedCow) p_177141_1_, p_177141_2_, p_177141_3_, p_177141_4_, p_177141_5_, p_177141_6_, p_177141_7_, p_177141_8_);
    }
}
Hopefully this at least lets you get started. As I said, I'm not a Forge dev, but this should be the basics. If you want to ask more questions about Forge, post here on Stack Overflow using the minecraft-forge tag (Gaming Stack Exchange also has a minecraft-forge tag, but that's for mod usage, not development).

How Can I merge complex shapes stored in an ArrayList with Geomerative Library

I store shapes of this class:
class Berg {
    int vecPoint;
    float[] shapeX;
    float[] shapeY;

    Berg(float[] shapeX, float[] shapeY, int vecPoint) {
        this.shapeX = shapeX;
        this.shapeY = shapeY;
        this.vecPoint = vecPoint;
    }

    void display() {
        beginShape();
        curveVertex(shapeX[vecPoint - 1], shapeY[vecPoint - 1]);
        for (int i = 0; i < vecPoint; i++) {
            curveVertex(shapeX[i], shapeY[i]);
        }
        curveVertex(shapeX[0], shapeY[0]);
        curveVertex(shapeX[1], shapeY[1]);
        endShape();
    }
}
in an ArrayList with
shapeList.add(new Berg(xBig,yBig,points));
The shapes are defined with eight (curveVertex-)points (xBig and yBig) forming a shape around a randomly positioned center.
After checking whether the shapes intersect, I want to merge the ones that overlap each other. I already have the intersection detection working but am struggling with the merging.
I read that the library Geomerative has a way to do something like that with union() but RShapes are needed as parameters.
So my question is: how can I convert my shapes into the required RShape type? Or, more generally (maybe I made some mistakes overall): how can I merge complex shapes stored in an ArrayList, with or without the Geomerative library?
Take a look at the API for RShape: http://www.ricardmarxer.com/geomerative/documentation/geomerative/RShape.html
That lists the constructors and methods you can use to create an RShape out of a series of points. It might look something like this:
class Berg {

    // ...existing fields and methods from above...

    public RShape toRShape() {
        RShape rShape = new RShape();
        // depending on how you want the outline built, you may also want to
        // start with addMoveTo(shapeX[0], shapeY[0]) and close the contour at the end
        for (int i = 0; i < vecPoint; i++) {
            rShape.addLineTo(shapeX[i], shapeY[i]);
        }
        return rShape;
    }
}
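Once each Berg can produce an RShape, merging the overlapping ones comes down to calling union() on them. Something along these lines should work as a starting point (untested sketch; it assumes union() returns a new RShape and that RG.init() / RG.shape() are used as in the Geomerative examples):
import geomerative.*;

// the ArrayList from the question
ArrayList<Berg> shapeList = new ArrayList<Berg>();

void setup() {
  size(400, 400);
  // Geomerative must be initialized with the sketch before any RShape work
  RG.init(this);
  // ... fill shapeList with Berg instances as before ...
}

// union all shapes in the list into one RShape
RShape mergeAll(ArrayList<Berg> list) {
  RShape merged = list.get(0).toRShape();
  for (int i = 1; i < list.size(); i++) {
    merged = merged.union(list.get(i).toRShape());
  }
  return merged;
}

void draw() {
  background(255);
  if (!shapeList.isEmpty()) {
    noFill();
    RG.shape(mergeAll(shapeList)); // draw the merged outline
  }
}
In practice you would only union the shapes your intersection test flags as overlapping and keep the non-overlapping ones as they are.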

Encog binary classification score for ROC

I am working on a binary classifier using Encog (via Java). I have it set up using an SVM or a neural network, and I want to evaluate the quality of the different models using (in part) the area under the ROC curve.
More specifically, I would ideally like to convert the output of the model into some kind of prediction confidence score that can be used for rank ordering in the ROC, but I have yet to find anything in the documentation.
In the code, I get the model results with something like:
MLData result = ((MLRegression) method).compute( pair.getInput() );
String classification = normHelper.denormalizeOutputVectorToString( result )[0];
How do I also get a numerical confidence of the classification?
I have found a way to coax prediction probabilities out of SVM inside the encog framework. This method relies upon the equivalent of the -b option for libSVM (see http://www.csie.ntu.edu.tw/~cjlin/libsvm/index.html)
To do this, override the SVM class from Encog. The constructor will enable probability estimates via the svm_parameter object (see below). Then, when doing the calculation, call the method svm_predict_probability as shown below.
Caveat: below is only a code fragment, and in order to be useful you will probably need to write other constructors and pass the resulting probabilities out of the methods below. This fragment is based upon Encog version 3.3.0.
public class MySVMProbability extends SVM {

    public MySVMProbability(SVM method) {
        super(method.getInputCount(), method.getSVMType(), method.getKernelType());
        // Enable probability estimates
        getParams().probability = 1;
    }

    @Override
    public int classify(final MLData input) {
        svm_model model = getModel();
        if (model == null) {
            throw new EncogError(
                    "Can't use the SVM yet, it has not been trained, "
                    + "and no model exists.");
        }
        final svm_node[] formattedInput = makeSparse(input);
        final double probs[] = new double[svm.svm_get_nr_class(getModel())];
        final double d = svm.svm_predict_probability(model, formattedInput, probs);
        /* probabilities for each class are in probs[] */
        return (int) d;
    }

    @Override
    public MLData compute(MLData input) {
        svm_model model = getModel();
        if (model == null) {
            throw new EncogError(
                    "Can't use the SVM yet, it has not been trained, "
                    + "and no model exists.");
        }
        final MLData result = new BasicMLData(1);
        final svm_node[] formattedInput = makeSparse(input);
        final double probs[] = new double[svm.svm_get_nr_class(getModel())];
        final double d = svm.svm_predict_probability(model, formattedInput, probs);
        /* probabilities for each class are in probs[] */
        result.setData(0, d);
        return result;
    }
}
Encog has no direct support for ROC curves. A ROC curve is more of a visualization than an actual model type, and model types are Encog's primary focus.
Generating a ROC curve for SVMs and neural networks is somewhat different. For a neural network, you must establish thresholds for the classification neurons. There is a good paper about that here: http://www.lcc.uma.es/~jja/recidiva/048.pdf
I may eventually add direct support for ROC curves to Encog. They are becoming a very common visualization.
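That said, once you have a numeric score per example (the probability from svm_predict_probability above, or the raw value of the network's output neuron), computing the area under the ROC curve needs nothing from Encog. Here is a minimal rank-based sketch in plain Java (the class and method names are mine, not Encog API; ties between scores are not given special treatment):
import java.util.Arrays;
import java.util.Comparator;

public final class RocAuc {

    /**
     * Rank-based AUC: the probability that a randomly chosen positive
     * example gets a higher score than a randomly chosen negative one.
     * scores[i] = model confidence that example i is positive (higher = more positive),
     * labels[i] = true if example i really is positive.
     */
    public static double auc(double[] scores, boolean[] labels) {
        Integer[] order = new Integer[scores.length];
        for (int i = 0; i < order.length; i++) {
            order[i] = i;
        }
        // sort example indices by ascending score
        Arrays.sort(order, Comparator.comparingDouble(i -> scores[i]));

        long positives = 0;
        long negatives = 0;
        long rankSum = 0;
        for (int rank = 0; rank < order.length; rank++) {
            if (labels[order[rank]]) {
                positives++;
                rankSum += rank + 1; // 1-based rank of this positive example
            } else {
                negatives++;
            }
        }
        // Mann-Whitney U statistic normalised to [0, 1]
        return (rankSum - positives * (positives + 1) / 2.0) / (positives * negatives);
    }
}
Higher scores must mean "more likely positive"; if your denormalized output is inverted, flip the sign of the scores before calling it.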

MultiScaleImageSource GetTileLayers Explanation

I have been reading up on MultiScaleImage sources, and finding anything useful has proven to be quite difficult, so I turn to the experts here. The specific knowledge I would like pertains to the GetTileLayers method. I know this method is used to get the image tiles, but I have no idea where it is called from, where the parameters come from, or how I would use it if I subclassed the MultiScaleTileSource class. Any insight into this method or the MSI model would be amazing, but I have 3 main questions:
1. Where should/is the method GetTileLayers called from?
2. How should I change this method if I wanted to draw png's from a non-local URI?
3. Where can I find some reading to help with this?
In order to create a custom tile source, you would subclass MultiScaleTileSource and override the GetTileLayers method, as shown in the example below, which defines an image consisting of 1000*1000 tiles of size 256x256 pixels each.
public class MyTileSource : MultiScaleTileSource
{
    public MyTileSource()
        : base(1000 * 256, 1000 * 256, 256, 256, 0)
    {
    }

    protected override void GetTileLayers(
        int tileLevel, int tilePositionX, int tilePositionY,
        IList<object> tileImageLayerSources)
    {
        // create an appropriate URI for tileLevel, tilePositionX and tilePositionY
        // and add it to the tileImageLayerSources collection
        var uri = new Uri(...);
        tileImageLayerSources.Add(uri);
    }
}
Now you would assign an instance of your MyTileSource class to your MultiScaleImage control:
MultiScaleImage msImage = ...
msImage.Source = new MyTileSource();

Accessing a C/C++ structure of callbacks through a DLL's exported function using JNA

I have a vendor supplied .DLL and an online API that I am using to interact with a piece of radio hardware; I am using JNA to access the exported functions through Java (because I don't know C/C++). I can call basic methods and use some API structures successfully, but I am having trouble with the callback structure. I've followed the TutorTutor guide here and also tried Mr. Wall's authoritative guide here, but I haven't been able to formulate the Java side syntax for callbacks set in a structure correctly.
I need to use this exported function:
BOOL __stdcall SetCallbacks(INT32 hDevice,
CONST G39DDC_CALLBACKS *Callbacks, DWORD_PTR UserData);
This function references the C/C++ Structure:
typedef struct {
    G39DDC_IF_CALLBACK IFCallback;
    // more omitted
} G39DDC_CALLBACKS;
...which according to the API has these Members (Note this is not an exported function):
VOID __stdcall IFCallback(CONST SHORT *Buffer, UINT32 NumberOfSamples,
                          UINT32 CenterFrequency, WORD Amplitude,
                          UINT32 ADCSampleRate, DWORD_PTR UserData);
// more omitted
I have a G39DDCAPI.java where I have loaded the DLL library and reproduced the API exported functions in Java, with the help of JNA. Simple calls to that work well.
I also have a G39DDC_CALLBACKS.java where I have implemented the above C/C++ structure in a format that works for other API structures. This callback structure is where I am unsure of the syntax:
import java.util.Arrays;
import java.util.List;
import java.nio.ShortBuffer;
import com.sun.jna.Structure;
import com.sun.jna.platform.win32.BaseTSD.DWORD_PTR;
import com.sun.jna.win32.StdCallLibrary.StdCallCallback;

public class G39DDC_CALLBACKS extends Structure {
    public G39DDC_IF_CALLBACK IFCallback;
    // more omitted

    protected List getFieldOrder() {
        return Arrays.asList(new String[] {
            "IFCallback", "DDC1StreamCallback" // more omitted
        });
    }

    public static interface G39DDC_IF_CALLBACK extends StdCallCallback {
        public void invoke(ShortBuffer _Buffer, int NumberOfSamples,
                int CenterFrequency, short Amplitude,
                int ADCSampleRate, DWORD_PTR UserData);
    }
}
Edit: I made my arguments more type-safe, as Technomage suggested. I am still getting a null pointer exception with several attempts to call the callback. Since I'm not sure of my syntax regarding the callback structure above, I can't pinpoint the problem in my main method below. Right now the relevant section looks like this:
int NumberOfSamples = 65536; // This is usually 65536.
ShortBuffer _Buffer = ShortBuffer.allocate(NumberOfSamples);
int CenterFrequency = 10000000; // Specifies center frequency (in Hz) of the useful band
                                // in received 50 MHz wide snapshot.
short Amplitude = 0; // The possible value is 0 to 32767.
int ADCSampleRate = 100; // Specifies sample rate of the ADC in Hz.
DWORD_PTR UserData = null;

G39DDC_CALLBACKS callbackStruct = new G39DDC_CALLBACKS();
lib.SetCallbacks(hDevice, callbackStruct, UserData);
// hDevice is a handle for the hardware device used -- works in other uses
// lib is a reference to the library in G39DDCAPI.java -- works in other uses
// The UserData is a big unknown -- I don't know what to do with this variable
// as a DWORD_PTR
callbackStruct.IFCallback.invoke(_Buffer, NumberOfSamples, CenterFrequency,
        Amplitude, ADCSampleRate, UserData);
EDIT NO 2:
I have one callback working, somewhat, but I don't have control over the buffers. More frustratingly, a single call to invoke the method results in several runs of the custom callback, usually with multiple output files (results vary drastically from run to run). I don't know if it is because I am not allocating memory correctly on the Java side, because I cannot free the memory on the C/C++ side, or because I have no cue telling Java when to access the buffer, etc. The relevant code looks like:
// before this, main method sets library, starts DDCs, initializes some variables...
// API call to start IF
System.out.print("Starting IF... " + lib.StartIF(hDevice, Period) + "\n");

G39DDC_CALLBACKS callbackStructure = new G39DDC_CALLBACKS();
callbackStructure.IFCallback = new G39DDC_IF_CALLBACK() {
    @Override
    public void invoke(Pointer _Buffer, int NumberOfSamples, int CenterFrequency,
            short Amplitude, int ADCSampleRate, DWORD_PTR UserData) {
        // notification
        System.out.println("Invoked IFCallback!!");
        try {
            // ready file and writers
            File filePath = new File("/users/user/G39DDC_Scans/");
            if (!filePath.exists()) {
                System.out.println("Making new directory...");
                filePath.mkdir();
            }
            String filename = "Scan_" + System.currentTimeMillis();
            File fille = new File("/users/user/G39DDC_Scans/" + filename + ".txt");
            if (!fille.exists()) {
                System.out.println("Making new file...");
                fille.createNewFile();
            }
            FileWriter fw = new FileWriter(fille.getAbsoluteFile());
            // callback body
            short[] deBuff = new short[NumberOfSamples];
            int offset = 0;
            int arraySize = NumberOfSamples;
            deBuff = _Buffer.getShortArray(offset, arraySize);
            for (int i = 0; i < NumberOfSamples; i++) {
                String str = deBuff[i] + ",";
                fw.write(str);
            }
            fw.close();
        } catch (IOException e1) {
            System.out.println("IOException: " + e1);
        }
    }
};

lib.SetCallbacks(hDevice, callbackStructure, UserData);
System.out.println("Main, before callback invocation");
callbackStructure.IFCallback.invoke(s_Pointer, NumberOfSamples, CenterFrequency, Amplitude, ADCSampleRate, UserData);
System.out.println("Main, after callback invocation");

// suddenly having trouble stopping DDCs or powering off device; assume it has to do with dll using the functions above
// System.out.println("StopIF: " + lib.StopIF(hDevice)); // API function returns boolean value
// System.out.println("StopDDC2: " + lib.StopDDC2( hDevice, Channel));
// System.out.println("StopDDC1: " + lib.StopDDC1( hDevice, Channel ));
// System.out.println("test_finishDevice: " + test_finishDevice( hDevice, lib));
System.out.println("Program Exit");
// END MAIN METHOD
You need to extend StdCallCallback, for one, otherwise you'll likely crash when the native code tries to call the Java code.
Any place you see a Windows type with _PTR, you should use a PointerType - the platform package with JNA includes definitions for DWORD_PTR and friends.
Finally, you can't have a primitive array argument in your G39DDC_IF_CALLBACK. You'll need to use Pointer or an NIO buffer; Pointer.getShortArray() may then be used to extract the short[] by providing the desired length of the array.
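Put together, the callback declaration inside your structure might look something like this (same names as your structure above, just with Pointer, from com.sun.jna.Pointer, in place of the buffer argument):
// inside G39DDC_CALLBACKS, replacing the ShortBuffer version:
public static interface G39DDC_IF_CALLBACK extends StdCallCallback {
    void invoke(Pointer Buffer, int NumberOfSamples,
            int CenterFrequency, short Amplitude,
            int ADCSampleRate, DWORD_PTR UserData);
}
Inside your implementation of invoke() you can then call Buffer.getShortArray(0, NumberOfSamples) to copy the samples out, which is what your second edit already does.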
EDIT
Yes, you need to initialize your callback field in the callbacks structure before passing it into your native function, otherwise you're just passing a NULL pointer, which will cause complaints on the Java or native side or both.
This is what it takes to create a callback, using an anonymous instance of the declared callback function interface:
myStruct.callbackField = new MyCallback() {
    public void invoke(int arg) {
        // do your stuff here
    }
};
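One more thing to keep in mind: JNA callbacks are only valid while the Java objects backing them are strongly reachable, so keep a reference to the structure (or the callback itself) for as long as the native library may invoke it, for example in a field rather than a local variable. A rough illustration (the G39DDCLibrary name here just stands for whatever interface your G39DDCAPI.java declares):
// held in a field so the callback is not garbage collected while
// the native side still holds a pointer to it
private static G39DDC_CALLBACKS activeCallbacks;

static void registerCallbacks(int hDevice, G39DDCLibrary lib, DWORD_PTR userData,
        G39DDC_CALLBACKS.G39DDC_IF_CALLBACK ifCallback) {
    activeCallbacks = new G39DDC_CALLBACKS();
    activeCallbacks.IFCallback = ifCallback;
    lib.SetCallbacks(hDevice, activeCallbacks, userData);
}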