After I built my custom object detector with ssd_mobilenet_v2_fpnlite_320x320, the results look like classification rather than object detection.
Even after I changed classification -> detection in the ssd_mobilenet_v2_fpnlite_320x320 config file..., it still gives me the results shown in the pictures I uploaded. I have no idea what is wrong with my object detection.
Also, sometimes after training, the class name + percentage does not appear on the detected image. For example, the 4th picture I uploaded does not show the class name and percentage...
*** The really weird thing is that when I use 'inference graph/saved_model' to detect an image, it behaves like the first 4 pictures, but when I use ssd_mobile..tpu/saved_model, it works fine, as in the 5th picture. ***
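For context, detection inference with a SavedModel exported from the TF2 Object Detection API looks roughly like this (a minimal sketch; the paths, the test image name, and the 0.5 score threshold are just placeholders):

import numpy as np
import tensorflow as tf

# Placeholder path to the exported model (the output of exporter_main_v2.py).
detect_fn = tf.saved_model.load('exported-model/saved_model')

# Placeholder test image; the model expects a uint8 batch of shape [1, H, W, 3].
image = tf.io.decode_image(tf.io.read_file('test.jpg'), channels=3)
input_tensor = tf.expand_dims(tf.cast(image, tf.uint8), 0)

detections = detect_fn(input_tensor)
boxes = detections['detection_boxes'][0].numpy()    # normalized [ymin, xmin, ymax, xmax]
scores = detections['detection_scores'][0].numpy()
classes = detections['detection_classes'][0].numpy().astype(np.int64)

# Detections below the visualization threshold (typically 0.5) are not drawn,
# which can make an image appear without any class name + percentage.
keep = scores > 0.5
print(list(zip(classes[keep].tolist(), scores[keep].tolist())))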
I checked how to implement image classification with ARFoundation
and I found https://qiita.com/cvusk/items/77d5afef76447d173f02
It also provides a GitHub repository link, but I failed to run it,
so I made my own new project and then moved all the files into my new project folder.
But on an Android device, it keeps showing an error on
detector.Invoke(cameraTexture);
The error is
NullReferenceException: Object reference not set to an instance of an object
It seems cameraTexture is null...
Has anyone succeeded in running the above example?
How are we supposed to implement the environment's render method in gym, so that Monitor's produced videos are not black (as they appear to me right now)? Or, alternatively, in which circumstances would those videos be black?
To give more context, I was trying to use the gym's wrapper Monitor. This wrapper writes (every once in a while, how often exactly?) to a folder some .json files and an .mp4 file, which I suppose represents the trajectory followed by the agent (which trajectory exactly?). How is this .mp4 file generated? I suppose it's generated from what is returned by the render method. In my specific case, I am using a simple custom environment (i.e. a very simple grid world/maze), where I return a NumPy array that represents my environment's current state (or observation). However, the produced .mp4 files are black, while the array clearly is not black (because I am also printing it with matplotlib's imshow). So, maybe Monitor doesn't produce those videos from the render method's return value. So, how exactly does Monitor produce those videos?
(In general, how should we implement render, so that we can produce nice animations of our environments? Of course, the answer to this question depends also on the type of environment, but I would like to have some guidance)
This might not be an exhaustive answer, but here's how I did it.
First I added rgb_array to the render.modes list in the metadata dictionary at the beginning of the class.
If you don't have such a thing, add the dictionary, like this:
class myEnv(gym.Env):
    """ blah blah blah """
    metadata = {'render.modes': ['human', 'rgb_array'], 'video.frames_per_second': 2}
    ...
You can change the desired framerate, of course; I don't know whether every framerate will work, though.
Then I changed my render method. Depending on the input parameter mode: if it is 'rgb_array', it returns a three-dimensional NumPy array, which is just a 'numpyfied' PIL.Image (np.asarray(im), with im being a PIL.Image).
If mode is 'human', just display the image or do something else to show your environment the way you like it.
As an example, my code is
def render(self, mode='human', close=False):
    # Render the environment to the screen
    im = <obtain image from env>
    if mode == 'human':
        plt.imshow(np.asarray(im))
        plt.axis('off')
    elif mode == 'rgb_array':
        return np.asarray(im)
So basically, return an RGB matrix.
Looking at the gym source code, it seems there are other ways that work, but I'm not an expert in video rendering, so for those other ways I can't help.
Regarding your question "how often exactly [are the videos saved]?", I can point you to this link that helped me with that.
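For completeness, here is a minimal sketch of wrapping the environment with Monitor (gym 0.18.0 is assumed; the output directory and the "every 10th episode" schedule are just examples):

import gym
from gym.wrappers import Monitor

env = myEnv()
# force=True overwrites any previous recording in the directory.
# video_callable decides per episode id whether to record; leaving it out
# makes gym fall back to its default (capped cubic) schedule.
env = Monitor(env, './video', force=True,
              video_callable=lambda episode_id: episode_id % 10 == 0)

obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()  # finalizes the .mp4 and .json files

As far as I can tell, the recorded frames come from render(mode='rgb_array'), so if that call returns a black array (or nothing at all), the videos come out black.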
As a final side note, video saving with the gym Monitor wrapper does not work because of a mis-indentation bug (as of today, 30/12/20, gym version 0.18.0); if you want to fix it, do as this guy did.
(I'm sorry if my English sometimes felt weird, feel free to harshly correct me)
I am trying to build a custom dataset loader that loads the ICDAR dataset.
My first step was to embed a dataset inside my loader, as also suggested
here in this post, but the problem is that you then have to implement manually all the nice features that the TensorFlow 2 "Dataset" class offers.
My second try was to subclass the Dataset class, something like:
class MyDataset(tf.data.Dataset):
    def __init__(self):
        super(MyDataset, self).__init__()

    def preprocess_images(self):
        pass
But the problem is that I did not find any documentation on what the Dataset class internally really does; the only implementation I found was this one.
So the question is: does anybody know how to build a custom "dataset" in TF2 by subclassing tf.data.Dataset?
By the way, I also tried tensorflow_datasets, but it did not really work, since it wants to download the dataset and split it itself, whereas in this case the data is already separated into train and test, and ICDAR cannot be downloaded without registration.
The content of the ICDAR dataset is as follows:
An image
A list of all texts in each image
A list of bounding boxes for each text in each image
Image:
(image not shown here; https://rrc.cvc.uab.es/?ch=4 owns the copyright of this image.)
Words and bounding boxes for the above image:
377,117,463,117,465,130,378,130,Genaxis Theatre
493,115,519,115,519,131,493,131,[06]
374,155,409,155,409,170,374,170,###
492,151,551,151,551,170,492,170,62-03
376,198,422,198,422,212,376,212,Carpark
494,190,539,189,539,205,494,206,###
374,1,494,0,492,85,372,86,###
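For illustration (and to make clearer what I am trying to build): each line above is eight corner coordinates (x1,y1,x2,y2,x3,y3,x4,y4) followed by the transcription, with '###' marking unreadable text. A from_generator-based sketch of the kind of loader I have in mind (the file names are made up, and output_signature requires TF 2.4 or newer):

import numpy as np
import tensorflow as tf

def parse_gt_file(path):
    # One ICDAR-style line: x1,y1,x2,y2,x3,y3,x4,y4,transcription
    boxes, texts = [], []
    with open(path, encoding='utf-8-sig') as f:
        for line in f:
            parts = line.strip().split(',')
            if len(parts) < 9:
                continue
            boxes.append([int(v) for v in parts[:8]])
            texts.append(','.join(parts[8:]))  # the transcription may itself contain commas
    return np.array(boxes, np.int32).reshape(-1, 8), texts

def example_generator(image_paths, gt_paths):
    for img_path, gt_path in zip(image_paths, gt_paths):
        image = tf.io.decode_jpeg(tf.io.read_file(img_path), channels=3)
        boxes, texts = parse_gt_file(gt_path)
        yield image, boxes, texts

# Made-up file layout, just for the sketch.
image_paths = ['train_images/img_1.jpg']
gt_paths = ['train_gts/gt_img_1.txt']

dataset = tf.data.Dataset.from_generator(
    lambda: example_generator(image_paths, gt_paths),
    output_signature=(
        tf.TensorSpec(shape=(None, None, 3), dtype=tf.uint8),  # image
        tf.TensorSpec(shape=(None, 8), dtype=tf.int32),        # one box per text
        tf.TensorSpec(shape=(None,), dtype=tf.string),         # transcriptions
    ),
)

This gives the usual Dataset methods (map, batch, prefetch, ...) on top of the custom loading logic, but it is not subclassing tf.data.Dataset, which is what I am actually asking about.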
Thanks
I want to make a shape that can't be constructed using SceneKit's built-in geometry models, so I want to use some other 3D modeling program for that. I am interested in whether such models (created, for example, in Blender) can act like models created directly in SceneKit. I want to be able to apply materials and change the object's color in code, and I want to know beforehand whether this is possible with imported models.
I know I can export the model as a .dae (Collada) file, and that way I can certainly use the model, but I can't change its material.
If it is possible to change it in some other way, I would appreciate it if you could briefly explain how the object should be exported from Blender (and in which format).
Actually, yes, you can change the material of a model in Collada (.dae) format.
Materials are represented by the class SCNMaterial.
Here are the ways you can access a geometry's materials:
First, you have probably the easiest one, first-material access:
SCNNode.geometry.firstMaterial
This gives you the first material that the object is using.
Next, you have access to the full material list:
SCNNode.geometry.materials
This method gives you an NSArray containing all the materials that the object is using.
Then, finally, you have the good ol' access by name:
[SCNNode.geometry materialWithName: NSString]
This method gives you the first material with the given name (or nil if no material has that name), rather than an array.
And in the Apple docs:
What is SCNNode.geometry? Find out here
Material access and manipulation.
A side note:
To actually control the color/image of an SCNMaterial, you need to use SCNMaterialProperty.
An SCNMaterial is made up of several SCNMaterialProperty objects (diffuse, specular, and so on).
For more info, please read the docs.
I'm programming a museum app and I'd like to display a 3D model that responds to the user's touches, like pinch to zoom or moving around the model. I've searched a lot, but all I found were game engines that seem very complicated for this. Is there any way to import models (the format they come in doesn't matter), display them, and make them touch responsive? If the code (or the engine) is open source, that would be better; I'd prefer a free option over a paid one. So many thanks!
Update: right now I'm able to load the 3D model using Cocos3D, but, as I said in an answer, the model I can load is very low-poly. It's an app for a museum, so it would have to be a much more detailed model. I'm using the standard Cocos3D template project that shows the animated "hello world"; I just changed the .pod file to load the one I want and started adding a few modifications to support user touch interaction. I have to reduce the number of original polygons by about 80% to load it (this is how a small part of the model looks). If I try to load the model reduced by only about 50% of the original (which looks great, like these), the app crashes and gives me this crash log:
** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'OpenGL ES 1.1 supports only GL_UNSIGNED_SHORT or GL_UNSIGNED_BYTE types for vertex indices'
* First throw call stack:
(0x22cc012 0x1ca9e7e 0x22cbe78 0x173ff35 0x1b550f 0x186751 0x180a81 0x17b750 0x11de32 0x1270d4 0x1263ac 0x14f1a2 0x13ca01 0x14ee02 0x14d45e 0x14d3c2 0x14bb22 0x14a452 0x14efcc 0x14d493 0x14d3c2 0x1643e3 0x162a41 0x10c197 0x10c11d 0x10c098 0x3d79c 0x3d76f 0x85282 0x16e9884 0x16e9737 0x8b56f 0xc4192d 0x1cbd6b0 0x505fc0 0x4fa33c 0x4fa150 0x4780bc 0x479227 0x51bb50 0xbef9ff 0xbf04e1 0xc01315 0xc0224b 0xbf3cf8 0x2fd4df9 0x2fd4ad0 0x2241bf5 0x2241962 0x2272bb6 0x2271f44 0x2271e1b 0xbef7da 0xbf165c 0x1ca506 0x2a55)
libc++abi.dylib: terminate called throwing an exception
(lldb)
It can't load all the polygons and crashes. Is there any solution for that? Or must I start looking for another way to load the model? If you want more information, just ask. Thanks.
I used Cocos3D to import an Earth model and rotate it according to the gestures made by the user. You can give it a look; it's not a complex thing to do.
Have a look at this post for some sample code about loading the model. For handling rotation, I found this post very useful.