I am using the TensorFlow Object Detection API to detect objects. It is working fine on my Windows system. How can I change it so that it only detects specific objects, for example, only humans and not all the other objects?
As per the 1st comment in this answer, I have checked the visualization file but didn't find anything related to categories of objects. Then I looked into category_util.py and found out that there is a CSV file from which all the categories are being loaded, but I didn't find this CSV file in the project. Can anyone please point me in the right direction? Thanks
I assume from your question that you did not fine-tune the model yourself, but just used a pretrained one from the model zoo!?
In this case, I think the model already detects humans AND other objects, and you want these other objects to disappear!? To do so, you just have to change your label_map.pbtxt by deleting all classes which you don't need. If you are not sure where to find this file, have a look into your .config file and search for label_map_path: "PATH".
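For example, if your model was pretrained on COCO and you only want people, a trimmed-down label_map.pbtxt could look like the sketch below (this assumes "person" keeps its original class id, which is 1 in the standard COCO label map; some label maps also carry a display_name field):

```
item {
  id: 1
  name: "person"
}
```

Note that the id has to stay the same as in the original label map, because the network still outputs the original class indices; the label map only tells the visualization code what name to show for each index.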
I'm playing around with Mediapipe and I'm trying to better understand how the graph works and what the inputs/outputs of the different calculators are.
If I understand correctly, the .pbtxt files are just plain-text instructions that describe how each calculator should interact with the rest of the calculators. These files are compiled into .binarypb files, which are fed to Mediapipe.
For example, this .pbtxt file got compiled into this .binarypb file.
I have a few questions:
I saw https://viz.mediapipe.dev/, which seems to be Mediapipe's playground. That playground seems to compile the text in the textarea on the right. If that is correct, how does it do it? Is there any documentation I can read about it? How are .pbtxt files compiled into .binarypb? (See the sketch after these questions.)
I'm especially interested in the web capabilities of Mediapipe and I'd like to create a small POC using both the face-mesh and depth-to-iris features. Unfortunately, there isn't a "solution" for the second one, but there is a demo in Mediapipe's viz claiming depth-to-iris web support (the demo doesn't seem to be working correctly, though). If I were able to create a .pbtxt with a pipeline containing the features that I'm interested in, how would I "compile" the .wasm and .data files required to deploy the code to the web?
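Regarding the first question: a .pbtxt graph is just the text format of a CalculatorGraphConfig protobuf, and the corresponding .binarypb is the same message serialized in binary, so the "compilation" is really just protobuf serialization. A minimal sketch in Python (assuming the mediapipe pip package is installed; the file names are placeholders):

```python
from google.protobuf import text_format
from mediapipe.framework import calculator_pb2

# Parse the human-readable graph definition (text-format protobuf).
with open("graph.pbtxt") as f:
    config = text_format.Parse(f.read(), calculator_pb2.CalculatorGraphConfig())

# Write the same message serialized in binary, which is what a .binarypb contains.
with open("graph.binarypb", "wb") as f:
    f.write(config.SerializeToString())
```

The same conversion can be done on the command line with protoc --encode. The .wasm and .data files are a different story: as far as I know they are produced by building MediaPipe's C++/web code (Bazel plus Emscripten in the MediaPipe repo), not from the .pbtxt itself.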
I prepared my dataset and created a version of it. Then I tried to export the dataset in TensorFlow Object Detection CSV format, but when I opened the resulting zip file, I saw that there is nothing inside it except "README.roboflow.txt" and "README.Dataset.txt".
Is there anything I'm doing wrong, or is it still in development, or is it a bug?
Thanks
I just tried this and was able to get my test, train, and valid splits in the folder, which also included the README.Dataset.txt and README.roboflow.txt.
Can you try again or share the email you used to build this project so one of us Roboflow staff members can take a look at it? Feel free to DM it to me in our forum if it still doesn't work.
Kelly M., Developer Advocate
I can already see three ways, but none are quick (not compared to, say, accessing a raw file on GitHub):
Fork/download (requires registration)
Follow instructions here (i.e. download, open up in jupyter/ipython notebook)
Copy the code blocks manually, one by one (bad for long notebooks)
Is there an easier way? (I'm hoping, ideally, to add raw to the URL somewhere, just like on GitHub.)
In case it's useful to others, put the notebook URL here to extract the raw code.
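If you'd rather do the extraction yourself, here is a minimal sketch using nbformat (the URL is a placeholder; any link that serves the raw .ipynb JSON will do):

```python
import urllib.request

import nbformat

# Placeholder URL; any link that returns the raw .ipynb JSON works.
url = "https://raw.githubusercontent.com/someuser/somerepo/main/notebook.ipynb"

with urllib.request.urlopen(url) as resp:
    nb = nbformat.reads(resp.read().decode("utf-8"), as_version=4)

# Keep only the code cells and join them into one script.
code = "\n\n".join(cell.source for cell in nb.cells if cell.cell_type == "code")
print(code)
```

For a notebook you already have locally, `jupyter nbconvert --to script notebook.ipynb` does the same job from the command line.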
Can anybody please explain to me what kernel-level operations are performed when a file is edited? The thing I'm confused about is whether a new inode is created every time a file is edited. Please explain the steps, if possible. I have searched the internet, but found no satisfactory answers there.
Thanks in advance.
There's no single general answer, because this depends on what the application does when it's editing the file, what system it's running on, and what the file is stored on. It might be creating new temporary files, or clobbering and rewriting the original file, or using memory mapping, or using versioned filesystem features, or doing network file system operations, etc etc.
Instead of trying to answer this in the abstract, pick an open source editor you're interested in, and read through its source code and debug it to understand what it in particular is doing. Then if you have questions, you can read the API docs to figure out what kernel operations the functions it's calling map to or rely on.
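To make the inode question concrete, here is a small sketch (POSIX filesystems; the file name is a placeholder) contrasting two common editing strategies: truncating and rewriting the file in place keeps the inode, while the "write a temp file, then rename it over the original" approach used by many editors usually produces a new inode:

```python
import os
import tempfile

path = "example.txt"
with open(path, "w") as f:
    f.write("original contents\n")
print("initial inode:", os.stat(path).st_ino)

# Strategy 1: truncate and rewrite the existing file -> same inode.
with open(path, "w") as f:
    f.write("edited in place\n")
print("after in-place edit:", os.stat(path).st_ino)

# Strategy 2: write a temporary file, then rename it over the original
# (what many editors do for crash safety) -> usually a different inode.
fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
with os.fdopen(fd, "w") as f:
    f.write("edited via temp file + rename\n")
os.replace(tmp_path, path)
print("after rename edit:", os.stat(path).st_ino)
```

Which of these (or other) strategies an editor uses is exactly the kind of thing you can discover by reading its source or tracing its system calls.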
I'm new to Core Data and I can't find the answer in the docs (but I'm sure it's somewhere):
I defined the properties for my entities and am testing the third version of an application (ASOC, ObjC, ObjC+CoreData): I write, read, create and remove objects, undo/redo actions, autosave, and everything is working like a charm for the moment (Stefan, my old dictionaries are gone, replaced by… well… managed objects, I suppose).
I'm saving my file in binary format. The images, icons, and RTFD texts are "Transformable"-type properties, because binding images as data is a deprecated approach which issues a warning (once).
Now: what if I decide to ADD a property to an entity? The previous files become unreadable! The app issues an alert:
The document “xxx” could not be opened. The file isn’t in the correct format.
I suppose Apple has implemented a sort of "backward compatibility", as the file is archived with keys/properties: when I archived some dictionaries, I could add or remove keys without problems…
Any link welcome!
If I understood you right, you changed your Core Data model and want to use it with the binary store you created before. If that's the issue, you need to perform a Core Data migration; the whole process is described here:
http://developer.apple.com/library/mac/#documentation/cocoa/conceptual/CoreDataVersioning/Articles/Introduction.html
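If the only change is something simple like adding an attribute, lightweight migration is usually enough. A minimal Objective-C sketch (this assumes you have added a new model version in Xcode and marked it as the current one; coordinator and storeURL are placeholders for your own Core Data stack):

```objc
NSDictionary *options = @{
    NSMigratePersistentStoresAutomaticallyOption: @YES,  // migrate existing stores automatically
    NSInferMappingModelAutomaticallyOption: @YES         // let Core Data infer the mapping model
};

NSError *error = nil;
// 'coordinator' and 'storeURL' are assumed to exist in your own setup.
[coordinator addPersistentStoreWithType:NSBinaryStoreType
                          configuration:nil
                                    URL:storeURL
                                options:options
                                  error:&error];
```

The important part is adding a new model version (Editor > Add Model Version… in Xcode) instead of editing the existing model in place; otherwise Core Data cannot find a model that matches the files you saved earlier.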