I trained a model to detect a single person holding a weapon. However, no matter how many times I train it, it draws a bounding box around the entire image instead of around each person holding a weapon.
For annotation/images, I cropped each image myself and created an XML file like the one below:
<annotation>
    <folder>images</folder>
    <filename>images_1.jpg</filename>
    <path>I wrote nothing here just for privacy</path>
    <source>
        <database>Unknown</database>
    </source>
    <size>
        <width>240</width>
        <height>320</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>PersonHoldingRifle</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>1</xmin>
            <ymin>1</ymin>
            <xmax>238</xmax>
            <ymax>307</ymax>
        </bndbox>
    </object>
</annotation>
I am attempting to determine the dtype of the root-level attributes of an existing HDF5 file using h5py. I have a crude solution that works, but I find it unattractive and hope there is a better way. The file's root-level attributes are shown in the HDFView program.
I need to know that the attribute CHS Data Format has type 'string, length=9', or that Save Point ID is 64-bit floating-point, so that I can properly transform them in code. I can get this information in a brute-force manner, but I am hoping there is a cleaner way.
hf = h5py.File(hdf5_filename, 'r')
for k in hf.attrs.keys():
    print(k, hf.attrs[k], type(hf.attrs[k]), hf.attrs[k].dtype)
which yields:
CHS Data Format b'Version_1' <class 'numpy.bytes_'> |S9
Grid Name b'SMA' <class 'numpy.bytes_'> |S3
Latitude Units b'deg' <class 'numpy.bytes_'> |S3
Longitude Units b'deg' <class 'numpy.bytes_'> |S3
Project b'USACE_NACCS' <class 'numpy.bytes_'> |S11
Region b'Virginia_to_Maine' <class 'numpy.bytes_'> |S17
Save Point ID 1488.0 <class 'numpy.float64'> float64
Save Point Latitude b'41.811900' <class 'numpy.bytes_'> |S9
Save Point Longitude b'-71.398700' <class 'numpy.bytes_'> |S10
Vertical Datum b'MSL' <class 'numpy.bytes_'> |S3
This gives me the information I need, but it requires parsing, and I would also like to have the additional information that is shown in the HDFView image.
Even though this is likely not the best/clearest solution, I am posting it in case it is of some assistance to others trying to achieve the same goal.
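One direction that avoids parsing the printed output is h5py's low-level attribute interface, h5py.h5a, which exposes each attribute's dtype, shape, and storage size directly. A minimal sketch (not necessarily the cleanest way; hdf5_filename is the same variable as in the question):

import h5py

with h5py.File(hdf5_filename, 'r') as hf:
    for k in hf.attrs.keys():
        # Open the attribute itself instead of reading its value.
        aid = h5py.h5a.open(hf.id, k.encode('utf-8'))
        # aid.dtype is a numpy dtype: |S9 for a 9-byte string,
        # float64 for a 64-bit float, and so on.
        print(k, aid.dtype, aid.shape, aid.get_storage_size())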
So I am trying to create custom datasets for object detection using the TensorFlow Object Detection API. When working with open-source datasets, the annotation files I have come across are PASCAL VOC XMLs or JSONs. These contain a list of labels for each class, for example:
<annotation>
    <folder>open_images_volume</folder>
    <filename>0d2471ff2d033ccd.jpg</filename>
    <path>/mnt/open_images_volume/0d2471ff2d033ccd.jpg</path>
    <source>
        <database>Unknown</database>
    </source>
    <size>
        <width>1024</width>
        <height>1024</height>
        <depth>3</depth>
    </size>
    <segmented>0</segmented>
    <object>
        <name>Chair</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>8</xmin>
            <ymin>670</ymin>
            <xmax>409</xmax>
            <ymax>1020</ymax>
        </bndbox>
    </object>
    <object>
        <name>Table</name>
        <pose>Unspecified</pose>
        <truncated>0</truncated>
        <difficult>0</difficult>
        <bndbox>
            <xmin>232</xmin>
            <ymin>654</ymin>
            <xmax>555</xmax>
            <ymax>1020</ymax>
        </bndbox>
    </object>
</annotation>
Here the annotation file describes two classes, Table and Chair. I am only interested in detecting chairs, which is why the .pbtxt file I have generated is simply:
item {
  id: 1
  display_name: "Chair"
}
My question is: will the model train on only the annotations of the class "Chair", since that is what I have defined via label_map.pbtxt, or do I need to manually scrape all the annotation files and strip the extra bounding-box coordinates with regex or an XML tree parser, so that the additional bounding boxes do not interfere with training? In other words, does the TF API let me select only certain classes for training even if the annotation files contain additional classes, or is it necessary to clean up the entire dataset and manually remove the unnecessary class labels? Will it affect training in any way?
You can use a .pbtxt that only has the classes you need to train on, and you don't have to change the XMLs.
Also, make sure to set num_classes to your number of classes in the pipeline config.
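For example, with only the Chair class, the relevant parts of the pipeline config would look roughly like this (the ssd model type and the label map path are placeholders for whatever your config uses):

model {
  ssd {
    num_classes: 1
    ...
  }
}
train_input_reader {
  label_map_path: "path/to/label_map.pbtxt"
  ...
}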
I am trying to simulate overtaking in my network. I came across a suggestion to use the <neigh> element to create opposite lanes. It is, however, not working, and I don't know if I omitted something. I also tried the command
netconvert --opposite.guess true --node-files (name) --edge-files (name) -t (name) -o (outputfile)
I have attached my code from the edge file:
<edges>
    <edge id="0" from="0" to="2" type="1">
        <neigh lane="a0"/>
    </edge>
    <edge id="2" from="2" to="4" type="1">
        <neigh lane="a2"/>
    </edge>
    <edge id="4" from="4" to="6" type="1">
        <neigh lane="a4"/>
    </edge>
</edges>
This is the result in the terminal after netconvert:
netconvert --node-files curve.nod.xml --edge-files curve.edg.xml -t curves.type.xml -o curve.net.xml
Warning: Removing unknown opposite lane 'a0' for edge '0'.
Warning: Removing unknown opposite lane 'a10' for edge '10'.
Warning: Removing unknown opposite lane 'a11' for edge '11'.
Warning: Removing unknown opposite lane 'a13' for edge '13'.
Warning: Removing unknown opposite lane 'a15' for edge '15'.
Warning: Removing unknown opposite lane 'a2' for edge '2'.
Warning: Removing unknown opposite lane 'a4' for edge '4'.
Warning: Removing unknown opposite lane 'a6' for edge '6'.
Warning: Removing unknown opposite lane 'a7' for edge '7'.
Warning: Removing unknown opposite lane 'a8' for edge '8'.
Warning: Removing unknown opposite lane 'a9' for edge '9'.
Success.
The lanes you specify in neigh need to exist. A valid example would be:
<edges>
    <edge id="a0" from="2" to="0" type="1"/>
    <edge id="0" from="0" to="2" type="1">
        <neigh lane="a0_0"/>
    </edge>
</edges>
This is only correct when the edge a0 has a single lane. The number after the _ denotes the lane index and should refer to the last lane on the given edge.
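Applied to the edge file from the question, the same fix for all three edges would look like this (a sketch assuming each opposite edge a0, a2, a4 has a single lane, so lane index 0):

<edges>
    <edge id="a0" from="2" to="0" type="1"/>
    <edge id="0" from="0" to="2" type="1">
        <neigh lane="a0_0"/>
    </edge>
    <edge id="a2" from="4" to="2" type="1"/>
    <edge id="2" from="2" to="4" type="1">
        <neigh lane="a2_0"/>
    </edge>
    <edge id="a4" from="6" to="4" type="1"/>
    <edge id="4" from="4" to="6" type="1">
        <neigh lane="a4_0"/>
    </edge>
</edges>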
I have a case based on the MeetingScheduling example.
The results are fine.
The scheduling begins with a Construction Heuristic (CH) phase.
Then there is a Local Search (LS) phase.
The CH phase reduces the hard and medium constraint penalties, while the LS phase seems to reduce the soft constraint penalties.
I found that when I re-run the scheduling, the CH phase reduces the hard and medium constraint penalties again.
So, can we configure the solver to alternate CH and LS phases several times?
The current solver config:
<?xml version="1.0" encoding="UTF-8"?>
<solver>
    <solutionClass>org.optaplanner.examples.meetings.domain.MeetingSchedule</solutionClass>
    <entityClass>org.optaplanner.examples.meetings.domain.Meeting</entityClass>
    <scoreDirectorFactory>
        <scoreDrl>org/optaplanner/examples/meetings/solver/meetingsScoreRules.drl</scoreDrl>
    </scoreDirectorFactory>
    <termination>
        <minutesSpentLimit>20</minutesSpentLimit>
    </termination>
</solver>
This should work:
<solver>
    ...
    <constructionHeuristic/>
    <localSearch>
        <termination>...stepCountLimit or calculateCountLimit?...</termination>
    </localSearch>
    <constructionHeuristic/>
    <localSearch>
        <termination>...stepCountLimit or calculateCountLimit?...</termination>
    </localSearch>
    <constructionHeuristic/>
    <localSearch>
        <termination>...stepCountLimit or calculateCountLimit?...</termination>
    </localSearch>
</solver>
And with the programmatic API you can make the number of CH/LS repetitions dynamic, up to n.
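A minimal sketch of that idea (class names from org.optaplanner.core.config; the step count limit and the way the base config is populated are placeholders):

import java.util.ArrayList;
import java.util.List;

import org.optaplanner.core.config.constructionheuristic.ConstructionHeuristicPhaseConfig;
import org.optaplanner.core.config.localsearch.LocalSearchPhaseConfig;
import org.optaplanner.core.config.phase.PhaseConfig;
import org.optaplanner.core.config.solver.SolverConfig;
import org.optaplanner.core.config.solver.termination.TerminationConfig;

public final class AlternatingPhasesConfig {

    public static SolverConfig buildSolverConfig(int n) {
        SolverConfig solverConfig = new SolverConfig();
        // Set solutionClass, entityClass and scoreDirectorFactory here,
        // exactly as in the XML config above.

        List<PhaseConfig> phaseConfigList = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            // One construction heuristic pass...
            phaseConfigList.add(new ConstructionHeuristicPhaseConfig());
            // ...followed by a local search pass with its own termination,
            // so the solver moves on to the next CH/LS pair.
            LocalSearchPhaseConfig localSearchConfig = new LocalSearchPhaseConfig();
            TerminationConfig phaseTermination = new TerminationConfig();
            phaseTermination.setStepCountLimit(200); // placeholder limit
            localSearchConfig.setTerminationConfig(phaseTermination);
            phaseConfigList.add(localSearchConfig);
        }
        solverConfig.setPhaseConfigList(phaseConfigList);
        return solverConfig;
    }
}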
That being said, this is probably a suboptimal solution. The right solution would be reheating (not yet supported).
I'm working on a Win8 editor that is basically based on a Canvas with Shapes such as Line, Rectangle, etc. on it. Those shapes can be manipulated by the user. Now I want to implement a custom shape that takes a list of 2D points from a laser scan (used in architecture).
So my question is: which base primitive would you use to display, say, 500 points? I was thinking of a Path, but then I would get a set of connected lines (path, polygon) rather than just the dots. So what else?
This picture illustrates what I want to achieve. All blue dots should be in one shape that can be dragged by the user.
My first guess would be a Path whose Data is a GeometryGroup filled with a lot of RectangleGeometries or EllipseGeometries. But I wonder what this means in terms of performance.
<Path Fill="LemonChiffon" Stroke="Black" StrokeThickness="1">
    <Path.Data>
        <GeometryGroup>
            <RectangleGeometry Rect="50,50,5,5" />
            <RectangleGeometry Rect="60,50,5,5" />
            <RectangleGeometry Rect="70,50,5,5" />
            ...
        </GeometryGroup>
    </Path.Data>
</Path>
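For reference, a minimal code-behind sketch of that approach (WinRT XAML; scanPoints is an assumed input collection from the laser scan):

using System.Collections.Generic;
using Windows.Foundation;
using Windows.UI;
using Windows.UI.Xaml.Media;
using Windows.UI.Xaml.Shapes;

public static class ScanShapeBuilder
{
    // Builds one Path from all scan points, so the whole point cloud
    // is a single element that can be hit-tested and dragged as a unit.
    public static Path Build(IEnumerable<Point> scanPoints)
    {
        var group = new GeometryGroup();
        foreach (Point p in scanPoints)
        {
            // One small rectangle (dot) per scan point.
            group.Children.Add(new RectangleGeometry { Rect = new Rect(p.X, p.Y, 5, 5) });
        }
        return new Path
        {
            Fill = new SolidColorBrush(Colors.LemonChiffon),
            Stroke = new SolidColorBrush(Colors.Black),
            StrokeThickness = 1,
            Data = group
        };
    }
}

Since geometries are not UIElements, one Path over a GeometryGroup is generally much lighter than 500 separate Rectangle shapes on the Canvas.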