single person detection in latest bodypix - tensorflow

When I try the bokeh segmentation effect using body-pix#1.0.0, it detects/segments the person (A) in front of the camera. If another person (B) is standing behind, away from A, B is blurred out. If person B comes very close to the contour of A, then B is also detected. This is the preferred behaviour.
Now when I try with body-pix#2.0.0, both person A and B are detected even though I am using the segmentPerson API. Please note that person B is standing far away from person A, yet both are detected. The advantage I see with 2.0 is that the contour of the detected person is much more accurate and smoother than in 1.0, which had a gap in the contour where the bokeh effect was missing; in 2.0 the contour is more accurate. But multiple people are detected. Is there any parameter I could tweak to restrict this to single-person detection while keeping the smoother contour?
Thanks

For those who want to know the answer. Source: https://github.com/tensorflow/tfjs/issues/2547
If you want to use BodyPix 2.0 to only blur just a subset of people (e.g. the large people), a quick way would be to use BodyPix 2.0's Multi-Person Segmentation API: https://github.com/tensorflow/tfjs-models/tree/master/body-pix#multi-person-segmentation.
This method returns an array of PersonSegmentation objects. In your case it will be an array of two PersonSegmentation objects: one for Person A and one for Person B.
You could then remove certain people (in your case Person B) from that array and pass the resulting array (with only one element: Person A) to drawBokehEffect https://github.com/tensorflow/tfjs-models/tree/master/body-pix#bodypixdrawbokeheffect.
To automate this process for other cases (3 or more people):
Each PersonSegmentation object has a .pose field that contains the 2D coordinates (in image pixel space) of the person's 17 keypoints. These can be used to compute the smallest bounding box for each person. The bounding box area can then be used as a criterion for removing the smaller people in the image, as sketched below.
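To make that selection concrete, here is a minimal sketch of the bounding-box filter, written in Python for brevity (in the browser the same logic would run in JavaScript over the array returned by segmentMultiPerson; the dictionary shape below simply mirrors the pose.keypoints[i].position.{x, y} layout of a PersonSegmentation object):

def bounding_box_area(segmentation):
    """Area of the smallest axis-aligned box around a person's 17 keypoints."""
    xs = [kp['position']['x'] for kp in segmentation['pose']['keypoints']]
    ys = [kp['position']['y'] for kp in segmentation['pose']['keypoints']]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def keep_largest_person(segmentations):
    """Keep only the person with the largest bounding box (Person A here)."""
    return [max(segmentations, key=bounding_box_area)]

The resulting one-element array is what you would then pass to drawBokehEffect.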

Related

Using MDAnalysis to extract coordinates in an array from pdb

I have a pdb file that is a subset of a much larger system. This pdb is special because I have some vectors based on this file's coordinate system. I want to draw these vectors and the basis vectors of that system onto the pdb. Eventually I would like to visualize the vectors and basis vectors as they move through an MD simulation, where I can update the vector positions based on the trajectory over time.
To start, I would like to read a pdb that has coordinates that define the basis vectors that further define the other vectors I want to visualize. Right now I'm using this class in MDAnalysis:
https://docs.mdanalysis.org/1.0.0/_modules/MDAnalysis/coordinates/PDB.html#PDBReader
molecule = mda.coordinates.PDB.PDBReader('molecule.pdb')
This works and it reads the pdb just fine, but it returns this variable type:
coordinates.PDB.Reader
I suppose this isn't a surprise, but I want to be able to print this variable and get some array of coordinate positions and atom types. I'd love to see the bonds as well, but that's not necessary. Now when I print it I get
<PDBReader molecule.pdb with 1 frames of 60 atoms>
I want something that would look like
[atomtype1,x1,y1,z1]...[atomtypen,xn,yn,zn]
Thank you,
Loading data: Universe
To get the coordinates in MDAnalysis you first load a Universe (you don't normally use the coordinate readers directly):
import MDAnalysis as mda
u = mda.Universe('molecule.pdb')
The Universe contains "topology" (atom types, bonds if available, etc) and the "trajectory" (i.e., coordinates).
Accessing per-atom data: Universe.atoms
All the atoms are stored in u.atoms (they form an AtomGroup). All information about the atoms is available from an AtomGroup. For example, all positions are available as a numpy array with
u.atoms.positions
The names are
u.atoms.names
and there are many more attributes. (The same works for residues: u.residues or u.atoms.residues gives the residues. You can also slice/index an AtomGroup to get a new AtomGroup. For example, the residues that belong to the first 20 atoms are u.atoms[:20].residues... AtomGroups are the key to working with MDAnalysis.)
Extracting atom names and positions
To build the list that you asked for:
names_positions = [[at] + list(pos) for at, pos in zip(u.atoms.names, u.atoms.positions)]
For example, with the test file PDB that is included with the MDAnalysisTests package:
import MDAnalysis as mda
from MDAnalysisTests.datafiles import PDB
u = mda.Universe(PDB)
names_positions = [[at] + list(pos) for at, pos in zip(u.atoms.names, u.atoms.positions)]
# show the first three entries
print(names_positions[:3])
gives
[['N', 52.017, 43.56, 31.555], ['H1', 51.188, 44.112, 31.722], ['H2', 51.551, 42.828, 31.039]]
Learning more...
For a rapid introduction to MDAnalysis have a look at the Quickstart Guide, which explains most of these things in more detail. It also tells you how to select specific atoms and form new AtomGroups.
Then you can have a look at the rest of the User Guide.
Bonds are a bit more tricky (you can get them with u.atoms.bonds, but if you want to use them you will have to learn more about how MDAnalysis represents topologies). I'd suggest you start by asking on the user mailing list (see participating in MDAnalysis), because this is where MDAnalysis developers primarily answer questions.
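That said, just listing the bonds is short. A minimal sketch, assuming a PDB without CONECT records so that bonds have to be guessed from interatomic distances (drop guess_bonds if your file already provides bond information):

import MDAnalysis as mda

# guess_bonds makes MDAnalysis infer bonds from distances and element radii
u = mda.Universe('molecule.pdb', guess_bonds=True)

for bond in u.atoms.bonds[:5]:          # first five bonds
    print(bond.indices, bond.length())  # atom indices and bond length in Angstrom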

Using collections to create random buildings with Blender

I had the idea of creating a fantasy city, and to avoid having the same house over and over without manually creating hundreds of houses, I was thinking of creating collections like "windows", "doors", "roofs", etc., then creating objects with vertices assigned to specific groups with the same names ("windows" vertex groups, "doors" vertex groups, etc.), and then having Blender pick, for each instance of a house, a random window for each vertex in the group, and the same for doors, roofs, etc.
Is there a way of doing this (I couldn't find anything online), or do I need to create a custom add-on? If so, is there a good reference or starting point where something close to this is done?
I've thought of particle systems and child objects, but couldn't find a way to attach a random part of a collection to a vertex. I also thought of booleans, but they have no option to attach to specific vertices, nor to use collections. So I'm out of ideas on how to approach this.
What I have in mind:
Create the basic shape, and assign vertices to the "windows" vertex group:
https://i.imgur.com/DAkgDR3.png
And then have random objects from the "Windows" collection attached to those vertices, as either a particle or a modifier:
https://i.imgur.com/rl5BDQL.png
Thanks for any help :)
OK, I've found a way of doing this.
I'm using 3 particle systems (doors, roofs and windows), each using vertices as emitters, and using vertex groups to define where to display one of each of the different options.
To avoid the particle emitter putting more than one object per vertex, I created a small script that counts the number of vertices in each vertex group and updates each particle system's emission count accordingly.
import bpy

o = bpy.data.objects["buildings"]
groups = ["windows", "doors", "roofs"]

for group in groups:
    vid = o.vertex_groups.find(group)
    # vertices that belong to this vertex group
    verts = [v for v in o.data.vertices if vid in [vg.group for vg in v.groups]]
    # emit exactly one particle per vertex in the group
    bpy.data.particles[group].count = len(verts)
I used someone's code from Stack Overflow for counting the number of vertices in a vertex group, but I can't find the link to the specific question again, so if you see your code here, please comment and I'll update my answer with proper credit.

CorePlot - dynamic x-axis data using two arrays

This is more of an open discussion topic than anything else. Currently I'm storing 50 Float32 values in my NSMutableArray *voltageArray before I refresh my CPTPlot *plot. Every time I obtain 50 values, I remove the previous 50 from voltageArray and repeat the process, always displaying the 50 values in "real time" on my plot.
However, the data I'm receiving (voltage from a Cypress BLE module equipped with a pressure transducer) arrives so quickly that any variation (0.4 V to 4.0 V; no pressure to lots of pressure) cannot be seen on my graph. It just shows up as a straight line, varying up and down without showing increased or decreased slopes.
To show overall change, I wanted to take those 50 values, store them in the first index of another NSMutableArray *stampArray, and use the index of stampArray to display information. Meanwhile, the numberOfRecordsForPlot: method would look like this:
- (NSUInteger)numberOfRecordsForPlot:(CPTPlot *)plot {
    return (DATA_PER_STAMP * _stampCount);
}
This would initially be 50; then, after 50 pieces of data are captured from the BLE module, _stampCount would increase by one and the number of records for the plot would increase by 50 (until about the 2500-10000 range, at which point I'd refresh the whole thing and restart the process).
Is this the right approach? How would I be able to make the first 50 points stay on the graph while building the next 50, and so on? Imagine a y = x^2 graph, and what the graph looks like when applying integration (breaking the area under the curve into rectangles).
Look at the "Real Time Plot" demo in the Plot Gallery example app included with Core Plot. It starts off with an empty plot, adding a new point each cycle until reaching the maximum number of points. After that, one old point is removed for each new one added so the total number stays constant. The demo uses a timer to pass random data to the plot, but your app can of course collect data from anywhere. Be sure to always interact with the graph from the main thread.
I doubt you'll be able to display 10,000 data points on one plot (does your display have enough pixels to resolve that many points?). If not, you'll get much better drawing performance if you filter and/or smooth the data to remove some of the points before sending them to the plot.
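The demo's bookkeeping boils down to a fixed-capacity sliding window. A minimal sketch of that idea, in Python rather than Objective-C (MAX_POINTS and add_sample are illustrative names, not Core Plot API):

from collections import deque

MAX_POINTS = 50                    # plot capacity, matching your 50-value batches
window = deque(maxlen=MAX_POINTS)  # appending past capacity drops the oldest value

def add_sample(value):
    window.append(value)
    # redraw here; numberOfRecordsForPlot: would simply return len(window)

In the Objective-C app the same effect comes from telling the plot that records were inserted and removed, always from the main thread, as the demo does.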

Seven-body-mechanism in Modelica

I have to model a "seven-body-mechanism" in Modelica:
The initial angles are given:
Starting with the left side (K5 and K7):
The Modelica Model:
Is it possible to model for example K5 as one body-shape and just specify the center of mass?
Where can I set the initial angles for K5 and K7? In the model "revolute2" it is possible to set only one "phi_start".
Which models should I use for the "fixed" B and O? There is this parameter: Position vector from world frame to frame_b, resolved in world frame.
Edit: I think I can fix the problem with two different angles; I just added another revolute:
The next problem I have: how do I model the revolute where K5 and K4 meet? I am not sure if I should also use two revolutes there. And how do I model the fixed points B and O? A is fixed to the origin, but I am not sure which position vectors to use for B and O.
I always get the error "all forces cannot be uniquely calculated".
Thank you very much for your help
Have a look at the Modelica.Mechanics.MultiBody.Examples.Loops.PlanarLoops_analytic example; it contains an example of the K4, K5, K6 and K7 mechanism. In this mechanism, set the start value of the revolute.
Well, the crucial part of the mechanism is the connection between O and B (a planar four-bar linkage), which can be solved using e.g. Modelica.Mechanics.MultiBody.Joints.Assemblies.JointRRR, as demonstrated in Modelica.Mechanics.MultiBody.Examples.Loops.PlanarLoops_analytic.
The binary members K5-K4 and K7-K6 are essentially the same, and they don't change the degrees of freedom of the above-mentioned four-bar linkage. So they have to be modelled in the same way (which means the revolute2 and revolute6 must be instantiated twice in your model) and connected in a similar fashion to the four-bar linkage once it is properly parameterized and initialized.
Optionally, you can model the mechanism using the PlanarMechanics library.

How can I compare two NSImages for differences?

I'm attempting to gauge the percentage difference between two images.
Having done a lot of reading, I seem to have a number of options, but I'm not sure which method is best for:
Ease of coding
Performance.
The methods I've seen are:
Non-language-specific (academic): Image comparison - fast algorithm
Mac-specific: direct pixel access, http://www.markj.net/iphone-uiimage-pixel-color/
Does anyone have any advice about which solutions make the most sense for the two criteria above, and code samples showing how to apply them?
I've had success calculating the difference between two images using the histogram technique mentioned here. redmoskito's answer in the SO question you linked to was actually my inspiration!
The following is an overview of the algorithm I used:
Convert the images to grayscale, so you compare one channel instead of three.
Divide each image into an n * n grid of "subimages". Then, for each subimage pair:
Calculate their colour composition histograms.
Calculate the absolute difference between the two histograms.
The maximum difference found between two subimages is a measure of the two images' difference. Other metrics could also be used (e.g. the average difference between subimages).
As tskuzzy noted in his answer, if your ultimate goal is a binary "yes, these two images are (roughly) the same" or "no, they're not", you need some meaningful threshold value. You could produce such a value by passing images into the algorithm and tweaking the threshold based on its output and how similar you think the images are. A form of machine learning, I suppose.
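For illustration, here is a minimal sketch of the grid-plus-histogram metric above, in Python with NumPy and Pillow rather than Objective-C (it assumes both images share identical dimensions; n and bins are tuning knobs):

import numpy as np
from PIL import Image

def grid_histogram_difference(path_a, path_b, n=4, bins=32):
    """Largest per-subimage histogram distance; 0.0 means identical content."""
    a = np.asarray(Image.open(path_a).convert('L'), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert('L'), dtype=np.float32)
    h, w = a.shape                  # assumes a and b have the same shape
    worst = 0.0
    for i in range(n):
        for j in range(n):
            sa = a[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
            sb = b[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
            ha, _ = np.histogram(sa, bins=bins, range=(0, 255), density=True)
            hb, _ = np.histogram(sb, bins=bins, range=(0, 255), density=True)
            worst = max(worst, float(np.abs(ha - hb).sum()))
    return worst

The returned score is the number you would compare against your tuned threshold.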
I recently wrote a blog post on this very topic, albeit as part of a larger goal. I also created a simple iPhone app to demonstrate the algorithm. You can find the source on GitHub; perhaps it will help?
It is really difficult to suggest something when you don't tell us more about the images or the variations. Are they shapes? Are they different objects, and you want to know what class of objects? Are they the same object, and you want to distinguish instances of it? Are they faces? Are they fingerprints? Are the objects in the same pose? Under the same illumination?
When you say performance, what exactly do you mean? How large are the images? All in all it really depends. Given what you've said, if the only criteria are ease of coding and performance, I would suggest just taking the absolute value of the difference of the pixels. That is super easy to code and about as fast as it gets, but it is really unlikely to work for anything other than the most synthetic examples.
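For reference, that baseline really is only a couple of lines. A sketch assuming two equally sized grayscale images (the filenames are placeholders):

import numpy as np
from PIL import Image

a = np.asarray(Image.open('a.png').convert('L'), dtype=np.float32)
b = np.asarray(Image.open('b.png').convert('L'), dtype=np.float32)
print(100.0 * np.abs(a - b).mean() / 255.0)  # mean pixel difference, as a percentage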
That being said I would like to point you to: DHOG, GLOH, SURF and SIFT.
You can use the fairly basic subtraction technique that the lads above suggested. #carlosdc has hit the nail on the head with regard to the type of image this basic technique can be used for. I have attached an example so you can see the results for yourself.
The first shows an image from a simulation at some time t. The second image, taken some (simulation) time later at t + dt, was subtracted from the first. The subtracted image (in black and white for clarity) then shows how the simulation changed in that time. This was done as described above and is very powerful and easy to code.
Hope this aids you in some way.
This is some old, nasty FORTRAN, but it should give you the basic approach. It is not that difficult at all. Because I am working with a two-colour palette, I only handle one channel; for colour images you would do this operation for R, G and B. That is, compute the intensity or value of each cell/pixel and store it in an array. Do the same for the other image, then subtract one array from the other; this leaves you with a colourful subtraction image. My advice would be to do as the lads suggest above: compute the magnitude of the sum of the R, G and B components so you get just one value per pixel. Write that to an array, do the same for the other image, then subtract. Then create a new range for either R, G or B and map the resulting subtracted array onto it, which will give a much clearer picture as a result.
* =============================================================
      SUBROUTINE SUBTRACT(FNAME1,FNAME2,IOS)
* This routine reads two images and subtracts one from the other
* =============================================================
* Common :
      INCLUDE 'CONST.CMN'
      INCLUDE 'IO.CMN'
      INCLUDE 'SYNCH.CMN'
      INCLUDE 'PGP.CMN'
* Input :
      CHARACTER fname1*(sznam),fname2*(sznam)
* Output :
      integer IOS
* Variables:
      logical glue
      character fullname*(szlin)
      character dir*(szlin),ftype*(3)
      integer i,j,nxy1,nxy2
      real si1(2*maxc,2*maxc),si2(2*maxc,2*maxc)
* =================================================================
      IOS = 1
      nomap=.true.
      ftype='map'
      dir='./pictures'
!     reading first image
      if(.not.glue(dir,fname2,ftype,fullname))then
         write(*,31) fullname
         return
      endif
      OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
      read(unit2,err=11)nxy2
      read(unit2,err=11)rad,dxy
      do i=1,nxy2
         do j=1,nxy2
            read(unit2,err=11)si2(i,j)
         enddo
      enddo
      CLOSE(unit2)
!     reading second image
      if(.not.glue(dir,fname1,ftype,fullname))then
         write(*,31) fullname
         return
      endif
      OPEN(unit2,status='old',name=fullname,form='unformatted',err=10,iostat=ios)
      read(unit2,err=11)nxy1
      read(unit2,err=11)rad,dxy
      do i=1,nxy1
         do j=1,nxy1
            read(unit2,err=11)si1(i,j)
         enddo
      enddo
      CLOSE(unit2)
!     subtracting images (sizes must match)
      if(nxy1.eq.nxy2)then
         nxy=nxy1
         do i=1,nxy1
            do j=1,nxy1
               si(i,j)=si2(i,j)-si1(i,j)
            enddo
         enddo
      else
         print *,'SUBTRACT: Different sizes of image arrays'
         IOS=0
         return
      endif
* normal finishing
      IOS=0
      nomap=.false.
      return
* exceptional finishing
 10   write (*,30) fullname
      return
 11   write (*,32) fullname
      return
 30   format('Cannot open file ',72A)
 31   format('Improper filename ',72A)
 32   format('Error reading from file ',72A)
      end
! =============================================================
Hope this is of some use. All the best.
Out of the methods described in your first link, the histogram comparison method is by far the simplest to code and the fastest. However, key point matching will provide far more accurate results, since you want a precise number describing the difference between two images.
To implement the histogram method, I would do the following:
Compute the red, green, and blue histograms of each image.
Add up the differences between corresponding buckets.
If the total difference is above a certain threshold, then the percentage is 0%.
Otherwise the colors found in the images are similar, so do a pixel-by-pixel comparison and convert the difference into a percentage.
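A minimal sketch of those four steps, in Python with NumPy and Pillow (the threshold value is arbitrary and the images are assumed to share the same dimensions; tune both for your data):

import numpy as np
from PIL import Image

def percent_similarity(path_a, path_b, bins=64, threshold=0.5):
    a = np.asarray(Image.open(path_a).convert('RGB'), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert('RGB'), dtype=np.float32)
    # steps 1-2: per-channel histograms, summed bucket differences
    dist = sum(
        float(np.abs(
            np.histogram(a[..., c], bins=bins, range=(0, 255), density=True)[0] -
            np.histogram(b[..., c], bins=bins, range=(0, 255), density=True)[0]
        ).sum())
        for c in range(3))
    if dist > threshold:
        return 0.0  # step 3: colour content differs too much
    # step 4: pixel-by-pixel comparison, converted into a percentage
    return 100.0 * (1.0 - np.abs(a - b).mean() / 255.0)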
I don't know of any precise algorithms for finding the key points of an image. However, once you have found them for each image, you can do a pixel-by-pixel comparison around each of the key points.
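For what it's worth, one widely used keypoint detector (not mentioned above) is ORB from OpenCV. A minimal sketch that scores similarity by the fraction of descriptors that find a match (filenames are placeholders):

import cv2

img1 = cv2.imread('a.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('b.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps
# only mutual best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
print(100.0 * len(matches) / max(len(kp1), len(kp2), 1))  # crude percentage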