Is there a way we can detect a person's body in an image and crop out only the body part of the image?
If the person is naked, you could use skin color segmentation.
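If you want to try that, a minimal OpenCV sketch of skin-colour segmentation might look like the following; the HSV range and the file name "person.jpg" are illustrative guesses and would need tuning for your lighting and skin tones:

    import cv2
    import numpy as np

    # Rough skin-color segmentation in HSV space (the threshold range is a common heuristic, not universal).
    img = cv2.imread("person.jpg")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)

    # Clean the mask a little before using it to cut out the body region.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    skin_only = cv2.bitwise_and(img, img, mask=mask)
    cv2.imwrite("skin_only.png", skin_only)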
I am not sure if you really need to detect a person's body for your application. Maybe a "simple" background subtraction algorithm would also work in your case.
With background subtraction, you build a background model during the time there is no foreground (person) present in the image. Then you can use the background model to determine whether a pixel belongs to the foreground or to the background. Unfortunately, background subtraction algorithms have problems with moving backgrounds and sudden light changes...
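For what it's worth, here is a minimal OpenCV sketch of that idea; it assumes a webcam feed, and the parameters are just the library defaults spelled out:

    import cv2

    # Background subtraction: OpenCV maintains the background model and flags foreground pixels.
    cap = cv2.VideoCapture(0)  # webcam; could also be a video file path
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg_mask = subtractor.apply(frame)   # 255 = foreground, 127 = shadow, 0 = background
        cv2.imshow("foreground mask", fg_mask)
        if cv2.waitKey(30) & 0xFF == 27:    # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()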
Another idea would be to start with face detection à la Viola-Jones' Haar-like features...
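A quick sketch of that starting point, using OpenCV's bundled Haar cascade (the file name "person.jpg" is a placeholder); from each detected face you could then extrapolate a rough body region:

    import cv2

    # Viola-Jones-style face detection with OpenCV's pre-trained frontal-face Haar cascade.
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)

    img = cv2.imread("person.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Draw the detected face boxes; a body estimate could extend each box downwards.
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("faces.png", img)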
If you thought a user might only use your app a few times, uploading an image to Amazon's Mechanical Turk is probably your best bet.
Yes, sometimes, but it is very hard. If the body is in front of a blue or green or other uniform color, it's easy. If the body is in focus and the background isn't then it's quite possible. Otherwise it is hard-to-impossible.
Start here (and use Google yourself):
http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=833514
http://www.patentstorm.us/patents/5987154/claims.html
http://www.diffusion.ens.fr/index.php?res=conf&idconf=949
http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V09-4KST3G7-1&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&_docanchor=&view=c&_searchStrId=1034815145&_rerunOrigin=google&_acct=C000050221&_version=1&_urlVersion=0&_userid=10&md5=08a38011d31bc84e7e9432d70ebb6ab5
http://web.mit.edu/shivani/www/Papers/2004/pami04.pdf
http://www.cs.cmu.edu/~jch1/research/old/sword_allocation.pdf
http://www2.computer.org/portal/web/csdl/doi/10.1109/TPAMI.2004.108
http://portal.acm.org/citation.cfm?id=1136661
I want to process the captured video. I will be capturing video of handwriting/drawing on paper, but I do not want to show the hand or pen on the paper while live streaming via p5.js.
Can this be done using machine learning?
Any idea how to implement this?
If I understand you right, you want to detect where in the image the hand is and draw an overlay over that position, right?
If so, you can use YOLO to detect where the hand is.
There are some pre-trained networks that you can download; maybe they are good enough, or maybe you have to train your own just for hands.
There are also some libraries for YOLO and JS, e.g. https://github.com/ModelDepot/tfjs-yolo-tiny
You may not need to go the full ML object segmentation route.
If the paper's position and illumination are constant (or at least knowable), you could try a simple heuristic: compare the pixels in the current frame with a short history and keep the most constant pixel values. There might be some lag as new parts of your drawing 'become constant', so you could try modifying the accumulation, for example only committing a pixel once it has gone from white (paper) to black (ink).
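As an illustration of that heuristic (in Python/OpenCV rather than p5.js, and with a made-up history length), a per-pixel median over a short frame buffer keeps the paper and ink while the transient hand drops out:

    import collections
    import cv2
    import numpy as np

    HISTORY = 15               # frames of history to keep; tune for how fast the hand moves
    cap = cv2.VideoCapture(0)  # assumed webcam pointed at the paper
    frames = collections.deque(maxlen=HISTORY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(gray)

        # Per-pixel median over the buffer: strokes only appear once they have been
        # visible in more than half of the buffered frames (hence the lag mentioned above).
        stable = np.median(np.stack(frames), axis=0).astype(np.uint8)
        cv2.imshow("drawing without hand", stable)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break

    cap.release()
    cv2.destroyAllWindows()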
I found that a deep-learning-based method (e.g., [1]) is much more robust than a non-deep-learning-based method (e.g., [2], using OpenCV).
[1] https://www.remove.bg
[2] How do I remove the background from this kind of image?
In the OpenCV example, Canny is used to detect the edges. But this step can be very sensitive to the image. The contour detection may end up with wrong contours. It is also difficult to determine which contours should be kept.
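For reference, the kind of OpenCV pipeline being described usually looks roughly like this; the blur size and Canny thresholds are guesses, which is exactly the per-image sensitivity mentioned above:

    import cv2
    import numpy as np

    # Classic non-deep-learning pipeline: Canny edges -> contours -> keep the largest as "foreground".
    img = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)

    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        biggest = max(contours, key=cv2.contourArea)  # deciding which contours to keep is the hard part
        mask = np.zeros(gray.shape, dtype=np.uint8)
        cv2.drawContours(mask, [biggest], -1, 255, thickness=cv2.FILLED)
        cutout = cv2.bitwise_and(img, img, mask=mask)
        cv2.imwrite("cutout.png", cutout)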
How is a robust deep-learning method implemented? Is there any good example code? Thanks.
For that to work you need to use a U-Net. You can search for that on GitHub.
The U-Net transform is: I -> I.
The image space maps back to an image (of the same or similar size).
You need to have, say, 10,000 images with the background removed: people (including people with long hair), cats, cars, shoes, T-shirts, etc.
You then composite different backgrounds onto all of these images as the source, and the prediction target should be the images with the background removed.
You can also train a segmentation model, and once you have found the foreground you can remove the background.
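To make the I -> I idea concrete, here is a minimal PyTorch sketch of a U-Net-style network; the depth and channel counts are illustrative only, not taken from any particular repository:

    import torch
    import torch.nn as nn

    def block(in_ch, out_ch):
        # two 3x3 convolutions, as in the original U-Net
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        """Minimal U-Net: RGB image in -> single-channel foreground mask out."""
        def __init__(self):
            super().__init__()
            self.enc1 = block(3, 32)
            self.enc2 = block(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = block(64, 128)
            self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
            self.dec2 = block(128, 64)
            self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = block(64, 32)
            self.head = nn.Conv2d(32, 1, 1)  # per-pixel foreground logit

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.pool(e1))
            b = self.bottleneck(self.pool(e2))
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
            return self.head(d1)

    # Training pairs composited images (random backgrounds) with their known masks,
    # optimising e.g. nn.BCEWithLogitsLoss; the predicted mask is then used to cut out the subject.
    model = TinyUNet()
    mask_logits = model(torch.randn(1, 3, 256, 256))  # -> shape (1, 1, 256, 256)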
I have some problems with this assignment. Given an image of a street nameplate, like this one
I have to detect the nameplate and mark it on the image with a rectangle, obtaining something like this:
Nameplates can be rotated, scaled and in different lighting conditions. The procedure must be automatic.
What I have tried so far is to isolate the nameplate from the background. I've tried different thresholding methods, but the problem is that I have different images and no single method works for all of them, due to different lighting conditions and noise. I've thought about pre-processing the images to reduce noise and normalize the lighting, but, again, how do I choose pre-processing steps that work with every image in my dataset? And what about images that don't need pre-processing?
Another problem is that there might be other signs in the image with writing on them, and I have to ignore them. So I've thought I could isolate the nameplate by its blue outline, but I don't know if that can be done (or if it is convenient) with template matching, also considering that part of the outline could be cut off from the image.
So what I'm asking is: is there an automatic way to isolate/detect only the type of nameplate that has the blue outline, regardless of orientation, lighting conditions, shadows, noise in the image, etc.? What steps would you follow?
Thank You
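One way to make the blue-outline idea from the question above concrete: an HSV colour threshold tends to be less sensitive to lighting than a grey-level threshold. The hue/saturation range and the file name below are placeholder guesses:

    import cv2
    import numpy as np

    # Threshold the blue outline in HSV, then take the largest blue region's rotated bounding box.
    img = cv2.imread("street_sign.jpg")
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (100, 80, 50), (130, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((7, 7), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        plate = max(contours, key=cv2.contourArea)
        box = cv2.boxPoints(cv2.minAreaRect(plate)).astype(int)  # handles rotated plates
        cv2.drawContours(img, [box], -1, (0, 0, 255), 3)
        cv2.imwrite("detected.png", img)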
We have a system where people have a face shot taken with a DSLR camera. We need the people's images with a transparent background. What we're currently doing is taking the image into Photoshop, editing and cropping it, and removing the background with the Magic Eraser tool.
What I am looking for is a way to parse the image and automatically erase the semi-white background we have, along with the resizing and cropping. Is there some kind of library or code sample that does this without requiring manual intervention?
This is a really complex problem. As the answer below suggests, you'll need to do a fuzzy match on each pixel and set it to transparent, but you also need to check nearby pixels to make sure they are not close in color. A white tag on the shirt, white eyelids, hair, pale skin reflecting the flash: all are candidates to be removed by any greedy fuzzy logic.
Think about the Magic Wand tool in Photoshop. How good is it at detecting the edges of the person in the picture? Yeah, and that's the top standard of image editing software with thousands of engineering hours behind it.
This is not a feasible request for a Q&A format, and this is one of those things that humans just do better than machine. BUT, that doesn't mean it's not possible, and who knows, you might be the one to do it. Just don't do it in VB.NET please :)
Some rough C# code to get an idea of what you need to do:

    using System.Drawing;

    // Note: for the alpha change to stick, the bitmap needs to be 32bpp ARGB
    // (clone/convert it first if the source is a JPEG) and be saved as PNG.
    Bitmap faceShot = new Bitmap(filepath);
    for (int y = 0; y < faceShot.Height; y++)
    {
        for (int x = 0; x < faceShot.Width; x++)
        {
            Color c = faceShot.GetPixel(x, y);
            // This is where the magic happens: do any fuzzy match on the color that suits you.
            // Here anything between antique white (250, 235, 215) and pure white counts as background.
            if (c.R >= 250 && c.G >= 235 && c.B >= 215)
                faceShot.SetPixel(x, y, Color.FromArgb(0, c.R, c.G, c.B)); // alpha = 0 -> transparent
        }
    }
You could use this as a starting point for processing a single image:
http://www.java2s.com/Code/VB/2D/ProcessanImageinvertPixel.htm
Basically, if you have a constant background color (like the TV green-screen), it's just a matter of selecting pixels close to the color you are erasing and setting their Alpha level to 0 (transparent). Treating the RGB values like XYZ coordinates, you can do a 3d distance from your background color, and make everything within a certain threshold transparent.
As an improvement, you could also make everything within another threshold semi-transparent so the edges right around hair and stuff like that look softer and less harsh.
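A small Python sketch of both ideas (the 3D colour distance plus a feathered band), just to show the math; the background colour and thresholds are made up and would need tuning:

    import numpy as np
    from PIL import Image

    BACKGROUND = np.array([245, 240, 230], dtype=float)  # approximate off-white backdrop colour
    HARD, SOFT = 25.0, 60.0                              # fully transparent inside HARD, feathered up to SOFT

    img = Image.open("face_shot.jpg").convert("RGBA")
    px = np.asarray(img).astype(float)

    # Treat RGB as 3D coordinates and measure each pixel's distance to the background colour.
    dist = np.linalg.norm(px[..., :3] - BACKGROUND, axis=-1)

    # Alpha ramps from 0 (background) to 255 (subject) across the HARD..SOFT band for softer edges.
    px[..., 3] = np.clip((dist - HARD) / (SOFT - HARD), 0.0, 1.0) * 255
    Image.fromarray(px.astype(np.uint8)).save("face_shot_transparent.png")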
Alternatively, you could probably do the same exact thing with good results in Photoshop, as it should support batch processing.
Edit: thinking about it some more, you may want to use a green-screen-type background instead of an off-white one like you stated, as you may otherwise make the whites of people's eyes transparent. I would definitely try to batch it in Photoshop/GIMP/etc.
Before I start: I'm not a developer, so apologies in advance for any potentially daft questions. Our developer has just integrated a custom SoundCloud player for our website http://www.samplephonics.com/ (see it when you hover over the top 8 recent sample packs), but I want to change the light grey background colour around the waveform to white.
As far as I am aware, it's configured so that the light grey mask displays the shape of the waveform as a hole, which then has solid light grey, grey and dark grey images behind it to show the apparent colour of the waveform in the idle, loading and playing states.
Does anyone know how we can change this mask to white, or have any ideas I can send his way? He used this example as a starting point for creating the player: http://static.soundcloud.com/demos/soundcloud-custom-player/examples/sc-player-minimal.html
Thanks in advance :)
I've previously answered this here – https://stackoverflow.com/a/14731623/236135
Unless the library you want to use for waveform customisation is using HTML5 canvas, you won't be able to change the color of that chrome (so no, it's not possible with either the HTML5 Widget API or the Custom Player API).
In short, you'd have to use canvas, or manipulate the images on the server and retrieve them from the server.