Tagging People and Items in a Photo as an Overlay - social-media

Facebook or Instagram (possibly Flickr) used to have a highlight-and-tag type of functionality for uploaded images. I think it's an overlay type of thing, but I can't figure out what it's called or which company has it, if any. It's been a while since I have been on social media.
So let's say I have a picture of a dog in a park, with a bridge in the background and a large body of water under the bridge. In the image upload screen, after uploading, I would select or highlight the dog and set the label to "Wolf - German Shepherd, Age 5". I would then select the bridge and set its label to "Golden Gate Bridge, San Francisco, CA". I hit submit and it shows the pic on the page. The dog would be highlighted in a color like red and the bridge would be highlighted in blue. When your mouse hovers over a highlighted item, the appropriate label would appear.
See the attached images for a very crude and rough example of what I'm talking about. The idea is that in image 1, when you hover over the red bird with purple highlighting, you get what you see in image 2, with the tooltip identifying it as a red bird.
Image 1
https://ibb.co/YbGWz7z
Image 2
https://ibb.co/3RH6f9K
Does anyone remember the social media company that used to have this (or maybe still does), or know what this effect is called?

Related

PDF - Mass cropping of non-whitespace application

I have about 400 pdfs with a lot of dead space between the text and the page border.
Usually I use govert's PDF cropper to crop away all the whitespace, but this time the PDF background color is (darn!) yellow,
and no software I know of (and I've searched for quite a while) can crop non-whitespace
(well, except maybe pdfcrop.pl - a Perl script which supposedly can remove black space).
Does anybody know of software that can perform such a task?
The ideal app, I guess, would have the option to receive a specific color to remove,
like rgb(192,192,192).
Thanks in advance.
The reason this is so difficult is that PDF has no concept of paper color or background color. So what you're seeing is not a different background color, but an object (typically a rectangle) painted in that yellow background color.
Most cropping tools simply calculate the bounding box of all objects on the page and then crop away everything outside that bounding box. Of course that doesn't work for your file because the bounding box will include the background rectangle object.
There are potentially a number of directions you could take this:
1) If all pages need to be cropped by the same amount, you could attempt to do cropping that way (simply passing a rectangle to the cropping tool to do the actual cropping).
2) There are tools (callas pdfToolbox - full disclosure, I'm associated with this tool - or Enfocus PitStop...) that allow you to remove objects from a document, and this could be done by specifying your yellow color. This would let you modify the PDF file by removing the background object and then perform the cropping you want.
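The bounding-box approach can still work with a non-white background if "empty" is defined as a user-supplied colour rather than white. A minimal pure-Python sketch of that per-page step (the page is assumed to already be rendered to an RGB pixel grid by an external tool, e.g. a PDF renderer such as PyMuPDF; the tolerance and colours here are illustrative):

```python
def content_bbox(pixels, background, tol=16):
    # pixels: a rendered page as a 2-D grid of (r, g, b) tuples.
    # background: the colour to treat as "empty", e.g. the yellow fill.
    # Returns (left, top, right, bottom) of everything that is NOT
    # background, or None if the page is entirely background.
    xs, ys = [], []
    for y, row in enumerate(pixels):
        for x, px in enumerate(row):
            if any(abs(c - b) > tol for c, b in zip(px, background)):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs) + 1, max(ys) + 1)

# Tiny synthetic "page": yellow everywhere, one black glyph at (x=2, y=1).
YELLOW = (255, 255, 128)
page = [[YELLOW] * 5 for _ in range(4)]
page[1][2] = (0, 0, 0)
print(content_bbox(page, YELLOW))  # → (2, 1, 3, 2)
```

The resulting rectangle (scaled back from pixel to PDF coordinates) is what you would pass to the cropping tool for that page.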

How to make a PhysicsBody based on Alpha Values

Suppose there is a scene as follows:
There is a scene with the same size as the frame of the device. The scene has a red ball, which is able to move throughout the 'world'. This world is defined by black and white areas, where the ball is ONLY able to move in the area that is white. Here is a picture to help explain:
Parts of the black area can be erased, as if the user is drawing with white color over the scene. This means the area in which the ball can move is constantly changing. Now, how would one go about implementing a physicsBody for the edge between the white and black areas?
I tried redefining the physicsBody every time it is changed, but once the shape becomes complex enough, this isn't a viable solution at all. I tried creating a two-dimensional array of 'boxes' that are invisible and specify whether most of the area within each box is white or black, and if the ball touched a box that was black, it would be pushed back. However, this required heavy rendering and iterating over the array too much. Since my original array contained boxes a little bigger than a pixel, I tried making these boxes bigger to smooth the motion a little, but this eventually caused part of the ball to be stopped by white areas and appear to be inside the black area. This was undesired, since the user could feel invisible barriers that they seemed to be hitting.
I tried searching for other methods to implement this 'destructible terrain' type scene, but the solutions that I found and tried were using other game engines. To further clarify, I am using Objective-C and Apple's SpriteKit framework; and I am not looking for a detailed class full of code, but rather some pseudo-code or implementation ideas that would lead me to a solution.
Thank you.
If your deployment target is iOS 8, this may be what you're looking for...
+ bodyWithTexture:alphaThreshold:size:
Here's the description from Apple's documentation:
"Creates a physics body from the contents of a texture. Only texels that exceed a certain transparency value are included in the physics body."
where a texel is a texture element. You will need to convert the image to an SKTexture before creating the SKPhysicsBody.
I'm not sure if it will allow for a hole in the middle like your drawing. If not, I suspect you can connect two physics bodies, a left half and a right half, to form the hole.
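Conceptually, the alpha-threshold step just builds a mask of opaque-enough texels; here is a simplified Python sketch of that selection (not SpriteKit's actual implementation, which additionally traces the mask's outline into a body shape):

```python
def alpha_mask(texels, threshold=0.5):
    # texels: 2-D grid of alpha values in [0, 1] (one per texture element).
    # True marks texels opaque enough to be part of the physics body.
    return [[a > threshold for a in row] for row in texels]

# A 2x4 texture: transparent edges, opaque middle.
tex = [
    [0.0, 0.9, 0.9, 0.0],
    [0.0, 0.9, 0.9, 0.0],
]
print(alpha_mask(tex))  # → [[False, True, True, False], [False, True, True, False]]
```

In the erasing scenario above, you would regenerate the texture (and hence the body) after each stroke, so performance with complex shapes is still worth profiling.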

How to detect an image with the iPhone camera?

I am trying to detect an image of a chocolate wrapper with iPhone/iPad Camera.
I have already got the image frames from video using AVCapture,
but I can't figure out how to detect a specific part of the image.
Note: the specific part I am mentioning is a pink ribbon which will always be the same.
Can I match the image? If yes, how? Or should I get the bitmap pixel data and match the unique color codes (but those can vary depending on light conditions and the angle at which the image is taken)?
Try the following two APIs:
http://www.iqengines.com;
http://intopii.com
This solution is for the scenario where you are trying to detect a specific part of the image with a specific colour (this part will be called the reference image). We are going to match the colour codes of our reference image with the one taken from the camera in real time.
Step 1.
Create a media layer which will scan the particular part of the image taken from the camera. This narrows down the search area for colour codes and makes the process faster, as there are thousands of colour codes in the image.
Step 2.
Get the colour codes from the image we just scanned. Create an array of the colour codes.
Step 3.
Now repeat step 2 for 3 or 4 different light conditions, because some colour codes change under different types of light. Intersect the colour code arrays to get the common colour codes (these will be referred to from now on as the reference colour codes). These colour codes will be present in most cases.
Step 4.
Convert the image taken from the camera into a pixel buffer (basically a collection of pixels). Now convert the pixel buffer into colour codes and compare them with the reference colour codes. If all of them, or more than 50%, match, you have a potential match.
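Steps 2-4 can be sketched as follows (a simplified Python illustration; the tolerance value and the colour samples are made-up assumptions, not values from the original answer):

```python
def close(c1, c2, tol=24):
    # Two colour codes "match" if every channel is within the tolerance.
    return all(abs(a - b) <= tol for a, b in zip(c1, c2))

def match_fraction(frame_pixels, reference_colours, tol=24):
    # Fraction of reference colours found anywhere in the scanned region.
    found = sum(
        1 for ref in reference_colours
        if any(close(px, ref, tol) for px in frame_pixels)
    )
    return found / len(reference_colours)

refs = [(230, 60, 140), (120, 200, 60)]            # e.g. ribbon pink + a stray green
frame = [(10, 10, 10), (231, 58, 142), (250, 250, 250)]
print(match_fraction(frame, refs))  # → 0.5
```

With the 50% rule from step 4, this frame would count as a potential match.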
Note: I followed this process and it works like magic. It sounds more complicated than it is, and it is robust enough to get you up and working in a few hours. In my case the wrapper I was scanning was squeezed and tested under different lights (sun, a white CFL lamp, a yellow light from a bulb) and photographed from different angles and distances. Still, most of the reference colour codes were always there.
Good luck!

Photo editing app - are these functions possible?

User would open the app and it would ask if User wants to use the camera or use a saved picture.
If user selects camera, it would link with the camera view so that they can immediately take a picture.
Application would present the user with an outline of a human body so as to match up with a subject they wish to photograph.
Example – the view would be clear except for the outline of the body, and the user would be able to move the phone, or have the subject move, until they closely fit into the shape seen above. Once in position, the subject would be photographed, and the resulting photo WOULD NOT display the outline above; it is used for targeting and alignment only. Once the photo is taken, a set of clothing could be held up and the same outline could be used to align the clothes within the shape and photograph them, allowing the two photos to be merged so the subject could see what the clothing would look like on them.
Application would then eliminate all of the image outside the body image. In more specific terms, it would isolate the subject from the subject's surroundings. The result would be the subject alone in a blank field.
Now that the subject is isolated, the application would allow other images to be placed over the subject image. (Example: new clothes could be imaged in a similar manner as the subject and dragged onto the isolated image.)
OK, thanks for your answer, it really helps me a lot.
Now tell me if these functions are possible:
Application will allow for adding the isolated subject onto a background image stored on the iPhone. (Example: the subject is photographed in San Francisco but the background is replaced to make it appear the subject is in New York City.)
All functions described above should be available to be employed with stored images as well.
Image of subject should be able to be “morphed” to appear heavier or thinner.
Below are some additional desired features which are price sensitive – please provide an estimate based on adding some of these features. If a feature is listed below and it appears above please disregard.
Once photo is taken, app would ask user if they want to cut the head, replace the outfit or "try on". This would drive the subsequent actions.
If cut head is selected
User would circle the outline of the head and the app would cut out the image of the head and save to the side
User would then select another body from the picture, from a body template, or body from another saved/stored photo.
Once body is selected, user would touch the saved head and it will automatically fill on to the body.
If outfit is selected
User would touch key body parts such as the two shoulders, hands, waist, legs and feet from left to right. This would allow the app to know how best to superimpose the outfit.
User would then select an outfit from the many available templates
Once selected, user would hit a confirm type of button and this would put the new outfit on to the body.
The outfit should conform to the person's body in the photo, i.e. stretch sideways to make the person fatter, or make them thinner, shorter, longer, etc.
If app could be aware of spatial alignment, that would be ideal. i.e. if someone is turned to the side, the replaced outfit would be a side view.
Another feature should be for the user to manually alter the photo with the new outfit to make the person look taller, shorter, skinnier, fatter, etc.
App would also have a pose option for frontal and rear view photos
In the camera view when this option is selected, a standard body outline would show in the frame. The user would try to match the subject as closely as possible to the body outline shown in the frame. Once closely matched, the user would take the photo of the frontal and rear view of the subject.
At clothing stores, the user can take a front and back photo of the clothes and have it superimposed to the "posed" photo of the body they took.
User would open the app, and it would start as outlined in the first bullet point above
User would select the camera option
User would take the photo of the front and back of the clothing
Once the photo is taken, app would ask if they want to cut the head, replace the outfit, or try on.
User selects try on
App asks if it's a top, bottom, or both AND if user wants frontal/rear view of the clothing
User touches key points of the outfit as needed, i.e. shoulders, wrist, waist, etc.
Once key points touched and confirmed, app automatically fits the clothing into the saved/posed photo
App would need to conform the clothing photo to the user's body shape and type.
The app should work for saved pictures as well, i.e. stores have pictures online and the user is able to copy and save the clothing photo from the store's website to their iPhone and use this feature the same way.
The targeting part can be easily done in iOS 3.1: you'd simply set a view with a transparent body image as the custom camera overlay view (see the setCameraOverlayView docs). The body outline will only be visible in the viewfinder, not in the photograph. Once you have the image aligned, you can then separate the body image from the background using a PNG mask image with an alpha channel. This image would only show the parts that fall inside the body outline, and you can also have a decent feather on the border. I am just not sure about the quality of the resulting image; the aligning is bound to be pretty imprecise.
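The masking step can be sketched like this (a pure-Python illustration of alpha compositing against a blank field; a real app would do this with Core Graphics or similar rather than per-pixel Python):

```python
def apply_mask(photo, mask):
    # photo: 2-D grid of (r, g, b) pixels; mask: 2-D grid of alpha values
    # in [0, 1], taken from the PNG's alpha channel. Pixels outside the
    # body outline (alpha 0) are blended toward a blank white field;
    # fractional alpha values on the border give the feathered edge.
    BLANK = (255, 255, 255)
    return [
        [tuple(int(a * p + (1 - a) * b) for p, b in zip(px, BLANK))
         for px, a in zip(prow, mrow)]
        for prow, mrow in zip(photo, mask)
    ]

photo = [[(100, 100, 100), (200, 50, 50)]]  # one row: background, subject
mask = [[0.0, 1.0]]                         # outline covers only the second pixel
print(apply_mask(photo, mask))  # → [[(255, 255, 255), (200, 50, 50)]]
```

Because the mask is fixed, the quality depends entirely on how well the subject was aligned with the outline at capture time.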

changing color of monitor

I would like to program a little app that will change the colors of the screen. I'm not talking about the darkness; I want it to mimic what it would look like if, for example, you put on blue lenses or red lenses. So I would like to input the color, and I want the screen to look as though I put on lenses of that particular color. I actually need the program to semi-permanently change the user's experience on the computer: for the entire session that it is turned on, the screen should be changed to this color.
Transparent, click-through forms might help you out. It makes a nice see-through form that lets mouse clicks pass through it. The solution is in VS2003 format, but it upsizes to 2008 nicely. You could take that sample, rip the sliders off, get rid of the borders, and make it fullscreen + topmost. I don't know if it'll accurately simulate a lens though; someone more into optics can tell me if I'm wrong :-)
If the lenses you are trying to simulate are red, green or blue, simply zeroing the other two colour components of each pixel should work. A coloured filter lens works by passing only a certain wavelength of light, and absorbing the others. Zeroing the non-desired components of the colour should simulate this accurately, I believe.
To simulate cyan, magenta, or yellow lenses, zeroing the one other colour component (e.g. the red component in the case of cyan tinted glasses) should work.
I'm not sure how to generalise beyond these simple cases. I suspect converting to say HSV and filtering based on the hue might work.
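The channel-zeroing idea for a single pixel might look like this (a Python sketch; the set-of-channels representation is just one illustrative way to express which colours the simulated glass passes):

```python
def apply_lens(pixel, passes):
    # passes: the set of channels the simulated lens lets through,
    # e.g. {"r"} for red lenses, {"g", "b"} for cyan; others are zeroed.
    r, g, b = pixel
    return (r if "r" in passes else 0,
            g if "g" in passes else 0,
            b if "b" in passes else 0)

print(apply_lens((200, 150, 100), {"r"}))       # → (200, 0, 0)
print(apply_lens((200, 150, 100), {"g", "b"}))  # → (0, 150, 100)
```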
To change this for the entire system and use it in interactions with ordinary programs, you could change the colour profile for the display. For paletted/indexed-colour displays, this could be done by changing the colour look-up table (CLUT) for the display adapter. PowerStrip is a handy utility with versatile colour controls that should be able to achieve this quickly and easily on modern display adapters (e.g. by adjusting the red, green and blue response curves independently).
I came across Color Oracle and thought it might help. Here is the short description:
Color Oracle is a colorblindness simulator for Windows, Mac and Linux. It takes the guesswork out of designing for color blindness by showing you in real time what people with common color vision impairments will see.
Take a snapshot of the screen, convert each pixel into its grayscale value, then change the pixel value to a percentage of red. This will preserve the contrast throughout the image while also presenting a red tone.
To convert to grayscale in C#:
https://web.archive.org/web/20141230145627/http://bobpowell.net/grayscale.aspx
Then, to convert to a shade of red, zero out the values in the green and blue for each pixel.
(You can probably do the above in one shot, but this should get you started.)
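The grayscale-then-red step for one pixel might look like this (a Python sketch using the common BT.601 luma weights; a real implementation would loop over the screen bitmap, as in the linked article):

```python
def red_tint(pixel):
    # Luminance-weighted grayscale (BT.601 weights), kept in the red
    # channel only: contrast is preserved while the tone reads as red.
    r, g, b = pixel
    gray = int(0.299 * r + 0.587 * g + 0.114 * b)
    return (gray, 0, 0)

print(red_tint((100, 150, 200)))  # → (140, 0, 0)
```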