Object Recognition Programmatically? [closed] - object-detection

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Inspired by a recent Kickstarter campaign: http://www.kickstarter.com/projects/dominikmazur/camfind-a-mobile-visual-search-app?ref=category
The app uses the mobile camera to take a picture and identify virtually any object. Snapping a photo of a movie poster will recognize the movie and pull up results on the web for you about it, taking a picture of a product will show you websites that product is available for sale on.
My question is: is this realistic? I find it very intriguing, but is object detection really that simple? I'm also interested in feedback on resources to help someone get started learning about this topic.

Computer vision and pattern recognition are not easy at all; they form an entire field of artificial intelligence. The high-level ideas, however, are relatively straightforward to understand. There is NO WAY they are doing all of this on the client. Phones just aren't fast enough, and don't have anywhere near enough storage space.
What they are most likely doing is sending the image to their servers, computing some kind of nearest-neighbour approximation on it, and running the result through a decision-tree look-up in a massive database of images, each of which has some hash. This will find a close match to an image they already have (assuming they have a LOT of images in their database), even if only part of the image matches. Then, using the hash, they look up other information about that image to send back to the device.
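To make the idea concrete, here is a minimal pure-Python sketch of a "perceptual hash" in the spirit described above. This is only an illustration of the concept; a real service would use far more sophisticated features and indexing, and all the names here are invented.

```python
# Hypothetical sketch: an "average hash" reduces an image to a short
# bit string, so near-duplicate images produce near-identical hashes.
# A server can then find close matches by comparing hash distances.

def average_hash(pixels):
    """pixels: a small 2D list of grayscale values (e.g. a downscaled 8x8 image)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    # One bit per pixel: 1 if brighter than the mean, else 0.
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming(a, b):
    """Number of differing bits; a small distance suggests the same image."""
    return sum(x != y for x, y in zip(a, b))

# Two slightly different "images" still hash to the same bit string.
img1 = [[10, 200], [220, 30]]
img2 = [[12, 198], [225, 28]]
print(hamming(average_hash(img1), average_hash(img2)))  # 0 for this pair
```

The point is that exact byte equality is useless for photos; a hash designed so that similar images collide is what makes a database look-up feasible.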
Hope that helps!

Related

Machine Learning & Image Recognition: How to start? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I've been a full-stack web developer for 15 years and would now like to get involved in machine learning. I already have a specific scenario for this: we have a database with several million products, each with one product image, plus a database of about 5000 terms.
A product image is linked to several terms (usually 3-20), and each link carries a weighting (1-100%). The terms are always visual in nature; that is, they describe a visually recognizable feature of the image.
The goal is to upload a new image (thematically related, of course) and get back a list of likely terms (with probabilities), based on the already-classified images.
Do you have any advice on how best to start here? Is there a framework that comes close to this scenario? Is TensorFlow relevant for this task? What new language should I learn?
Thank you very much!
TensorFlow can be used, but it's pretty "low-level". If you're just starting out, you might be better off using Keras with a TensorFlow backend, as it's more user-friendly.
Regarding languages, you will probably use Python, so if you don't know it already you should get started. In my opinion you can also learn it on the fly while practicing, since you're already a developer.
As for tutorials, you will probably have to pick out the relevant bits of many different ones. You could get started with something like this:
https://www.pyimagesearch.com/2018/05/07/multi-label-classification-with-keras/
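One preparatory step worth understanding before touching any framework: the weighted term links described in the question have to become target vectors for training. Here is a hypothetical pure-Python sketch (the function and data names are invented) of encoding a product's weighted terms as a soft multi-label target, which a sigmoid-output network built with Keras could be trained against.

```python
# Hypothetical sketch: turn a product's weighted term links (1-100%)
# into a soft multi-label target vector. A network with one sigmoid
# output per term (e.g. built in Keras) can be trained on such vectors
# and will predict per-term probabilities for a new image.

def make_target(term_weights, vocabulary):
    """term_weights: {term: weight_percent}; vocabulary: ordered list of all terms."""
    return [term_weights.get(term, 0) / 100.0 for term in vocabulary]

vocab = ['red', 'striped', 'wooden', 'round']   # in reality ~5000 terms
product = {'red': 80, 'round': 25}              # this product's weighted links
print(make_target(product, vocab))  # [0.8, 0.0, 0.0, 0.25]
```

The key design point is multi-label (sigmoid per term) rather than multi-class (one softmax): an image can legitimately carry many terms at once, which matches the 3-20 links per image in the question.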

How to prove that images were stolen? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
How can I prove that images were stolen from a website?
Is there any way to check since when another website has had the same images? I have no access to the server.
Thanks for any ideas!
UPDATE:
No, I'm not the one who forgot the watermark. An old client of mine just came to me with this question. I actually found a Google-cached page we can use, but I'm still interested in whether any other solution exists. For instance, does any image format contain a date attribute?
If you're using a Unix-based operating system, you might have access to cURL. Try running
curl --remote-time --remote-name http://url-to-your-image/
and see if you get a timestamp that is different from the exact time you downloaded the file. Not all servers respond with the time, but it might be worth an attempt.
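The curl flags above work because many servers report a file's modification time in the Last-Modified response header. As a sketch, the same check can be done from Python with only the standard library (the URL below is a placeholder, and whether the header is present depends entirely on the server):

```python
# Check a server-reported file modification time via the Last-Modified
# HTTP header. Stdlib only; the URL is a placeholder, and many servers
# simply don't send this header at all.
from email.utils import parsedate_to_datetime
from urllib.request import urlopen

def parse_last_modified(header_value):
    """Parse an HTTP-date header value into a datetime, or None if absent."""
    return parsedate_to_datetime(header_value) if header_value else None

def last_modified(url):
    """Fetch a URL and return the server-reported modification time."""
    with urlopen(url) as resp:
        return parse_last_modified(resp.headers.get('Last-Modified'))

# Example (requires network):
# print(last_modified('http://example.com/some-image.jpg'))
```

Keep in mind this only shows when the file changed on that server, not who created it first, so it is supporting evidence at best.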
But generally, if it's your original work, then you should have a copy of the image with higher resolution and/or lower compression rate, right? That should be enough to prove which of the images is the stolen one. Intellectual property rights on the Internet is a mess, though, for several reasons. But even if you can't take legal actions, you might have better luck convincing an administrator to remove the content.

How does Google store the index? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Lately I have been reading about web crawling, indexing and serving. I have found some information on the Google Web Masters Tool - Google Basics about the process that Google does to crawl the Web and serve the searches.
What I am wondering is: how do they store all those indexes? I mean, that's a lot to store, right? How do they do it?
Thanks
I'm answering my own question because I found some interesting material about the Google index:
In the Google Webmasters YouTube channel, Matt Cutts gives some references about the architecture behind the Google index: Google Webmaster YouTube Channel
One of those references, and in my view well worth reading, is this one: The Anatomy of a Large-Scale Hypertextual Web Search Engine
This helped me understand it better, and I hope it helps you too!
They use a variety of data stores depending on the type of information. Generally they don't use SQL, because it has too much overhead and doesn't lend itself to large-scale distribution of information.
Google actually developed their own data store for large, read-mostly applications such as Google Earth and the search engine's cache. It distributes information over a very large number of computers, with each piece of information stored on three or four different machines. This lets them use cheap hardware: if one computer fails, the others immediately begin restoring the data it held to the appropriate number of copies.
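The replication idea can be sketched in a few lines. This is a toy illustration with invented names, not how Google's systems actually place data; the real systems (GFS, Bigtable) handle placement, consistency, and recovery in far more sophisticated ways.

```python
# Toy sketch of replicated placement: each key is assigned to several
# nodes, so losing any single node never loses the data. If a holder
# dies, the surviving copies are used to restore the replica count.

REPLICAS = 3

def place(key, nodes, replicas=REPLICAS):
    """Pick which nodes hold a key: a simple hash-based choice of
    `replicas` consecutive nodes (assumes replicas <= len(nodes))."""
    start = hash(key) % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

nodes = ['node-a', 'node-b', 'node-c', 'node-d', 'node-e']
holders = place('index-shard-42', nodes)
print(holders)  # three distinct nodes out of the five
```

The payoff described in the answer falls out of this scheme: hardware can be cheap and unreliable, because correctness only requires that not all replicas of a key fail at once before re-replication completes.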

How to make real natural photos less-real for games? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I am a web developer making a 2D game for the first time. I am not good at graphic design, so I am using real raster photos as graphics for my game, like this one:
http://www.cgtextures.com/texview.php?id=23142
But the overall look of the game is not good, because the graphics look very 'real' and unprofessional. How easily can I convert the photos to be more like this:
http://fc06.deviantart.net/fs44/f/2009/076/4/3/VW_DragBus_Destroyer_Carbon_by_M2M_design.jpg
I know you're laughing now, because it's obviously not easy to turn a real photo into such a professional, polished, brilliant vector image, but I need something close. Can I use some combination of Photoshop filters and tricks to accomplish this? Could I convert the photos to vector graphics, then back to raster, and add some effects?
Thanks.
The only thing I can think of is to run a filter over the image to reduce the detail; this amounts to smoothing the image with quite a high value.
Consider that when tidying up a photo taken at a high ISO (say 1600, which creates a lot of noise in the image), a smoothing value of around 50% reduces the noise but leaves the detail intact.
Here you would want to go well beyond that, say 400%, which reduces the image to one that looks almost painted.
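That heavy smoothing is essentially a repeated box blur: each pixel is replaced by the average of its neighbourhood, washing out fine detail. A tiny pure-Python sketch on a grayscale grid (real work would of course be done in Photoshop or an image library, and the pass count here is an arbitrary stand-in for the "400%" figure):

```python
# Heavy smoothing as a repeated 3x3 box blur on a grayscale grid.
# Each pass averages every pixel with its neighbours; many passes
# produce the flat, "painted" look described above.

def box_blur(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Average over the 3x3 neighbourhood, clipped at the edges.
            neigh = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(neigh) // len(neigh)
    return out

def smooth(img, passes=4):
    """Several passes approximate an over-smoothed, painterly result."""
    for _ in range(passes):
        img = box_blur(img)
    return img

noisy = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]
print(smooth(noisy, passes=4))  # values converge toward a flat average
```

A single pass is the ISO-cleanup case (noise gone, detail mostly kept); stacking passes is the "go overboard" case where detail is deliberately destroyed.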

Direct screen pixel/framebuffer access [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I'd like to try and create a program playing a game. I.e. "a bot".
I want to be able to directly access the pixels on the screen. I.e. have my program "see" a game and then "make a move"(or at least draw a picture of what move it would make).
Both Windows and Linux advice is appreciated, though my guess is that it should be easier to do on Linux.
I'm guessing this could be done with some X/Gnome call?
I'm not afraid of C, even complex samples are welcome.
SDL is a cross-platform library that allows you to directly access framebuffer pixels. You can learn about accessing the pixels on screen through the pixel access example on the documentation wiki.
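On Linux specifically, the raw framebuffer can sometimes be read directly from /dev/fb0, though availability depends on the driver, and desktops running under X or Wayland often don't expose it usefully. The pixel arithmetic itself is simple: a flat byte array with a fixed stride per row and a fixed number of bytes per pixel. A sketch, demonstrated on an in-memory buffer:

```python
# Reading a pixel out of a raw framebuffer dump: the buffer is a flat
# byte array; each row is `stride` bytes, each pixel `bytes_per_pixel`.
# (On Linux the real buffer can sometimes be read from /dev/fb0, but
# that depends on the driver and display server.)

def pixel_at(buf, x, y, stride, bytes_per_pixel):
    """Return the raw bytes of pixel (x, y) from a framebuffer dump."""
    offset = y * stride + x * bytes_per_pixel
    return buf[offset:offset + bytes_per_pixel]

# In-memory stand-in for a 2x2, 32-bit framebuffer (4 bytes per pixel):
buf = bytes([1, 2, 3, 255,   4, 5, 6, 255,
             7, 8, 9, 255,  10, 11, 12, 255])
print(pixel_at(buf, 1, 1, stride=8, bytes_per_pixel=4))

# For real use (root usually required, and layout varies by hardware):
# with open('/dev/fb0', 'rb') as fb:
#     buf = fb.read()
```

The stride and bytes-per-pixel values must come from the actual display mode; assuming them, as done here, only works for a synthetic buffer.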
Generally speaking, bots don't see the game graphics but see the underlying data structure instead, unless you are trying to do something related to computer vision.