I am a web developer making a 2D game for the first time. I am not good at graphic design, so I am using real raster photos as my game's graphics, like this one:
http://www.cgtextures.com/texview.php?id=23142
But the overall look of the game is not good because the graphics look too 'real' and unpolished. How easily can I convert the photos to be more like this:
http://fc06.deviantart.net/fs44/f/2009/076/4/3/VW_DragBus_Destroyer_Carbon_by_M2M_design.jpg
I know you are laughing now, since it is clearly not easy to turn a real photo into such a professional, polished vector image, but I need something close. Can I use some combination of Photoshop filters and tricks to accomplish this? Can I convert the photos to vector graphics, then rasterize them again and add some effects?
Thanks.
The only thing I can think of is to run a filter over the image to reduce its detail; this amounts to smoothing the image with quite a high value.
For comparison: when tidying up a photo taken at a high ISO (say 1600, which creates a lot of noise in the image), a smoothing value of around 50% reduces the noise while leaving the detail intact.
Here you would want to go well beyond that, say 400%, which reduces the image to one that looks almost painted.
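The answer doesn't name a tool, so here is a minimal sketch of that idea in Python with Pillow; the blur radius, posterize depth, and file names are all assumptions you would tune by eye:

```python
from PIL import Image, ImageFilter, ImageOps

# Load the source photo (hypothetical path).
img = Image.open("texture_photo.jpg").convert("RGB")

# Go "overboard" on smoothing: a large Gaussian blur radius wipes out
# fine photographic detail, much like a very high smoothing percentage.
smoothed = img.filter(ImageFilter.GaussianBlur(radius=6))

# Posterize to a few bits per channel to flatten the remaining tones,
# pushing the result toward a painted / vector-art look.
stylized = ImageOps.posterize(smoothed, bits=3)

stylized.save("texture_stylized.png")
```

Posterizing after the blur is what flattens the smoothed tones into the large, even color areas typical of vector-style art.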
I have to make a website that renders multiple images at once, e.g. a shirt made up of several parts such as a collar, sleeves, etc. I have to combine all of these part images and show an image of the whole shirt.
How could I do this as fast as possible to give the user a better experience?
I recommend using an npm module called sharp. With it you can resize your images to smaller dimensions, which makes them load faster. You can also serve different image sizes based on the client device's resolution, which further reduces load time.
sharp website: https://github.com/lovell/sharp
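sharp is Node.js-only; if your backend happens to be Python instead, Pillow can do the equivalent composite-and-resize server-side. A minimal sketch with hypothetical layer file names, assuming all layers are transparent PNGs on the same canvas size:

```python
from PIL import Image

# Hypothetical transparent PNG layers for each shirt part,
# all assumed to share the same canvas size.
LAYER_PATHS = ["shirt_body.png", "collar.png", "sleeves.png"]

def compose_shirt(paths, out_size=(600, 600)):
    # Start from the bottom layer and composite the rest on top,
    # using each layer's alpha channel as its mask.
    base = Image.open(paths[0]).convert("RGBA")
    for path in paths[1:]:
        base.alpha_composite(Image.open(path).convert("RGBA"))
    # Resize once at the end so the client downloads one small image.
    return base.resize(out_size, Image.LANCZOS)

compose_shirt(LAYER_PATHS).save("whole_shirt.png")
```

Compositing on the server means the browser fetches one image instead of making one request per shirt part.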
You can also load images from low quality to high quality using progressive JPEGs: https://www.hostinger.in/tutorials/website/improving-website-performance-using-progressive-jpeg-images
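For example, re-saving as a progressive JPEG is a one-liner in Python with Pillow (file names and quality setting are illustrative):

```python
from PIL import Image

img = Image.open("shirt.jpg")
# progressive=True lets browsers render a coarse full-size preview
# first, then refine it as the remaining bytes arrive.
img.save("shirt_progressive.jpg", "JPEG", quality=80, optimize=True, progressive=True)
```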
If you want your images to load faster, you can also apply image optimization:
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/automating-image-optimization/
Is there an efficient way to calculate the area covered by trees in a Google Earth image using machine learning? We can retrain an Inception model with TensorFlow to identify whether there is a tree or not, but I can't think of a way to find out how many trees there are or how much area they cover. Is there anything we can do?
I use Python and TensorFlow for machine learning.
P.S.: I don't know much about machine learning, but I can follow steps.
In computer vision there are different ways of finding objects in images:
image classification tells you whether an image is something (e.g. this image is a cat)
object detection tells you where something is in an image (e.g. it draws a box around the cat)
image segmentation extracts the exact contour of something in an image (e.g. the precise contour of the cat, not just a box containing it)
You need a neural network capable of doing the second or third task on aerial images of trees.
Then simply sum all the trees' areas and compare the result with the image size (see the sketch below).
Here you can find a TensorFlow framework for doing object detection: https://github.com/tensorflow/models/tree/master/research/object_detection
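The area step itself needs no machine learning. A hedged sketch, assuming you already have either a binary tree mask (from segmentation) or per-tree boxes (from detection) as plain NumPy/Python data:

```python
import numpy as np

def coverage_from_mask(mask):
    """Fraction of the image covered by trees, given a binary
    segmentation mask (H x W array of 0s and 1s)."""
    return mask.sum() / mask.size

def coverage_from_boxes(boxes, image_w, image_h):
    """Rough coverage from detection boxes (x1, y1, x2, y2 in pixels).
    Overlapping boxes are double-counted, so this is an upper bound."""
    area = sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in boxes)
    return area / (image_w * image_h)

# Example: three detected trees in a 1000 x 1000 pixel tile.
boxes = [(10, 20, 110, 140), (300, 300, 420, 450), (700, 650, 820, 790)]
print(len(boxes), "trees, coverage fraction:", coverage_from_boxes(boxes, 1000, 1000))
```

To turn the fraction into real-world area, multiply it by the ground area the image covers, which depends on the imagery's meters-per-pixel resolution.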
Is it possible to convert an expanded blend to a simple, lightweight vector shape, without all the in-between paths of all n steps? It seems like a complicated object to work with, since the computer has to recalculate all the changes made to the inner paths.
Go to Object > Blend > Expand. Then, with all of the steps selected, go to Pathfinder and merge all the shapes together.
I don't believe there is a way to convert it back to a single vector shape, as Illustrator would have to translate the blend into either a linear gradient, a radial gradient, or a gradient mesh.
The beauty of blends is that they aren't bound by the rules that constrain the gradient and gradient mesh tools, so you can get some really awesome color blends across complicated shapes.
Inspired by a recent Kickstarter campaign: http://www.kickstarter.com/projects/dominikmazur/camfind-a-mobile-visual-search-app?ref=category
The app uses the mobile camera to take a picture and identify virtually any object. Snap a photo of a movie poster and it recognizes the movie and pulls up web results about it; take a picture of a product and it shows you websites where that product is for sale.
My question is: is this realistic? I find it very intriguing, but is object detection really that simple? I'm also interested in resources to help someone get started learning about this topic.
Computer vision and pattern recognition are not easy at all; they form an entire field of artificial intelligence. The high-level idea is relatively straightforward to understand, though. There is no way they are doing all of this on the client: phones just aren't fast enough, and they don't have anywhere near enough storage space.
What they are most likely doing is sending the image to their servers, computing some kind of compact fingerprint of it, and running an approximate nearest-neighbour look-up (e.g. via a decision-tree index) against a massive database of images, each stored with such a hash. This gives a close match to an image they already have (assuming they have a LOT of images in their database), even if only part of the image matches. Then, using the hash, they look up other information about that image and send it back to the device.
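To make the fingerprint idea concrete, here is a toy "average hash" in Python with Pillow; this is an illustrative guess at the kind of technique involved, not CamFind's actual pipeline:

```python
from PIL import Image

def average_hash(path, hash_size=8):
    # Shrink to a tiny grayscale thumbnail; this throws away detail
    # so that visually similar images produce similar bits.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    # One bit per pixel: brighter than average or not.
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a, b):
    # Few differing bits => probably the same underlying image.
    return bin(a ^ b).count("1")

# Hypothetical usage: find the stored fingerprint closest to a query photo.
# query = average_hash("poster_photo.jpg")
# best = min(database, key=lambda item: hamming(query, item.hash))
```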
Hope that helps!
I am using Box2d for a topdown game. The "ground" is a series of tiles, where each tile is a static body with a sensor shape. Can I make friction take effect for this, even though the objects aren't really "colliding" with the ground?
If Box2d won't let me do this, I considered trying to implement my own by detecting what force is currently moving the object, and applying a force opposite to it, but I'm not quite sure how to detect that force.
Another way of doing this is to set linearDamping on your body. You could set this differently depending on the tile your object is on.
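If you're using pybox2d (an assumption; other bindings spell this similarly), that is a single assignment whenever the object crosses onto a new tile. The tile names and damping values here are hypothetical:

```python
# Hypothetical per-tile drag table; higher damping = stronger "friction".
TILE_DAMPING = {"ice": 0.1, "grass": 2.0, "mud": 6.0}

def on_tile_changed(body, tile_kind):
    # Box2D applies linear damping automatically every step,
    # scaling the body's velocity down over time.
    body.linearDamping = TILE_DAMPING[tile_kind]
```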
Friction is directed against the velocity of the body, regardless of other forces.
If setting linear damping isn't enough, or relying on a property of the b2Body is inappropriate, you can easily compute nonlinear friction forces yourself and call ApplyLinearImpulse() or ApplyForce() every frame.
Query the velocity with b2Body.GetLinearVelocity(), scale the result (nonlinearly, as desired) to get the force, and invert the sign of both components.
If you decide to stop the body (when it is slow enough to stick), SetLinearVelocity() does the trick without computations.
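A sketch of those steps in pybox2d; the friction coefficient, the speed-squared scaling, and the stick threshold are all hypothetical choices:

```python
from Box2D import b2Vec2

STICK_SPEED = 0.05  # below this speed, treat the body as stuck (tunable)

def apply_tile_friction(body, friction_coeff):
    vel = body.linearVelocity          # equivalent of b2Body.GetLinearVelocity()
    speed = vel.length
    if speed < STICK_SPEED:
        # Slow enough to stick: stop the body outright, no computation needed.
        body.linearVelocity = b2Vec2(0, 0)
        return
    # Friction opposes velocity. Scaling by speed makes the force
    # nonlinear (proportional to speed squared), as one possibility.
    force = vel * (-friction_coeff * speed)
    body.ApplyForce(force, body.worldCenter, True)
```

Call this once per body each frame, before stepping the world.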
ApplyImpulse() instead of ApplyForce() works much better.