Friction in Box2D [closed]

I am using Box2D for a top-down game. The "ground" is a series of tiles, where each tile is a static body with a sensor shape. Can I make friction take effect here, even though the objects aren't really "colliding" with the ground?
If Box2D won't let me do this, I considered implementing my own friction by detecting what force is currently moving the object and applying an opposite force, but I'm not quite sure how to detect that force.

Another way of doing this is to set linearDamping on your body. You could set this differently depending on the tile your object is on.
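For example, here is a minimal sketch of swapping the damping per tile, assuming the Box2D 2.3-style C++ API; the TileInfo struct, its damping values, and storing it in the tile fixture's user data are all illustrative choices, not part of Box2D:

    #include <Box2D/Box2D.h>

    // Hypothetical per-tile data, stored in each tile fixture's user data.
    struct TileInfo {
        float damping; // e.g. 0.5f for ice, 8.0f for mud
    };

    class TileDampingListener : public b2ContactListener {
    public:
        void BeginContact(b2Contact* contact) override {
            b2Fixture* a = contact->GetFixtureA();
            b2Fixture* b = contact->GetFixtureB();
            // Sensors still fire Begin/EndContact even though they
            // produce no collision response.
            b2Fixture* tile   = a->IsSensor() ? a : b;
            b2Fixture* object = a->IsSensor() ? b : a;
            if (!tile->IsSensor() || object->IsSensor())
                return;
            TileInfo* info = static_cast<TileInfo*>(tile->GetUserData());
            if (info)
                object->GetBody()->SetLinearDamping(info->damping);
        }
    };

    // Usage: world.SetContactListener(&listener); you may also want to
    // restore a default damping in EndContact if an object can leave a
    // tile without entering another one.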

Friction is directed against the velocity of the body, regardless of other forces.
If setting linear damping isn't enough, or relying on a property of the b2Body is inappropriate, you can easily compute nonlinear friction forces yourself and call ApplyLinearImpulse() or ApplyForce() every frame.
Query the velocity with b2Body::GetLinearVelocity(), scale the result (nonlinearly, if desired) to get the force, and invert the sign of both components.
If you decide to stop the body (when it is slow enough to stick), SetLinearVelocity() does the trick without computations.
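A minimal sketch of that per-frame update, again assuming the Box2D 2.3-style C++ API; frictionCoeff (treated here as a deceleration) and the 0.1 stopping threshold are illustrative values:

    #include <Box2D/Box2D.h>

    void ApplyTopDownFriction(b2Body* body, float frictionCoeff, float dt)
    {
        b2Vec2 v = body->GetLinearVelocity();
        float speed = v.Length();
        if (speed < 0.1f)
        {
            // Slow enough to stick: just stop the body outright.
            body->SetLinearVelocity(b2Vec2(0.0f, 0.0f));
            return;
        }
        // Impulse opposing the velocity. For a nonlinear law, scale
        // frictionCoeff by e.g. speed before computing the magnitude.
        float maxImpulse = body->GetMass() * speed; // stops the body dead
        float mag = b2Min(frictionCoeff * body->GetMass() * dt, maxImpulse);
        b2Vec2 impulse = (-mag / speed) * v;
        body->ApplyLinearImpulse(impulse, body->GetWorldCenter(), true);
    }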

Applying an impulse (ApplyImpulse(), renamed ApplyLinearImpulse() in current Box2D versions) instead of ApplyForce() works much better.


Any way to manually make a variable more important in a machine learning model? [closed]

Sometimes you know, from experience or expert knowledge, that some variable will play a key role in the model. Is there a way to manually make that variable count more, so that training speeds up and the method can combine some human knowledge/wisdom/intelligence?
I still think machine learning combined with human knowledge is the strongest weapon we have now.
This might work by scaling your input data accordingly.
On the other hand, the strength of a neural network is to figure out from the data which features are in fact important and which combinations with other features are important.
You might argue that you'll decrease training time. Somebody else might argue that you're biasing your training in such a way that it might even take more time.
Anyway, if you want to do this, then assuming a fully connected layer, you could initialize the weights of the input feature you found important to larger values.
Another way could be to first pretrain the model on a training loss that has your feature as an output, then keep the weights and switch to the actual loss. I have never tried this, but it could work.
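For instance, a minimal sketch in plain C++ of the weight-initialization idea; the boost factor, the important-feature index, and the init distribution are all arbitrary choices for illustration:

    #include <random>
    #include <vector>

    // Initialize a fully connected layer's weights, scaling up the column
    // that corresponds to the feature believed to be important.
    std::vector<std::vector<float>> InitWeights(int numInputs, int numHidden,
                                                int importantIdx, float boost)
    {
        std::mt19937 rng(42);
        std::normal_distribution<float> dist(0.0f, 0.1f);
        std::vector<std::vector<float>> w(numHidden,
                                          std::vector<float>(numInputs));
        for (int h = 0; h < numHidden; ++h)
            for (int i = 0; i < numInputs; ++i)
                w[h][i] = dist(rng) * (i == importantIdx ? boost : 1.0f);
        return w;
    }

    // Usage: auto w = InitWeights(64, 128, /*importantIdx=*/3, /*boost=*/3.0f);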

Convert expanded blend to one simple vector shape [closed]

Is it possible to convert an expanded blend to a simple, lightweight vector shape, without all the in-between paths of all n steps? It seems like a complicated object to work with, since the computer has to recalculate all the changes made to the inner paths.
Go to Object > Blend > Expand. Then, with all of the steps selected, go to Pathfinder and merge all the shapes together.
I don't believe there is a way to convert it back to a single vector shape as it would have to be able to translate the blend into either a linear gradient, radial gradient, or gradient mesh.
The beauty of blends is that they aren't bound by the same rules that allow the gradient or gradient mesh tools to work, and you can get some really awesome color blends across complicated shapes.

Object Recognition Programmatically? [closed]

Inspired by a recent Kickstarter campaign: http://www.kickstarter.com/projects/dominikmazur/camfind-a-mobile-visual-search-app?ref=category
The app uses the mobile camera to take a picture and identify virtually any object. Snapping a photo of a movie poster will recognize the movie and pull up results on the web for you about it, taking a picture of a product will show you websites that product is available for sale on.
My question is, is this realistic? I find it very intriguing, but is object detection really that simple? I'm interested in some feedback regarding resources to help someone get started in learning about this topic.
Computer vision and pattern recognition are not easy at all; they make up an entire field related to artificial intelligence. The high-level idea, however, is relatively straightforward to understand. There is NO WAY they are doing this all on the client. The phones just aren't fast enough, and do not have even close to enough storage space.
What they are most likely doing is sending the image to their servers, then using some kind of nearest-neighbour approximation on the image and running the result through a decision-tree lookup in a massive database of images that all have some hash. This will give a close match to an image they have (assuming they have A LOT of images in their database), even if only part of the image matches. Then, using the hash, they look up some other information about that image to send to the device.
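To make the hashing idea concrete, here is a minimal sketch of one simple scheme, an "average hash" over an 8x8 grayscale thumbnail compared by Hamming distance. Real systems are far more sophisticated, and nothing here is taken from the actual app:

    #include <bitset>
    #include <cstdint>

    // 'thumb' is assumed to be the image already scaled down to 8x8
    // grayscale, one byte per pixel.
    uint64_t AverageHash(const uint8_t thumb[64])
    {
        unsigned sum = 0;
        for (int i = 0; i < 64; ++i) sum += thumb[i];
        const uint8_t mean = static_cast<uint8_t>(sum / 64);
        uint64_t hash = 0;
        for (int i = 0; i < 64; ++i)
            if (thumb[i] > mean) hash |= static_cast<uint64_t>(1) << i;
        return hash;
    }

    // Fewer differing bits = more similar images; the server would compare
    // the query hash against a huge precomputed table.
    int HammingDistance(uint64_t a, uint64_t b)
    {
        return static_cast<int>(std::bitset<64>(a ^ b).count());
    }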
Hope that helps!

How to make real natural photos less-real for games? [closed]

I am a web developer trying to make a 2D game for the first time. I am not good at graphic design, so I am using real natural photos as raster graphics for my game, like this one:
http://www.cgtextures.com/texview.php?id=23142
But the overall look of the game is not good, because the graphics look very 'real' and unprofessional. How easily can I convert the photos to be more like this:
http://fc06.deviantart.net/fs44/f/2009/076/4/3/VW_DragBus_Destroyer_Carbon_by_M2M_design.jpg
I know you are laughing now, since it is clearly not easy to convert a real photo into such a professional, polished, brilliant vector one, but I need something close. Can I use some combination of Photoshop filters and tricks to accomplish this? Can I convert the photos to vector graphics, then convert them back to raster graphics and add some effects?
Thanks.
The only thing I can think of is to run a filter over the image to reduce the detail; this amounts to smoothing the image with quite a high value.
Consider that when tidying up a photo taken at a high ISO value (say 1600, which creates a lot of noise in the image), a smoothing value of around 50% would reduce the noise but leave detail intact.
Here you would want to go well beyond that, say 400%, which would reduce the image to one that looks almost painted.
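If you'd rather do this in code than in Photoshop, here is a minimal sketch of the same idea using OpenCV (an assumption on my part; any strong edge-preserving smoothing would do): heavy bilateral filtering to flatten texture, plus dark edge lines for the drawn look. The parameter values are illustrative:

    #include <opencv2/opencv.hpp>

    cv::Mat Cartoonify(const cv::Mat& photo)
    {
        // Edge-preserving smoothing, pushed far beyond noise-reduction levels.
        cv::Mat smooth;
        cv::bilateralFilter(photo, smooth, 15, 150.0, 150.0);

        // Thick dark outlines from an adaptive threshold on a blurred
        // grayscale copy.
        cv::Mat gray, edges;
        cv::cvtColor(photo, gray, cv::COLOR_BGR2GRAY);
        cv::medianBlur(gray, gray, 7);
        cv::adaptiveThreshold(gray, edges, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                              cv::THRESH_BINARY, 9, 2);

        // Overlay the outlines on the flattened colours.
        cv::cvtColor(edges, edges, cv::COLOR_GRAY2BGR);
        cv::Mat result;
        cv::bitwise_and(smooth, edges, result);
        return result;
    }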

Direct screen pixel/framebuffer access [closed]

I'd like to try to create a program that plays a game, i.e. "a bot".
I want to be able to directly access the pixels on the screen, i.e. have my program "see" a game and then "make a move" (or at least draw a picture of the move it would make).
Both Windows and Linux advice is appreciated, though my guess is that it should be easier to do on Linux.
I'm guessing this could be done with some X/Gnome call?
I'm not afraid of C, even complex samples are welcome.
SDL is a cross-platform library that allows you to directly access framebuffer pixels. You can learn about accessing the pixels on screen through the pixel access example on the documentation wiki.
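A minimal sketch of that pixel access using SDL2 (note the caveat: this reads pixels from a surface your own program owns; capturing another process's window needs platform-specific APIs, such as XGetImage on X11):

    #include <SDL.h>

    int main(int argc, char* argv[])
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window* win = SDL_CreateWindow("pixel-demo",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 640, 480, 0);
        SDL_Surface* surf = SDL_GetWindowSurface(win);

        // Draw something so there is a known pixel to read back.
        SDL_FillRect(surf, NULL, SDL_MapRGB(surf->format, 200, 60, 60));

        if (SDL_MUSTLOCK(surf)) SDL_LockSurface(surf);
        Uint32* pixels = (Uint32*)surf->pixels;       // assumes 32-bit format
        int x = 10, y = 20;
        Uint32 p = pixels[y * (surf->pitch / 4) + x]; // pitch is in bytes
        Uint8 r, g, b;
        SDL_GetRGB(p, surf->format, &r, &g, &b);
        if (SDL_MUSTLOCK(surf)) SDL_UnlockSurface(surf);

        SDL_Log("pixel (%d,%d) = %d %d %d", x, y, r, g, b);
        SDL_DestroyWindow(win);
        SDL_Quit();
        return 0;
    }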
Generally speaking, bots don't see the game graphics but see the underlying data structure instead, unless you are trying to do something related to computer vision.