Best physics engine with VB.net

I'm building a simple program. Basically some simple meshes, some cubes, etc. I'll be having them crash around a bit, though (against some solid objects). I've worked with a couple of rendering engines, but nothing like what I want (i.e., with physics :] ).

Give this a try: http://sourceforge.net/projects/vbphysxdx9/
It uses PhysX by Nvidia. You will need an Nvidia graphics card with PhysX support to use it, though.


DirectX 11 and GPU acceleration (simulations etc.) - where to start?

I am using SharpDX and am fairly comfortable with it at the moment, but for an assessment for university, I need to create a demo which utilizes some sort of GPU acceleration. This is an 'independent research' task - what that means is, I assigned myself this task. I am legitimately interested in GPU acceleration in games, but right now it feels like I threw myself way too far into the deep end.
I am planning to do a particle system (it will be a very BASIC system, with particles firing/falling/dying), but I need a starting point.
Can someone point me in the right direction? Articles to read? Things to consider? I have googled my heart out on things like "GPU acceleration DirectX", but I can't find any solid results! I wish I had a sort of 'hello world' for GPU acceleration!
If you want to understand how to build a GPU particle system, I suggest you read the book "Practical Rendering and Computation with Direct3D 11", where you will find an entire chapter dedicated to implementing a simple GPU particle system.
The trickiest part is probably the sorting algorithm, which is not detailed in that book, but the ComputeShaderSort11 sample from the old DirectX SDK (June 2010) could help you a lot (the implementation is quite efficient).
Also, I built a full particle engine on the GPU with SharpDX at my work, so it is perfectly achievable with SharpDX.
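To give a feel for the per-particle work such a system does each frame, here is a minimal sketch of an update kernel. It is written in CUDA purely for illustration (the answer above uses Direct3D compute shaders via SharpDX), and the Particle layout and all names are invented for the example:

// Hypothetical particle layout; a real engine would pack this differently.
struct Particle { float px, py, pz, vx, vy, vz, life; };

// One thread updates one particle: apply gravity, integrate, age it.
__global__ void updateParticles(Particle *p, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vy   -= 9.8f * dt;        // gravity pulls the particle down
    p[i].px   += p[i].vx * dt;     // integrate position
    p[i].py   += p[i].vy * dt;
    p[i].pz   += p[i].vz * dt;
    p[i].life -= dt;               // the particle "dies" when life <= 0
}

Dead particles are typically compacted or recycled in a second pass, and the sorting mentioned above is needed when particles are alpha-blended back to front.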

Best approach for music visualization/interaction app

I am an experienced Flash developer who's been learning Objective-C for the last 5 months.
I am beginning the development of an app previously prototyped in Flash, and I'm trying to work out the best approach to port it to iOS.
My app is kind of a music game. It consists of some dynamic graphics (circles growing and rotating), with typography also changing and rotating. Everything moves in sync with the music. At the same time, the user can interact with the app (moving and rotating things), and some sounds will change depending on their actions.
Graphics can't be bitmaps because they get redrawn every frame.
This was easy to develop in Flash thanks to its handling of vector graphics, but I'm not sure what the best way to do it in Objective-C would be.
My options, I guess, are things like Core Graphics, OpenGL, or maybe Cocos2D (not sure if that would be killing a flea with a sledgehammer). There are also things like openFrameworks or Cinder, but I'd rather use Objective-C than C++.
Any hint on where to look will be appreciated.
EDIT:
I can't post a real screenshot due to confidentiality issues, but it is something similar to this, except it will be interactive, and sections will change size and disappear depending on the music and user interaction.
Which graphics library should you use? The answer is going to depend a lot on what you know or could learn. OpenGL will use hardware acceleration, so it's probably the fastest, but it doesn't have built-in functions for drawing arc segments, curves, or text at all, so you'd probably have to implement those yourself. OpenGL is also notoriously difficult to learn.
Core Graphics has many cool methods for drawing vector graphics (rectangles, arcs, general paths, etc.), but it might be slower than you want, depending on what you're trying to do. Without code to actually run, it's hard to say.
It looks like Cocos2D is built on OpenGL and is made to be simple. I see lots of mention of sprites on their website, but nothing about vector graphics. (I've never used it, so it could be there and I'm just not seeing it.)
If I were in your position, I'd look into Cocos2D and see if it does vector graphics at all. If not, I might give Core Graphics a try and see what the performance is like. I know OpenGL can do what you want, but it can be difficult to learn, so I'd probably try that last.

Complex gestures on the iPhone

Is there a high-level library that handles complex gestures like detecting triangles/loops/circles? Is it even possible to build such a library with what Apple already provides?
Thanks,
Teja
You can use a "Dollar Recognizer"... it's pretty accurate and very easy to use from a single training template. There is even an effort started for an iPhone implementation, although it hasn't been released yet. An implementation is already being used by AlphaCount.

Which platform should I choose for scientific computing?

What are the pros and cons of choosing the PS3 as a platform for scientific computing instead of GPUs? Is it the better choice?
Stick with a PC; you will have a far easier life at the end of the day. I also wouldn't be surprised if you got more horsepower out of GPUs.
P.S. From what I know, dispatching work to the Cell's SPEs is not an enjoyable task :D
I'd go for GPU, for three reasons:
(a) GPU code can be developed, tested, and run on pretty much any PC you may want to use, with the only dependency being a $150 video card, whereas CELL/PS3 is a much more custom development environment and won't run natively on your laptop, etc.;
(b) I'm willing to bet a lot that GPUs and CUDA will be alive and well in 5 years, but I wouldn't put money on the PS3 being around that long -- what are you going to do if the PS4 has a totally different architecture and Cell effectively dies?
(c) There's a more vibrant research and development community around GPU than there is around PS3/Cell (outside of strict game development), so you're likely to be in more good company, have example code and tools to work with, etc.
There is no broad "better" choice; it all depends on the situation and what you're doing. Probably the biggest pro for the PS3 is that it's cheap by comparison. A computer can more easily scale bigger, though (for a price), when you look into things like CUDA.
CUDA is pretty slick. I was recently shown a presentation demonstrating how easy it is to get at the power of the GPU's many cores using a C++-based syntax. If I were starting a parallel computing project now, I would probably take the PC/GPU-based route.
A major objection to the PS3 (which is already quite a wacky choice unless you're under some pretty extreme price/performance constraints) has to be that Sony is dropping support for installing other operating systems. In the future, PS3s without the disabling firmware update may become harder and harder to get hold of.

Intro to GPU programming [closed]

Everyone has this huge, massively parallel supercomputer on their desktop in the form of a graphics card GPU.
What is the "hello world" equivalent of the GPU community?
What do I do, where do I go, to get started programming the GPU for the major GPU vendors?
-Adam
Check out CUDA by NVIDIA; IMO it's the easiest platform for GPU programming. There are tons of cool materials to read.
http://www.nvidia.com/object/cuda_home.html
A "hello world" would be to do any kind of calculation using the GPU.
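For instance, here is a minimal sketch of such a calculation in CUDA (not from any official sample; the kernel and sizes are chosen purely for illustration), adding two vectors element-wise:

#include <stdio.h>

// Each GPU thread adds one pair of elements.
__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    float a[n], b[n], c[n];
    for (int i = 0; i < n; ++i) { a[i] = i; b[i] = 2 * i; }

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc((void**)&da, n * sizeof(float));
    cudaMalloc((void**)&db, n * sizeof(float));
    cudaMalloc((void**)&dc, n * sizeof(float));
    cudaMemcpy(da, a, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    cudaMemcpy(c, dc, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("c[42] = %f\n", c[42]);   // expect 126.0
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}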
You get programmable vertex and pixel shaders that allow execution of code directly on the GPU to manipulate the buffers that are to be drawn. These languages (i.e. OpenGL's GLSL and DirectX's HLSL) have a C-style syntax and are really easy to use. Some examples of HLSL can be found here for XNA Game Studio and DirectX. I don't have any decent GLSL references, but I'm sure there are a lot around. These shader languages give you an immense amount of power to manipulate what gets drawn at a per-vertex or per-pixel level, directly on the graphics card, making things like shadows, lighting, and bloom really easy to implement.
The second thing that comes to mind is using OpenCL to code for the new lines of general-purpose GPUs. I'm not sure how to use this, but my understanding is that OpenCL gives you the beginnings of being able to access processors on both the graphics card and the normal CPU. This is not mainstream technology yet, and seems to be driven by Apple.
CUDA seems to be a hot topic. CUDA is NVIDIA's way of accessing the GPU's power. Here are some intros
I think the others have answered your second question. As for the first, the "Hello World" of CUDA, I don't think there is a set standard, but personally I'd recommend a parallel adder (i.e. a program that sums N integers).
If you look at the "reduction" example in the NVIDIA SDK, this superficially simple task can be extended to demonstrate numerous CUDA considerations such as coalesced reads, memory bank conflicts, and loop unrolling.
See this presentation for more info:
http://www.gpgpu.org/sc2007/SC07_CUDA_5_Optimization_Harris.pdf
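To make the shape of that task concrete, here is a stripped-down sketch of a block-wise sum reduction (my own simplified illustration, not the SDK's code; the SDK sample layers the coalescing, bank-conflict, and unrolling optimizations on top of this):

// Each block sums one 256-element chunk of the input into a partial sum.
// Launch with 256 threads per block to match the shared array size.
__global__ void reduceSum(const int *in, int *out, int n)
{
    __shared__ int sdata[256];
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    sdata[tid] = (i < n) ? in[i] : 0;   // pad the tail with zeros
    __syncthreads();

    // Tree reduction in shared memory: halve the active threads each step.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] += sdata[tid + s];
        __syncthreads();
    }
    if (tid == 0) out[blockIdx.x] = sdata[0];   // this block's partial sum
}

The host (or a second kernel launch over the partial sums) then finishes the addition.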
OpenCL is an effort to make a cross-platform library capable of programming code suitable for, among other things, GPUs. It allows one to write code without knowing what GPU it will run on, thereby making it easier to use some of the GPU's power without targeting several types of GPU specifically. I suspect it's not as performant as native GPU code (or as native as the GPU manufacturers will allow), but the tradeoff can be worth it for some applications.
It's still in its relatively early stages (1.1 as of this answer), but it has gained some traction in the industry - for instance, it is natively supported on OS X 10.6 and above.
Take a look at the ATI Stream Computing SDK. It is based on BrookGPU developed at Stanford.
In the future all GPU work will be standardized using OpenCL. It's an Apple-sponsored initiative that will be graphics card vendor neutral.
CUDA is an excellent framework to start with. It lets you write GPGPU kernels in C. The compiler produces GPU microcode from your kernel code and sends everything that runs on the CPU to your regular compiler. It is NVIDIA-only, though, and only works on 8-series cards or better. You can check out the CUDA Zone to see what can be done with it, and there are some great demos in the CUDA SDK. The documentation that comes with the SDK is a pretty good starting point for actually writing code. It will walk you through writing a matrix multiplication kernel, which is a great place to begin.
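The core of that first kernel looks roughly like this (a from-memory sketch of the naive version, not the SDK's exact code):

// Computes C = A * B for n x n row-major matrices, one thread per output element.
__global__ void matMul(const float *A, const float *B, float *C, int n)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < n && col < n) {
        float sum = 0.0f;
        for (int k = 0; k < n; ++k)
            sum += A[row * n + k] * B[k * n + col];   // dot product of row and column
        C[row * n + col] = sum;
    }
}

The SDK walkthrough then improves on this naive version with shared-memory tiling.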
Another easy way to get into GPU programming, without getting into CUDA or OpenCL, is to do it via OpenACC.
OpenACC works like OpenMP, with compiler directives (like #pragma acc kernels) to send work to the GPU. For example, if you have a big loop (only larger ones really benefit):
#include <stdio.h>

int main(void)
{
    int i;
    float a = 2.0f;
    float b[10000];
    /* Each "acc kernels" region is offloaded to the GPU by the compiler. */
    #pragma acc kernels
    for (i = 0; i < 10000; ++i) b[i] = 1.0f;
    #pragma acc kernels
    for (i = 0; i < 10000; ++i) {
        b[i] = b[i] * a;
    }
    printf("b[0] = %f\n", b[0]);   /* expect 2.0 */
    return 0;
}
Edit: unfortunately, only the PGI compiler really supports OpenACC right now, and only for NVIDIA GPUs.
Try GPU++ and libSh.
The libSh link has a good description of how they bound the programming language to the graphics primitives (and, obviously, the primitives themselves), and GPU++ describes what it's all about, both with code examples.
If you use MATLAB, it becomes pretty simple to use GPUs for technical computing (matrix computations and heavy math/number crunching). I find it a useful application of GPU cards outside of gaming. Check out the link below:
http://www.mathworks.com/discovery/matlab-gpu.html