This is a follow-up to the question How do you draw text in DirectX 11?
In Direct3D 12 things have become much more complex, and since it's new I couldn't find any suitable libraries online.
I'm building a basic Direct3D 12 FPS test application, and I'd like to display the FPS data on screen with my rendered image.
The general answer to questions like this is "if you have to ask, then you probably should be using DirectX 11." DirectX 12 is an expert-level graphics API that provides immense control, and it is not particularly concerned with ease of use for novices. See this thread for more thoughts in this vein.
With that out of the way, one option is to use device interop and Direct2D/DirectWrite. See Working with Direct3D 11, Direct3D 10 and Direct2D.
UPDATE: DirectX Tool Kit for DirectX 12 is now available. It includes a SpriteFont / SpriteBatch implementation that will draw text on Direct3D 12 render targets. See this tutorial.
For pure DirectX 12, you need to load the font glyph data into a vertex buffer and render it with a vertex shader and pixel shader. You mentioned libraries online; well, this is expert stuff, and fortunately James Stanard at Microsoft released a how-to with their open source MiniEngine project. He handles multiple fonts, antialiasing, and drop shadows in DirectX 12.
Find the project files on GitHub at https://github.com/Microsoft/DirectX-Graphics-Samples/tree/master/MiniEngine and check out TextRenderer.h and TextRenderer.cpp.
If you want the maximum feature set with minimum work, you should probably go with DirectWrite on top of a D3D11 interop device, like Chuck said in his answer.
If you want to roll your own high-performance text rendering, you may want to take a look at the text renderer in the MiniEngine sample repository on GitHub; it has some interesting ideas.
Unfortunately, the only ways have already been described: interface with DirectWrite, or create your own glyph system.
What you are doing is importing a texture file with glyphs on it, cutting out small squares around each character from the glyph texture file, and then gluing it all together to form a string. It results in somewhat faster drawing (in some cases).
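To make the idea concrete, here is a rough sketch of the geometry side only, assuming a fixed 16x16 grid of ASCII glyphs in one atlas texture (the function name and fixed-width advance are illustrative). It just builds textured quads; you would upload them to whatever vertex buffer and draw path your renderer uses:

    // Rough sketch: build textured quads for a string from a fixed-grid glyph atlas.
    // Assumes a 16x16 grid of ASCII glyphs in one texture; the rendering API is not shown.
    #include <string>
    #include <vector>

    struct Vertex { float x, y; float u, v; };

    std::vector<Vertex> BuildTextQuads(const std::string& text,
                                       float startX, float startY,
                                       float glyphW, float glyphH)
    {
        const float cell = 1.0f / 16.0f;           // UV size of one atlas cell
        std::vector<Vertex> verts;
        verts.reserve(text.size() * 6);

        float penX = startX;
        for (unsigned char c : text)
        {
            float u0 = (c % 16) * cell;            // atlas column
            float v0 = (c / 16) * cell;            // atlas row
            float u1 = u0 + cell, v1 = v0 + cell;

            // Two triangles per character quad.
            Vertex quad[6] = {
                { penX,          startY,          u0, v0 },
                { penX + glyphW, startY,          u1, v0 },
                { penX + glyphW, startY + glyphH, u1, v1 },
                { penX,          startY,          u0, v0 },
                { penX + glyphW, startY + glyphH, u1, v1 },
                { penX,          startY + glyphH, u0, v1 },
            };
            verts.insert(verts.end(), quad, quad + 6);

            penX += glyphW;                        // fixed-width advance
        }
        return verts;
    }

A real implementation would use per-glyph metrics for kerning and advance (as MiniEngine does), but the quad/UV bookkeeping is the core of the technique.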
I think the approach referenced by the others is slightly outdated and destined to fail. Direct3D 11 had the same lack of text-drawing support as Direct3D 12 (there is perhaps some misinformation on that). It was Direct3D 9 that had built-in text drawing, which worked fine, and later supported sprite-batch drawing so you could render all of your text in one batch.
It seems backwards to state that you simply "need to know" or "are not an expert" to implement such a basic yet tedious system. Such a system is destined to fail in the same way that no one wants to use assembly to code something they could code in C or a higher-level language.
The D3D11 and D3D12 math libraries also suffer from the same failing. To define and convert vectors you are better off including the D3DX9 math headers or custom math structures, because the newer methods are so backwards. Someone made it and must like it, but I remember filing a complaint showing how easy it was to do vector operations before versus after: it nearly doubles or triples the number of lines needed to perform basic vector operations and conversions, not even counting the references and learning time you need to understand someone else's library. It seems like a big failure, presented by mathematicians who were never good at programming.
I am writing code for RRT (rapidly exploring random trees), which is a sampling-based motion planning algorithm. I wrote the code in MATLAB, but now I am writing it in C++.
I want to know how we can plot the robot path in real time with all the obstacles.
What I want is this: I want to see my robot traversing the space. So basically it's about the graphics. I am trying to use SFML but I am having problems with it. Some people suggested using OpenCV or OpenGL, but I think they are not easy to use. I am looking for a simple-to-use library.
If OpenCV or OpenGL is the answer, then please tell me what specifically I need to use in these libraries. I am working on Linux (Ubuntu 11.10).
You might want to look into using the internal MATLAB compiler to generate a standalone application directly from your M-code. That way you don't have to rewrite everything from scratch.
I have used the following link a couple of times just to refresh my memory:
http://technologyinterface.nmsu.edu/5_1/5_1f/5_1f.html
E.g., if you have made an M-function with the following content (example from the link):
    function y = PolyValue(poly, x)
    poly = [1 2 -1 4 -5];
    x = [5, 6];
    y = polyval(poly, x)
you could use the command
    mcc -m PolyValue
to compile the program.
This command would then give you the files necessary for implementation in a larger C++ program.
It should even support GUI elements and graphs.
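For the C++ integration specifically, note that mcc can also build a C++ shared library instead of a standalone executable. The sketch below assumes you compile with something like "mcc -W cpplib:libPolyValue -T link:lib PolyValue.m"; the generated header and function names (libPolyValue.h, libPolyValueInitialize, PolyValue) are assumptions derived from the library name you pass to mcc, so check the actual generated header:

    // Hypothetical consumption of a MATLAB Compiler C++ shared library.
    // Assumes the library was built with: mcc -W cpplib:libPolyValue -T link:lib PolyValue.m
    #include <iostream>
    #include "libPolyValue.h"   // generated header (name is an assumption)

    int main()
    {
        // Start the MATLAB runtime and the generated library.
        if (!mclInitializeApplication(nullptr, 0) || !libPolyValueInitialize())
        {
            std::cerr << "Could not initialize the MATLAB runtime\n";
            return 1;
        }

        // Build the inputs as MATLAB arrays.
        double polyData[] = { 1, 2, -1, 4, -5 };
        double xData[]    = { 5, 6 };
        mwArray poly(1, 5, mxDOUBLE_CLASS);
        mwArray x(1, 2, mxDOUBLE_CLASS);
        poly.SetData(polyData, 5);
        x.SetData(xData, 2);

        // Call the compiled M-function: one output, two inputs.
        mwArray y;
        PolyValue(1, y, poly, x);

        double result[2] = { 0, 0 };
        y.GetData(result, 2);
        std::cout << "y = " << result[0] << ", " << result[1] << std::endl;

        // Shut down in reverse order.
        libPolyValueTerminate();
        mclTerminateApplication();
        return 0;
    }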
Something like http://www.ros.org/news/2011/01/open-motion-planning-library-ompl-released.html
may be what you are looking for.
I've worked with both OpenCV (for some image recognition projects) and OpenGL (for rendering displays), and whether you go with a library like the one above or render it yourself really comes down to how complex the display needs to be. Ask yourself some questions about how many different obstacle scenarios you are looking at. Is there a large multitude of possible shapes for the obstacles and the robot? Is the problem deterministic (in terms of both the robot's movement and the environment)?
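If you do go the OpenCV route, you only need a small part of the library: an image matrix, the basic drawing primitives, and an output window are enough to animate an RRT. Here is a minimal sketch; the Node struct, tree layout, and obstacle data are made up for illustration, and the actual RRT expansion step is left as a placeholder:

    // Minimal real-time RRT visualization using OpenCV drawing primitives.
    #include <opencv2/opencv.hpp>
    #include <vector>

    struct Node { cv::Point pos; int parent; };   // parent == -1 for the root

    int main()
    {
        std::vector<Node> tree = { { cv::Point(320, 240), -1 } };   // root of the RRT
        std::vector<cv::Rect> obstacles = { cv::Rect(100, 100, 80, 120) };

        for (int iter = 0; iter < 500; ++iter)
        {
            // ... run one RRT expansion step here and push_back the new node ...

            cv::Mat canvas(480, 640, CV_8UC3, cv::Scalar(255, 255, 255));

            for (size_t i = 0; i < obstacles.size(); ++i)           // obstacles
                cv::rectangle(canvas, obstacles[i], cv::Scalar(0, 0, 0), -1);

            for (size_t i = 0; i < tree.size(); ++i)                // edges and nodes
            {
                if (tree[i].parent >= 0)
                    cv::line(canvas, tree[tree[i].parent].pos, tree[i].pos,
                             cv::Scalar(0, 0, 255), 1);
                cv::circle(canvas, tree[i].pos, 2, cv::Scalar(255, 0, 0), -1);
            }

            cv::imshow("RRT", canvas);
            if (cv::waitKey(15) == 27) break;                       // Esc to quit
        }
        return 0;
    }

Redrawing the whole canvas every iteration keeps the code simple and is easily fast enough for a few thousand nodes.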
As for OpenGL and OpenCV not being easy to use for those new to them, this is very much the case, but choosing to work in C++ makes the problem more difficult for beginners anyway. As another user has mentioned, wrapping your MATLAB code instead of throwing it away may be a viable option. Even running the MATLAB engine in the background to run your scripts through C++ may be viable, if speed is not a critical factor. See http://au.mathworks.com/help/matlab/matlab_external/introducing-matlab-engine.html for more information.
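To give an idea of the engine route, here is a rough sketch of driving your existing M-code from C++ via the Engine API (engine.h ships with MATLAB; the script name rrt_step and the variable name path are placeholders for illustration):

    // Rough sketch: calling MATLAB from C++ through the Engine API.
    // Build against MATLAB's engine.h and link the eng/mx libraries.
    #include <cstdio>
    #include "engine.h"

    int main()
    {
        Engine* ep = engOpen("");                  // start (or connect to) MATLAB
        if (!ep)
        {
            std::fprintf(stderr, "Could not start the MATLAB engine\n");
            return 1;
        }

        engEvalString(ep, "rrt_step");             // run your existing M-code
        mxArray* path = engGetVariable(ep, "path");   // pull a result back
        if (path && mxGetNumberOfElements(path) >= 2)
        {
            double* data = mxGetPr(path);          // column-major data
            std::printf("first waypoint: (%f, %f)\n", data[0], data[1]);
        }
        if (path) mxDestroyArray(path);

        engClose(ep);
        return 0;
    }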
I am trying to understand a few things on the Mac related to how the OpenGL framework is integrated in the form of layers. Basically, I want to understand the 3D technologies present in OS X and which layer contains OpenGL's actual implementation.
From reading the Apple docs, below is what I have understood so far:
1. An NSOpenGLContext object wraps a low-level, platform-specific Core OpenGL (CGL) context.
= This makes it clear that NSOpenGL makes use of CGL.
2. The AGL (Apple Graphics Library) API is part of the Apple implementation of OpenGL in Mac OS X.
= So, are AGL and CGL related in any way?
3. CGL (Core OpenGL) is the lowest-level programming interface for the Apple implementation of OpenGL.
= Does this mean the standard OpenGL APIs are just a wrapper over CGL?
4. Core Animation seems to be a combination of Core Graphics, OpenGL, and QuickTime. But I am not sure what it uses underneath, I mean the actual implementation layer; is it again CGL?
Things are not completely clear to me. I am still reading, though, and I have asked a somewhat related question in the past, but with incomplete knowledge.
I would really appreciate it if someone could share their understanding of the matter.
NSOpenGLContext, AGL and CGL are all APIs for setting up an OpenGL context you can draw into.
Use NSOpenGLContext unless you already know you have a reason not to.
Use AGL if you are writing a Carbon application or if you need compatibility with Mac OS 9 (As of 2012, that basically means: don't).
Both AGL and NSOpenGLContext are implemented on top of CGL. However, not all the necessary parts of CGL are actually public APIs. Last time I checked, the only public parts of the CGL API were the ones that allow you to create a fullscreen OpenGL context. If you want OpenGL in a window, or you want the option of showing dialog boxes or some NSViews on top of your OpenGL, you probably can't use CGL.
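For reference, creating a context directly through CGL looks roughly like this. This is a minimal sketch, assuming all you need is a bare context with no window; error handling is mostly omitted:

    // Minimal sketch: create an OpenGL context straight through CGL.
    // Compile with: clang++ cgl_demo.cpp -framework OpenGL
    #include <OpenGL/OpenGL.h>
    #include <OpenGL/gl.h>
    #include <cstdio>

    int main()
    {
        CGLPixelFormatAttribute attribs[] = {
            kCGLPFAAccelerated,
            kCGLPFADoubleBuffer,
            kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
            (CGLPixelFormatAttribute)0                      // terminator
        };

        CGLPixelFormatObj pixelFormat = nullptr;
        GLint numFormats = 0;
        CGLChoosePixelFormat(attribs, &pixelFormat, &numFormats);

        CGLContextObj context = nullptr;
        CGLCreateContext(pixelFormat, nullptr, &context);   // no shared context
        CGLDestroyPixelFormat(pixelFormat);

        CGLSetCurrentContext(context);                      // GL calls now target it
        std::printf("CGL context created: %p\n", (void*)context);

        CGLSetCurrentContext(nullptr);
        CGLDestroyContext(context);
        return 0;
    }

NSOpenGLContext and AGL end up making equivalent calls underneath; they mainly add the glue to attach the context to a window or view.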
CoreAnimation is a framework for (mostly UI) animations; you can use CoreAnimation without using OpenGL directly. I have never used it myself, but I assume it also allows you to create an OpenGL context for an animation layer. Use it if you already have other reasons to use CoreAnimation, or if you want to combine OpenGL graphics and Mac GUI widgets in creative ways.
I'm using System.Graphics for my latest project (a simple 2D application). Problem is, it gets a horrible FPS and I'm only drawing 8x8 tiles (usually 10-12 are enough to bring it down to 12 FPS).
A friend of mine suggested that I use DirectX. He also suggested XNA, but I objected because I don't want my clients to have to install the XNA redistributable. DirectX is common enough (to my knowledge) and I can just include the DLLs if I need to.
So, my search began. I've been looking only for DirectX 2D tutorials for VB.net. I've had no solid success thus far. In truth, all I need to do is be able to draw bitmap x at position pos.
I've been using System.Graphics and a hacked up bitmap as my buffer thus far so I'll go for any improvement that I can get.
I'm using VB.net, so I'll be OK if you give me one for C#; I'm pretty good at reading it (and I have a nice converter too). I would just prefer VB.net to save time.
Thanks! :)
This article from the 2003 edition of MSDN Magazine has a nice example of managed DirectX code in action:
http://msdn.microsoft.com/en-us/magazine/cc164112.aspx
Sadly, there is currently no managed DirectX library (a.k.a. DirectX .NET wrapper) for DX 10 or DX 11. Microsoft only provided a managed library for DX 9.0a and DX 9.0b.
In the Managed DirectX Wikipedia article, you'll see that it has been replaced by Microsoft XNA.
If you download the current/latest DirectX SDK, you will get samples only in C++ and HLSL.
If you really need a fancy UI along with nice animation and 3D drawing based on DirectX, you can use WPF, especially WPF in .NET 3.0 SP2 (or simply download and install .NET 3.5 SP1). WPF is built on top of the DirectX 9.0c stack, without your having to learn the large DirectX 9 API. You also get support for 3D primitives.
For more on WPF, visit this:
http://msdn.microsoft.com/en-us/library/ms754130.aspx
I know this will be a difficult question, so I am not necessarily looking for a direct answer but maybe a tutorial or a point in the right direction.
What I am doing is programming a robot that will be controlled by a remote operator. We have a 3D rendering of the robot in SolidWorks. What I am looking to do is get the 3D file into VB (probably using DX9) and be able to manipulate it using code, so that the remote operator will have a better idea of what the robot is doing. The operator will also have live video to look at, but that doesn't really matter for this question.
Any help would be greatly appreciated. Thanks!
Sounds like a tough idea to implement. Well, for VB you are stuck with MDX 1.1 (comes with the DirectX SDK) or SlimDX (or another 3rd-party managed DirectX wrapper). The latest XNA (the replacement for MDX 1.1/2.0b) is only available to C# coders. You can try some workarounds, but it's not recommended and you won't get much community support. These are the least you need to get your VB app to display some 3D stuff.
If you want to save some trouble, you could use a ready-made game engine to simplify your job. Try Ogre and its managed wrapper MOgre. It was one of the candidates for my project, but I ended up with SlimDX because Ogre did not support video very well. Since video is not a requirement for you, you can really consider it. Most samples will be in C# as well, so you will need to convert them to VB.Net. It won't be hard.
Here comes the harder part: you need to export your model from SolidWorks to the DirectX format (*.x). I did a quick search on Google and only found a few paid tools to do that. You might need to spend a bit on that, or spend more time looking for free converter tools.
That's about it. If you have more questions, post again. Good luck.
I'm not sure what the real question is, but what I suspect you are trying to do is manipulate a SW model of a robot with some sort of manual input. Assuming that this is the correct question, there are two aspects that need to be dealt with:
1) The SolidWorks module: Once the model of the robot is working properly in SW, a program can be written in VB.Net that manipulates the positional mates for each of the joints. Also using VB, a window can be programmed with slide bars etc. that will allow the operator to "remotely" control the robot. Once this is done, there is a great opportunity to set up a table that could store the sequential steps. When completed, the VB program could be further developed to allow the robot to "cycle" through a sequence of moves. If any obstacles are also added to the model, this would be a great tool for collision detection and offline training.
2) If the question also includes the incorporation of a physical operator pendant, there are a number of potential solutions. One would hope the robot software provides a VB library for communicating with and commanding the robot programmatically. If this is the case, then the VB code could be developed with a "run" mode where the SW robot is controlled by the operator pendant instead of the controls in the VB window (as mentioned above). This would then allow the operator to work "offline" with a virtual robot.
Hope this helps.
I'm writing a program that needs to take input from an XBox 360 controller. The input will then be sent wirelessly to an RC Helicopter that I am building.
So far, I've learned that this can be done using either the XInput library from DirectX, or the Input framework in XNA.
I'm wondering if there are any other options available. The scope of my program is rather small, and having to install a large gaming library like DirectX or XNA seems excessive. Further, I'd like the program to be cross-platform and not Microsoft-specific.
Is there a simple lightweight way I can grab the controller input with something like Python?
Edit to answer some comments:
The copter will have 6 total propellers, arranged in 3 co-axial pairs. Basically, it will be very similar to this, only it will cost about $1,000 rather than $15,000. It will use an Arduino for onboard processing, and Zigbee for wireless control.
The 360 controller was selected because it is well designed. It is very ergonomic and has all of the control inputs needed. For those familiar with helicopter controls, the left joystick will control the collective, the right joystick will control the pitch and roll, and the analog triggers will control the yaw. The analog triggers are a big feature of the 360 controller; the PS controllers and most others do not have them.
I have a webpage for the project, but it is still pretty sparse. I do plan on documenting the whole design though, so eventually it will be interesting.
http://tricopter.googlecode.com
On a side note, would it kill Google to have a blog feature for googlecode projects?
I would like the 360 controller input program to run on both Linux and Windows if possible. Eventually, though, I'd like to hook the controller directly to an embedded microcontroller board (such as an Arduino) so that I don't have to go through a computer, but it's not a high priority at the moment.
It is not all that difficult. As the earlier guy mentioned, you can use the SDL libraries to read the status of the Xbox controller and then do whatever you'd like with it.
There is an SDL tutorial, http://sdl.beuc.net/sdl.wiki/Handling_Joysticks, which is fairly useful.
Note that an Xbox controller has the following:
- two joysticks:
  - left joystick is axes 0 & 1;
  - left trigger is axis 2;
  - right joystick is axes 3 & 4;
  - right trigger is axis 5
- one hat (the D-pad)
- 11 SDL buttons (two of them are the joystick center presses)
- two triggers (these act as axes, see above)
The upcoming SDL v1.3 will also support force feedback (a.k.a. haptics).
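To make that concrete, here is a minimal sketch of polling the pad with SDL 1.2. The axis indices follow the mapping listed above, and the collective/pitch/roll/yaw assignments match the question; adjust the indices if your driver reports something different:

    // Minimal SDL 1.2 sketch: poll an Xbox 360 pad and print stick/trigger values.
    // Build with: g++ pad.cpp `sdl-config --cflags --libs`
    #include <SDL.h>
    #include <cstdio>

    int main(int argc, char* argv[])
    {
        (void)argc; (void)argv;

        if (SDL_Init(SDL_INIT_JOYSTICK) != 0)
        {
            std::fprintf(stderr, "SDL_Init failed: %s\n", SDL_GetError());
            return 1;
        }
        if (SDL_NumJoysticks() == 0)
        {
            std::fprintf(stderr, "No joystick found\n");
            return 1;
        }

        SDL_Joystick* pad = SDL_JoystickOpen(0);   // first controller

        for (;;)
        {
            SDL_JoystickUpdate();                  // refresh axis/button state

            // Axis values range from -32768 to 32767.
            Sint16 leftY  = SDL_JoystickGetAxis(pad, 1);   // collective
            Sint16 rightX = SDL_JoystickGetAxis(pad, 3);   // roll
            Sint16 rightY = SDL_JoystickGetAxis(pad, 4);   // pitch
            Sint16 ltrig  = SDL_JoystickGetAxis(pad, 2);   // yaw left
            Sint16 rtrig  = SDL_JoystickGetAxis(pad, 5);   // yaw right

            std::printf("\rcollective %6d  roll %6d  pitch %6d  yaw %6d/%6d",
                        leftY, rightX, rightY, ltrig, rtrig);

            if (SDL_JoystickGetButton(pad, 0))     // pick any button as "quit"
                break;

            SDL_Delay(20);                         // ~50 Hz polling
        }

        SDL_JoystickClose(pad);
        SDL_Quit();
        return 0;
    }

From there, packing the values into your Zigbee frames is just serialization.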
I assume, since this thread is several years old, you have already done something, so this post is primarily to inform future visitors.
PyGame can read joysticks, which is what the X360 controller shows up as on a PC.
Well, if you really don't want to add a dependency on DirectX, you can use the old Windows Joystick API -- Windows Multimedia -> Joystick Reference in the platform SDK.
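Reading the pad through that API is only a few calls. A rough sketch (Windows-only, of course; note that with the legacy driver mapping the 360 pad's two triggers often show up combined on a single axis, which is one of this API's limitations):

    // Rough sketch: polling a game pad through the legacy WinMM joystick API.
    // Link against winmm.lib.
    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>

    int main()
    {
        JOYINFOEX info = {};
        info.dwSize  = sizeof(info);
        info.dwFlags = JOY_RETURNALL;

        if (joyGetPosEx(JOYSTICKID1, &info) != JOYERR_NOERROR)
        {
            std::printf("No joystick attached\n");
            return 1;
        }

        // Axis values are typically 0..65535; buttons are a bitmask in dwButtons.
        std::printf("X=%lu Y=%lu Z=%lu R=%lu buttons=0x%08lX POV=%lu\n",
                    info.dwXpos, info.dwYpos, info.dwZpos, info.dwRpos,
                    info.dwButtons, info.dwPOV);
        return 0;
    }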
The standard free cross-platform game library is Simple DirectMedia Layer, originally written to port Windows games to Unix (Linux) systems. It's a very basic, lightweight API that tends to support the minimal subset of features on each system, and it has bindings for most major languages. It has very basic joystick and gamepad support (no force feedback, for example), but it might be sufficient for your needs.
Perhaps the Mono.Xna library has added GamePad support, which would provide the cross platform functionality you were looking for:
http://code.google.com/p/monoxna/
As far as the concerns about the library being too heavyweight: sure, for this option that may be true ... however, it could open up opportunities to do some nice visualization in the future.
disclaimer: I'm not familiar with the status of the mono xna project, so it may not have added this feature yet. But still, 'tis an option :-)