I am working on a 3D project for the Windows Store (a Metro application), using Visual Studio 2012 Express for Windows 8 and Blender for creating the 3D objects. I am importing an FBX mesh of a 3D object (using the Visual Studio Starter Kit) and I want part of that object to be translucent (50% opacity). I have tried three PNG textures with a Lambert shader on the 3D object, and these are the results I get:
1) Opacity 100%: object appears opaque
2) Opacity 0%: object appears transparent
3) Opacity 50%: object appears opaque (same as 100%)
I want to achieve translucency, but even with a texture at 50% opacity I am not getting the effect I want. Please suggest a solution.
Any help will be highly appreciated.
The Visual Studio Starter Kit is mostly just a simplified demo of the VS exporter. It doesn't support blend states, so changing the material setting isn't going to do anything.
You may have more luck making use of DirectX Tool Kit's Model support for CMO, which has more control over alpha blending usage.
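For example, a minimal sketch with DirectX Tool Kit for DirectX 11 (the file name object.cmo and the device/context/matrix variables are placeholders, and the material's alpha must be below 1.0 in the exported model for blending to take effect):

#include "CommonStates.h"
#include "Effects.h"
#include "Model.h"

using namespace DirectX;

// After creating the D3D11 device and immediate context:
CommonStates states(device);
EffectFactory fxFactory(device);
auto model = Model::CreateFromCMO(device, L"object.cmo", fxFactory);

// Draw() renders the opaque meshes first and the alpha-blended meshes
// second, selecting an appropriate blend state from CommonStates for each.
model->Draw(context, states, world, view, projection);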
Presently I'm learning the basics of real-time raytracing with the DXR API in DirectX 12 Ultimate. I'm studying the D3D12 raytracing samples from the official GitHub repository, using an i9/Intel Iris Xe/RTX 3070 laptop and building the programs in VS2022.
Since the samples were written for Windows 10 and I'm using a hybrid graphics PC, a Debug build will run in Windows 11 after adding D3D12_MESSAGE_ID_RESOURCE_BARRIER_MISMATCHING_COMMAND_LIST_TYPE to D3D12_INFO_QUEUE_FILTER during device creation (see DirectX 12 application is crashing in Windows 11). The only trouble is that none of the sample programs change to fullscreen (i.e. borderless windowed) mode when pressing the Alt+Enter key combination. The programs always stay in windowed mode.
This hasn't worried me so far, because I've been copying the raytracing code over to a template (based on DirectX Tool Kit for Windows desktop) where fullscreen toggling works properly. In this way, I was able to run the HelloWorld and SimpleLighting samples successfully in both windowed mode and fullscreen (2560x1440 monitor resolution).
However, this hasn't been so successful for the ProceduralGeometry sample, which introduces intersection shaders. Once again, the original sample program renders the scene properly, but only in a bordered window. But when the code is reproduced in the template where I can toggle to fullscreen, the raytraced scene does not render properly.
In the scene, the triangle geometry used for the ground plane renders fine, but a translucent bounding box around the fractal pyramid is visible, and all the other procedural geometry also appears translucent. Every couple of seconds, the bounding box for the metaballs also appears briefly, then vanishes.
By freezing the scene, I was able to determine that the translucency was caused by the following frames being presented in sequence:
triangle ground plane quad only
floor geometry plus opaque fractal pyramid bounding box
all of the above plus opaque metaball bounding box
completed scene with opaque geometry and no bounding boxes
At the native framerate (165Hz on my machine), this results in both the procedural geometry and bounding boxes always being visible, but 'see-through' due to all the partially complete frames being presented to the display. This happens in both windowed and fullscreen modes, but it's worse in fullscreen, because the scene gets affected by random image corruption not seen in windowed mode.
I've been grappling with this issue for a few days and can't work out the problem. The only changes I've made to the sample program are the Windows 11 info-queue fix (sketched below) and the use of a template for proper fullscreen rendering, which the original sample ignores or doesn't implement properly.
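For reference, the info-queue filter mentioned above looks roughly like this (a sketch; m_device stands in for the app's ID3D12Device ComPtr, and the debug layer must be enabled for the message filter to matter):

#include <d3d12sdklayers.h>
#include <wrl/client.h>

Microsoft::WRL::ComPtr<ID3D12InfoQueue> infoQueue;
if (SUCCEEDED(m_device.As(&infoQueue)))
{
    // Suppress the barrier-validation message that otherwise stops Debug
    // builds on Windows 11 hybrid-graphics machines.
    D3D12_MESSAGE_ID denyIds[] =
    {
        D3D12_MESSAGE_ID_RESOURCE_BARRIER_MISMATCHING_COMMAND_LIST_TYPE
    };
    D3D12_INFO_QUEUE_FILTER filter = {};
    filter.DenyList.NumIDs = _countof(denyIds);
    filter.DenyList.pIDList = denyIds;
    infoQueue->AddStorageFilterEntries(&filter);
}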
Hopefully someone can shed light on this perplexing issue!
I found the problem. Each sample has a header file called DXSampleHelper.h. For the ProceduralGeometry sample, this header file was updated with a helper class to manage structured buffers, which is very similar to the helper class for constant buffers.
The CopyStagingToGpu() method, which consists of a one-line memcpy operation in both classes, is slightly different in the structured buffer class:
memcpy(m_mappedBuffers + instanceIndex * NumElementsPerInstance(), &m_staging[0], InstanceSize());
The same method in the constant buffer class is:
memcpy(m_mappedBuffers + instanceIndex, &m_staging[0], InstanceSize());
In other words, my copy was missing the instanceIndex * NumElementsPerInstance() offset, and thus the procedural geometry instances within the structured buffer were not correctly aligned in GPU memory.
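For clarity, the corrected method looks roughly like this (a sketch using the member names from the snippets above, assuming m_mappedBuffers is typed as a pointer to the staged element type):

void CopyStagingToGpu(UINT instanceIndex = 0)
{
    // Offset by whole instances: each instance spans NumElementsPerInstance()
    // elements, so the pointer arithmetic needs the element count, not just
    // the instance index.
    memcpy(m_mappedBuffers + instanceIndex * NumElementsPerInstance(),
           &m_staging[0], InstanceSize());
}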
How can I implement a packed bubble graph in WinRT? I am trying to achieve a graph similar to the attached image.
I searched Google for a similar implementation/sample for Silverlight/Windows 8, but didn't find any. Can anyone help me achieve this kind of graph? My main issue is implementing the logic correctly.
What kind of project are you using? There's actually a way to do this in each kind of supported project.
For JavaScript you can use the Canvas element, which has easy 2D APIs for drawing circles and text.
In C++ you can use the Direct2D APIs to draw circles. In C# you can embed a DirectX panel into your XAML and then use DirectX to draw the circles.
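For example, a minimal Direct2D sketch (assuming an already-created ID2D1DeviceContext named d2dContext; the position, radius, and color are placeholders, and in real code you would create the brush once rather than per draw):

#include <d2d1_1.h>
#include <wrl/client.h>

void drawBubble(ID2D1DeviceContext* d2dContext, float x, float y, float radius)
{
    Microsoft::WRL::ComPtr<ID2D1SolidColorBrush> brush;
    d2dContext->CreateSolidColorBrush(
        D2D1::ColorF(D2D1::ColorF::CornflowerBlue), &brush);

    d2dContext->BeginDraw();
    d2dContext->FillEllipse(
        D2D1::Ellipse(D2D1::Point2F(x, y), radius, radius), brush.Get());
    // Text labels can be layered on top with DrawText and a DirectWrite format.
    d2dContext->EndDraw();
}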
In C# or C++ you can also solve this problem with just XAML, using a ListView with a Canvas as its ItemsPanel and circle objects. Here is a blog post I found with a tutorial (except with rectangles):
http://zamjad.wordpress.com/2010/03/17/using-canvas-as-a-itempanel-template-in-listbox/
Not sure if this fits your need, but if you are not averse to using JS + HTML in WinRT, d3.js should be very useful.
Example of packed bubble chart - http://bl.ocks.org/mbostock/4063269
http://d3js.org/
Hope this helps!
I have set up a Kinect device and written a simple program that reads the stream into a QImage using OpenNI 2.0. I have set up skeleton tracking with NiTE 2.0, so I have access to the coordinates of all 15 joints. I have also set up a simple scene using SceniX. The hand coordinates provided by the skeleton tracking are being used to draw two boxes representing the hands.
I would like to bind the whole skeleton to a (rigged) model, and can't seem to find any good tutorials. Does anyone have an idea how I should proceed?
Depending on your requirements, you could look at something like this for the Unity engine: https://www.assetstore.unity3d.com/en/#!/content/10693
There is also a plugin for Unreal Engine 4 called Kinect 4 Unreal from Opaque Multimedia.
But if you have to write it all by hand yourself, I have done something similar using OpenGL.
I used Assimp (http://assimp.sourceforge.net/) to load animated Collada models, and OpenNI with NiTE for skeletal tracking. I then took the rotation data from the NiTE skeleton and applied it to the corresponding bones of my rigged mesh, overwriting the rotation values of the animation (see the sketch below). Don't use positional data: it will stretch your bones and distort the mesh.
There are many sources for free 3D models, like TF3DM.com. I used a custom rig for my models so they would suit my code, so you might look into using Blender and learning how to rig a model.
Also remember that the NiTE skeleton has no joint for the pelvis, and that NiTE joints don't inherit their parent's rotation, unlike the bones in a rigged model.
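Here is a rough sketch of the idea (the helper name applyJointRotation is illustrative, not from a specific sample; it assumes the rig's bind-pose translation and scale should be preserved):

#include <assimp/scene.h>
#include <NiTE.h>

void applyJointRotation(aiNode* boneNode, const nite::SkeletonJoint& joint)
{
    const nite::Quaternion& q = joint.getOrientation();

    // Keep the bind-pose translation and scale; replace only the rotation.
    aiVector3D scale, pos;
    aiQuaternion oldRot;
    boneNode->mTransformation.Decompose(scale, oldRot, pos);
    boneNode->mTransformation =
        aiMatrix4x4(scale, aiQuaternion(q.w, q.x, q.y, q.z), pos);

    // Note: NiTE orientations are absolute, not relative to the parent joint,
    // so convert them into the parent bone's space first if your rig stores
    // local rotations.
}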
I hope this gives you something to go on.
You can try DigitalRune; they have examples of binding a rigged model to joints. Try http://www.digitalrune.com/Support/Blog/tabid/719/EntryId/155/Research-Augmented-Reality-with-Microsoft-Kinect.aspx
You would also need to know how to animate a model in Blender and export it to XNA or to your working graphics framework, e.g. http://www.codeproject.com/Articles/230540/Animating-single-bones-in-a-Blender-3D-model-with#SkinningSampleProject132
I am using the Microsoft Kinect SDK and I would like to know whether it is possible to get the depth frame, the color frame, and the skeleton data for all frames at 30fps. Using Kinect Explorer I can see that the color and depth frames arrive at nearly 30fps, but as soon as I choose to view the skeleton, the rate drops to around 15-20fps.
Yes, it is possible to capture color/depth at 30fps while capturing the skeleton.
See image below, just in case you think me dodgy. :) This is a raw Kinect Explorer running, straight from Visual Studio 2010. My work development platform is an i5 Dell laptop.
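For reference, a minimal sketch of enabling all three streams with the native C++ API of the Kinect for Windows SDK 1.x (the resolutions and buffer counts here are just illustrative defaults):

#include <NuiApi.h>

HRESULT initKinectStreams(HANDLE& colorStream, HANDLE& depthStream)
{
    HRESULT hr = NuiInitialize(NUI_INITIALIZE_FLAG_USES_COLOR |
                               NUI_INITIALIZE_FLAG_USES_DEPTH |
                               NUI_INITIALIZE_FLAG_USES_SKELETON);
    if (FAILED(hr)) return hr;

    // Color and depth both run at 640x480 @ 30fps on Kinect v1.
    hr = NuiImageStreamOpen(NUI_IMAGE_TYPE_COLOR, NUI_IMAGE_RESOLUTION_640x480,
                            0, 2, nullptr, &colorStream);
    if (FAILED(hr)) return hr;

    hr = NuiImageStreamOpen(NUI_IMAGE_TYPE_DEPTH, NUI_IMAGE_RESOLUTION_640x480,
                            0, 2, nullptr, &depthStream);
    if (FAILED(hr)) return hr;

    // Skeleton tracking is derived from the depth stream; enabling it does
    // not by itself force the image streams below 30fps.
    return NuiSkeletonTrackingEnable(nullptr, 0);
}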
I have a GDI rendering system, and I would like to enhance its capabilities by adding XAML support.
I am looking for a tool that would parse a XAML-described image and draw it using GDI. Since GDI does not support advanced features like alpha blending, anti-aliasing, linear gradients, etc., I just need the tool to draw basic objects like rectangles and circles.
Is there any tool/API in C/C++ that could help me?