Integrating the FBX SDK with DirectX - mesh

I am working with FBX SDK 2013.3 and a DirectX project, and I need some basic guidance.
Suppose I have a cube exported from Maya as a .fbx file; it has animation, but that isn't the problem right now.
I need to load this .fbx file into DirectX, so the indices and vertex positions must be handled.
I have looked through the FBX SDK documentation and samples (mostly the ViewScene sample) and pieced together the following:
const char* lFilename = ".\\..\\..\\..\\DuzZemin.fbx"; // backslashes must be escaped in a C string literal

/* Memory management */
FbxManager* myManager = FbxManager::Create();
FbxIOSettings* ioSettings = FbxIOSettings::Create(myManager, IOSROOT);
myManager->SetIOSettings(ioSettings);

// Importer
FbxImporter* myImporter = FbxImporter::Create(myManager, "");
if (!myImporter->Initialize(lFilename, -1, ioSettings))
{
    /* Error handling */
}

/* Scene for the imported file */
FbxScene* myScene = FbxScene::Create(myManager, "My Scene");
myImporter->Import(myScene);

// Memory deallocation
myImporter->Destroy();

FbxNode* rootNode = myScene->GetRootNode();
FbxMesh* myMesh = FbxMesh::Create(myScene, "");          // creates a brand-new, empty mesh
int indexCount = myMesh->GetControlPointsCount();        // number of control points (vertices)
This code snippet builds without errors, but at runtime indexCount comes back as 0.
Do you see anything wrong, or a missing requirement?
Thanks for your interest.

Well, you create a mesh, but you don't point it at a mesh in your scene. When I load an .fbx file, I create a mesh pointer and then go into the scene and grab the mesh. Consider this code:
FbxMesh* mesh;
FbxNode* node = myScene->GetRootNode()->GetChild(0);
// I want the mesh to be made of triangles, so convert it in place
FbxGeometryConverter lConverter(node->GetFbxManager());
lConverter.TriangulateInPlace(node);
mesh = node->GetMesh();
Now you should be able to get a correct vertex count from your mesh and extract the vertices. NOTE: if your .fbx file consists of more than one mesh, you will need to iterate through the scene until you find the desired mesh (see the sketch below).
Hope this helps.
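For the multi-mesh case in the note above, a minimal recursive search could look like this. This is only a sketch, not code from the FBX samples; FindFirstMesh is a helper name I made up:
FbxMesh* FindFirstMesh(FbxNode* node)
{
    // Return this node's mesh if it carries one ...
    if (FbxMesh* mesh = node->GetMesh())
        return mesh;
    // ... otherwise recurse into the children.
    for (int i = 0; i < node->GetChildCount(); ++i)
    {
        if (FbxMesh* found = FindFirstMesh(node->GetChild(i)))
            return found;
    }
    return NULL;
}
Once you have the mesh (and have triangulated it as above), the control points and polygon vertices map directly onto a DirectX vertex/index buffer:
// Sketch: pull positions and triangle indices out of an FbxMesh.
// Positions are FbxVector4 (doubles), so convert to float for DirectX.
FbxVector4* controlPoints = mesh->GetControlPoints();
int vertexCount = mesh->GetControlPointsCount();
int* indices = mesh->GetPolygonVertices();      // 3 indices per triangle after triangulation
int indexCount = mesh->GetPolygonVertexCount();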


Related

How can I convert "rs2::video_frame" to "CvCapture*"?

I'm a newbie to the Intel RealSense SDK, coding in Visual Studio 2017 (C or C++) for the Intel RealSense D435 camera.
In my example I have the following:
static rs2::frameset current_frameset;
auto color = current_frameset.get_color_frame();
frame = cvQueryFrame(color);
I get an error on the third line: "cannot convert 'rs2::video_frame' to 'CvCapture'".
I have not been able to find a solution to this issue; it's proving difficult and has resulted in more errors.
Does anyone know how I can overcome this problem?
Thanks for the help!
cvQueryFrame accepts a CvCapture instance and is used to retrieve a frame from a camera. In librealsense (LibRS), the frame you retrieve can be used directly; you don't have to fetch it again. Attached is a snippet of the OpenCV example in LibRS; you can refer to the complete code here.
#include <librealsense2/rs.hpp>   // RealSense cross-platform API
#include <opencv2/opencv.hpp>     // OpenCV API

rs2::colorizer color_map; // maps raw depth values to RGB for display
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();

using namespace cv;
const auto window_name = "Display Image";
namedWindow(window_name, WINDOW_AUTOSIZE);

while (waitKey(1) < 0 && cvGetWindowHandle(window_name))
{
    rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
    rs2::frame depth = color_map(data.get_depth_frame());

    // Query frame size (width and height)
    const int w = depth.as<rs2::video_frame>().get_width();
    const int h = depth.as<rs2::video_frame>().get_height();

    // Create OpenCV matrix of size (w,h) from the colorized depth data
    Mat image(Size(w, h), CV_8UC3, (void*)depth.get_data(), Mat::AUTO_STEP);

    // Update the window with new data
    imshow(window_name, image);
}
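For the color stream the question actually asks about, the same wrapping works and cvQueryFrame is not needed at all. A minimal sketch, assuming the pipeline streams color as BGR8 so the memory layout matches CV_8UC3:
// Sketch: wrap a RealSense color frame directly in a cv::Mat.
// Assumes RS2_FORMAT_BGR8 color; no copy is made, the Mat aliases the frame data,
// so the Mat must not outlive the frame.
rs2::frameset frames = pipe.wait_for_frames();
rs2::video_frame color = frames.get_color_frame();
cv::Mat colorImage(cv::Size(color.get_width(), color.get_height()),
                   CV_8UC3, (void*)color.get_data(), cv::Mat::AUTO_STEP);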

How to make a large 2D tilemap easier to load in Unity

I am creating a small game in the Unity game engine, and the map for the game is generated from a 2D tilemap. The tilemap contains so many tiles, though, that it is very hard for a device like a phone to render them all, so the frame rate drops. The map is completely static: the only moving things in the game are the main character sprite and the camera following it. Since the map itself has no moving objects and is very simple, there must be a way to render only the needed sections of it, or perhaps render the whole map in one go. All I have discovered from researching the topic is that a good way might be to use the Unity Mesh class to turn the tilemap into a mesh. I could not figure out how to do this with a 2D tilemap, and I could not see how it would benefit the render time anyway, but if anyone could point me in the right direction for rendering large 2D tilemaps, that would be fantastic. Thanks.
Tile system:
To make the tile map work, I put every individual tile as a prefab in my prefab folder, with the attributes changed for 2D box colliders and scaled size. I map each tile prefab to a certain color on the RGB scale, and then import a .png file that has the corresponding colors of the prefabs where I want them, like this:
I then wrote a script that places each prefab where its associated color is. It would look like this for one tile:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Map : MonoBehaviour {

    private int levelWidth;
    private int levelHeight;

    public Transform block13;
    private Color[] tileColors;
    public Color block13Color;
    public Texture2D levelTexture;
    public PlayerMobility playerMobility;

    // Use this for initialization
    void Start () {
        levelWidth = levelTexture.width;
        levelHeight = levelTexture.height;
        loadLevel ();
    }

    void loadLevel () {
        tileColors = new Color[levelWidth * levelHeight];
        tileColors = levelTexture.GetPixels ();
        for (int y = 0; y < levelHeight; y++) {
            for (int x = 0; x < levelWidth; x++) {
                if (tileColors [x + y * levelWidth] == block13Color) {
                    Instantiate (block13, new Vector3 (x, y), Quaternion.identity);
                }
            }
        }
    }
}
This results in a map that looks like this when used with all the code (I took out all the code for the other prefabs to save space)
You can instantiate tiles that are in range of the camera and destroy tiles that are not. There are several ways to do this. But first, make sure that what's consuming your resources is in fact the large number of tiles and not something else.
One way is to create an empty parent GameObject for every tile (right-click in the Hierarchy > Create Empty)
and attach a script to this parent. This script holds a reference to the camera (tell me if you need help with that), calculates the distance between the tile and the camera, and instantiates the tile if the distance is less than some value, otherwise destroys the instance (if it exists).
It has to do this in the Update function to check the distances every frame, or you can use coroutines to do fewer checks (more efficient).
Another way is to attach a script to the camera that has an array with instances of all tiles and checks their distances from the camera the same way. You can do this if you have exactly one large tilemap, because it would be hard to re-use this script if you have more than one.
You can also calculate the distance between the tile and the character sprite instead of the camera; pick whichever is more convenient.
If you have done the above and still get frame drops, you can zoom the camera in to include fewer tiles in its range, but you'd have to recalculate the distances then.

Projection issue with ESRI JSAPI and ArcGIS map service

I was trying to produce a .gpx file with some coordinates by drawing on an OpenLayers map with an ArcGIS basemap.
When I draw the polyline and create the .gpx, if I open it in Google Earth, what I see is not what I drew before: the line is totally different from the original and not positioned where I drew it.
I know it's a projection problem. I've tried transforming the geometry object from Mercator to geographic, and also getting the geographic coordinates directly from the map coordinates, but nothing works.
I tried to set "spatialReference" to 4362 and then to 3857, but nothing changes.
I'm going to use that .gpx on a GPS device (next week I'll go to the Svalbard islands, and I need some GPS tracks to get around Longyearbyen by snowmobile; there is no sign of life outside the town, so I must be prepared). When I'm there I'll adjust the output for the device they rent to me, but for now I need to save roughly the right coordinates in the .gpx file.
I'm getting from the map those coordinates:
lat: 61.22582068741976
lon: 4.684820015391338
when I'm expecting instead something around 78. lat and 15. lon.
This is some of the code I use to create the map (I'm not pasting the code that I know isn't responsible for my problem):
var initialExtent = new esri.geometry.Extent({
    "xmin": 505615.5801124362, "ymin": 8678955.717187276,
    "xmax": 525935.6207525175, "ymax": 8689168.654279819,
    "spatialReference": { "wkid": 32633, "latestWkid": 32633 }
});
map = new esri.Map("map", { extent: initialExtent, logo: false });
basemapURL = "http://geodata.npolar.no/ArcGIS/rest/services/inspire1/NP_TopoSvalbard_U33_CHL/MapServer/";
map.addLayer(new esri.layers.ArcGISTiledMapServiceLayer(basemapURL));
Here I'm using wkid 32633, which is the default for that map; I tried changing it to other known ones, but nothing happened.
And now the code I use to get the coordinates:
dojo.connect(tb, "onDrawEnd", getCoords);

function getCoords(geo) {
    var r = confirm("Save track?");
    if (r == true) {
        geo = esri.geometry.webMercatorToGeographic(geo);
        for (var path = 0; path < geo.paths.length; path++) {
            for (var pt = 0; pt < geo.paths[path].length; pt++) {
                tra += '<wpt lat="' + geo.paths[path][pt][1] + '" lon="' + geo.paths[path][pt][0] + '"></wpt>';
            }
        }
    }
}
"tra" is a variable that stores all the code I'll insert into the gpx file with an other function.
The "webMercatorToGeographic" function transform the map coordinates to geographic ones.
Thanks to Devdatta Tengshe on GIS.StackExchange I got what I needed:
Use the proj4js library in your application. To do this, follow these steps:
Download the latest library from https://github.com/proj4js/proj4js/releases and include it in your HTML file with a script tag.
You'll need to define the projections before you can use them. You can do this with the following lines of code:
var UTM33N = "+proj=utm +zone=33 +ellps=WGS84 +datum=WGS84 +units=m +no_defs";
var GCS84 = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs";
When you need to transform, you can use the following line:
var transformed = proj4(UTM33N, GCS84, [x, y]);
where x and y are the coordinates in your given projection. As output you'll get an array with two elements: the longitude and the latitude.
That worked fine.

Frame listener in QmlOgre lib freezes window

I'm a newbie to Ogre3D and I need help on a certain point!
I'm trying a library that mixes the Ogre3D engine and QML:
http://advancingusability.wordpress.com/2013/08/14/qmlogre-is-now-a-library/
This library works fine when you want to draw some objects and rotate or translate objects that were already set up in an initialization step:
void initialize() {
    // we only want to initialize once
    disconnect(this, &ExampleApp::beforeRendering, this, &ExampleApp::initializeOgre);

    // start up Ogre
    m_ogreEngine = new OgreEngine(this);
    m_root = m_ogreEngine->startEngine();
    m_ogreEngine->setupResources();

    m_ogreEngine->activateOgreContext();
    // draw a small cube
    new DebugDrawer(m_sceneManager, 0.5f);
    DrawCube(100, 100, 100);
    DebugDrawer::getSingleton().build();
    m_ogreEngine->doneOgreContext();

    emit ogreInitialized();
}
But if you want to draw on or change the scene after this initialization step, it is problematic!
In plain Ogre3D (without the QmlOgre library), you would use a FrameListener,
which hooks into the rendering thread and allows a repaint of your scene.
But here we have two OpenGL contexts: one for Qt and one for Ogre.
So if you try to add the usual code:
createScene();
createFrameListener();

// the render loop
m_root->startRendering();

//createScene();
while (true)
{
    Ogre::WindowEventUtilities::messagePump();
    if (pRenderWindow->isClosed())
        std::cout << "pRenderWindow close" << std::endl;
    if (!m_root->renderOneFrame())
        std::cout << "root renderOneFrame" << std::endl;
}
the app freezes! I know that startRendering is a render loop itself, so the loop below it never gets executed.
But I don't know where to put those lines or how to fix this part.
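One common workaround is to skip startRendering entirely and drive renderOneFrame from the Qt event loop. This is only a sketch, not code from the QmlOgre library: it assumes it lives in a QObject subclass such as ExampleApp, with m_root and m_ogreEngine set up as in initialize() above:
#include <QTimer>

// Sketch: render one Ogre frame per timer tick instead of calling
// m_root->startRendering(), which blocks the Qt event loop.
QTimer* renderTimer = new QTimer(this);
connect(renderTimer, &QTimer::timeout, this, [this]() {
    Ogre::WindowEventUtilities::messagePump();
    m_ogreEngine->activateOgreContext(); // make Ogre's GL context current
    m_root->renderOneFrame();
    m_ogreEngine->doneOgreContext();     // hand the GL context back to Qt
});
renderTimer->start(16); // roughly 60 FPS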
I've also tried adding a background buffer and swapping them:
void OgreEngine::updateOgreContext()
{
    glPopAttrib();
    glPopClientAttrib();

    m_qtContext->functions()->glUseProgram(0);
    m_qtContext->doneCurrent();
    delete m_qtContext;

    m_BackgroundContext = QOpenGLContext::currentContext();

    // create a new shared OpenGL context to be used exclusively by Ogre
    m_BackgroundContext = new QOpenGLContext();
    m_BackgroundContext->setFormat(m_quickWindow->requestedFormat());
    m_BackgroundContext->setShareContext(m_qtContext);
    m_BackgroundContext->create();
    m_BackgroundContext->swapBuffers(m_quickWindow);
    //m_ogreContext->makeCurrent(m_quickWindow);
}
but I get the same error:
OGRE EXCEPTION(7:InternalErrorException): Cannot create GL vertex buffer in GLHardwareVertexBuffer::GLHardwareVertexBuffer at Bureau/bibliotheques/ogre_src_v1-8-1/RenderSystems/GL/src/OgreGLHardwareVertexBuffer.cpp (line 46)
I'm very stuck and don't know what to do.
Thanks!

Multipass forward renderer in DirectX10

I am implementing a forward renderer with DirectX 10. I want it to handle an unlimited number of lights so I can later compare its performance with a deferred renderer. The algorithm I am using is: for every object, for every light -> set light, draw object. Using additive blending, I render the object once per light, summing the contribution of every light on it. Everything works using additive blending with depth writes disabled. The problem is that with this simple approach different objects get blended together (because depth writes are disabled), while I want a single object to be blended with the different light contributions on it, but still occlude other objects behind it. How can I do this? Is a Z pre-pass the solution? Any suggestion will be very appreciated. Thanks.
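In code, the loop described above is just the following (a sketch; scene, setLightConstants and drawObject are hypothetical helpers, not part of my renderer):
// Sketch of the per-object, per-light multipass loop described above.
for (const Object& obj : scene.objects)
{
    for (const Light& light : scene.lights)
    {
        setLightConstants(light); // upload this light's parameters
        drawObject(obj);          // additive blend accumulates the contribution
    }
}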
These are the blend and depth/stencil states I use in my HLSL effect:
DepthStencilState NoDepthWritesDSS
{
    DepthEnable = true;
    DepthWriteMask = Zero;

    StencilEnable = true;
    StencilReadMask = 0xff;
    StencilWriteMask = 0xff;
    FrontFaceStencilFunc = Always;
    FrontFaceStencilPass = Incr;
    FrontFaceStencilFail = Keep;
    BackFaceStencilFunc = Always;
    BackFaceStencilPass = Incr;
    BackFaceStencilFail = Keep;
};

BlendState BlendingAddBS
{
    AlphaToCoverageEnable = false;
    BlendEnable[0] = true;
    SrcBlend = ONE;
    DestBlend = ONE;
    BlendOp = ADD;
    SrcBlendAlpha = ZERO;
    DestBlendAlpha = ZERO;
    BlendOpAlpha = ADD;
    RenderTargetWriteMask[0] = 0x0F;
};
There are several options for handling multiple lights. If you want to implement it with multipass rendering, a depth pre-pass is your best option: render depth only first, then draw the lighting passes with a LESS_EQUAL comparison and depth writes disabled in your depth state (a sketch of the states follows below).
This approach will most likely be quite inefficient with a high number of lights/objects, though.
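A minimal sketch of the two depth-stencil states involved, created on the C++ side rather than in the effect file (g_device is a hypothetical ID3D10Device*; color writes for the pre-pass would additionally be masked off via a blend state):
// Sketch: depth pre-pass + additive lighting passes in D3D10.
D3D10_DEPTH_STENCIL_DESC dsDesc = {};

// Pass 0: depth pre-pass - write depth, strict LESS test.
dsDesc.DepthEnable    = TRUE;
dsDesc.DepthWriteMask = D3D10_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc      = D3D10_COMPARISON_LESS;
ID3D10DepthStencilState* prePassDSS = NULL;
g_device->CreateDepthStencilState(&dsDesc, &prePassDSS);

// Passes 1..N: one pass per light - depth writes off, LESS_EQUAL so
// fragments matching the pre-pass depth survive while hidden ones are rejected.
dsDesc.DepthWriteMask = D3D10_DEPTH_WRITE_MASK_ZERO;
dsDesc.DepthFunc      = D3D10_COMPARISON_LESS_EQUAL;
ID3D10DepthStencilState* lightPassDSS = NULL;
g_device->CreateDepthStencilState(&dsDesc, &lightPassDSS);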
I recommend this article, which explains how to render several lights and includes several interesting implementations. The compute-tile version will not work in DirectX 10, but the geometry sprite version can easily be ported (I have a DX9 version of it).
If you still want forward rendering, there's also the light-indexed technique; an implementation example is here.