I'm programming a museum app and I'd like to display a 3D model that responds to user touches, like pinch to zoom or moving around the model. I've searched a lot, but all I've found is game engines that seem very complicated for this. Is there any way to import models (in whatever format they have), display them, and make them touch-responsive? Open-source code (or an open-source engine) would be better, since I'd prefer to keep the app free rather than paid. Many thanks!
Update: right now I'm able to load the 3D model using Cocos3D, but as I've said in an answer, the model I can load is very low-poly. It's an app for a museum, so I need a much more detailed model. I'm using the standard Cocos3D template project that shows the animated "hello world"; I just changed the .pod file to load the one I want and started adding a few modifications to support user touch interaction. I have to reduce the original polygon count by about 80% for the model to load (this is how a small part of the model looks). If I try to load the model with only about a 50% reduction (which looks great), the app crashes and gives me this crash log:
    *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'OpenGL ES 1.1 supports only GL_UNSIGNED_SHORT or GL_UNSIGNED_BYTE types for vertex indices'
    *** First throw call stack:
    (0x22cc012 0x1ca9e7e 0x22cbe78 0x173ff35 0x1b550f 0x186751 0x180a81 0x17b750 0x11de32 0x1270d4 0x1263ac 0x14f1a2 0x13ca01 0x14ee02 0x14d45e 0x14d3c2 0x14bb22 0x14a452 0x14efcc 0x14d493 0x14d3c2 0x1643e3 0x162a41 0x10c197 0x10c11d 0x10c098 0x3d79c 0x3d76f 0x85282 0x16e9884 0x16e9737 0x8b56f 0xc4192d 0x1cbd6b0 0x505fc0 0x4fa33c 0x4fa150 0x4780bc 0x479227 0x51bb50 0xbef9ff 0xbf04e1 0xc01315 0xc0224b 0xbf3cf8 0x2fd4df9 0x2fd4ad0 0x2241bf5 0x2241962 0x2272bb6 0x2271f44 0x2271e1b 0xbef7da 0xbf165c 0x1ca506 0x2a55)
    libc++abi.dylib: terminate called throwing an exception
    (lldb)
It can't load all the polygons and crashes (I suppose the mesh has more vertices than 16-bit GL_UNSIGNED_SHORT indices can address, i.e. more than 65,536). Is there any solution for that? Or must I start looking for another way to load the model? If you want more information, just ask. Thanks.
I used Cocos3D to import an Earth model and rotate it according to the gestures made by the user. You can give it a look; it's not a complex thing to do.
Have a look at this post for some sample code on loading the model. For handling rotation, I found this post very useful.
How are we supposed to implement the environment's render method in gym so that Monitor's produced videos are not black (as they appear to me right now)? Or, alternatively, under which circumstances would those videos be black?
To give more context: I was trying to use gym's Monitor wrapper. This wrapper writes some .json files and an .mp4 file to a folder (every once in a while; how often exactly?), which I suppose represents the trajectory followed by the agent (which trajectory exactly?). How is this .mp4 file generated? I suppose it's generated from what the render method returns. In my specific case, I am using a simple custom environment (a very simple grid world/maze), where I return a NumPy array that represents my environment's current state (or observation). However, the produced .mp4 files are black, while the array clearly is not black (I am also printing it with matplotlib's imshow). So maybe Monitor doesn't produce those videos from the render method's return value. How exactly does Monitor produce those videos?
(In general, how should we implement render so that we can produce nice animations of our environments? Of course, the answer also depends on the type of environment, but I would like some guidance.)
This might not be an exhaustive answer, but here's how I did it.
First I added rgb_array to the render.modes list in the metadata dictionary at the beginning of the class.
If you don't have such a thing, add the dictionary, like this:
    class myEnv(gym.Env):
        """ blah blah blah """
        metadata = {'render.modes': ['human', 'rgb_array'],
                    'video.frames_per_second': 2}
        ...
You can change the desired framerate, of course; I don't know whether every framerate will work, though.
Then I changed my render method. Depending on the input parameter mode: if it is rgb_array, the method returns a three-dimensional NumPy array, which is just a 'numpyfied' PIL image (np.asarray(im), with im being a PIL.Image).
If mode is human, just display the image or do whatever shows your environment the way you like it.
As an example, my code is:
    import numpy as np
    import matplotlib.pyplot as plt

    def render(self, mode='human', close=False):
        # Render the environment to the screen
        im = <obtain image from env>  # e.g. a PIL.Image of the current state
        if mode == 'human':
            plt.imshow(np.asarray(im))
            plt.axis('off')
        elif mode == 'rgb_array':
            return np.asarray(im)
So basically, return an RGB matrix.
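For completeness, here's a minimal sketch of how the wrapped environment might be driven (the environment id 'MyMazeEnv-v0' is just a placeholder for your custom grid world); as far as I can tell, Monitor grabs its frames by calling env.render(mode='rgb_array') internally:

    import gym
    from gym import wrappers

    env = gym.make('MyMazeEnv-v0')  # placeholder id for your custom env
    env = wrappers.Monitor(env, './videos', force=True)

    obs = env.reset()
    done = False
    while not done:
        # each step can contribute a frame to the episode's .mp4
        obs, reward, done, info = env.step(env.action_space.sample())
    env.close()  # finalizes the .mp4 and .json files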
Looking at the gym source code, it seems there are other ways that work, but I'm not an expert in video rendering, so I can't help with those other ways.
Regarding your question "how often exactly [are the videos saved]?", I can point you to this link, which helped me with that.
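In short, as far as I understand gym 0.18, Monitor records episodes on a capped cubic schedule by default (episodes 0, 1, 8, 27, ... and then every 1000th), and you can override this with the video_callable argument. A minimal sketch:

    from gym import wrappers

    # record every 10th episode instead of the default cubic schedule
    env = wrappers.Monitor(env, './videos', force=True,
                           video_callable=lambda episode_id: episode_id % 10 == 0)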
As a final side note, video saving with a gym Monitor wrapper is broken by a mis-indentation in gym itself (as of today, 30/12/20, gym version 0.18.0); if you want to fix it, do what this guy did.
(I'm sorry if my English sometimes feels weird; feel free to harshly correct me.)
I've been trying to make a Unity game that uses the Xbox One Kinect (V2).
I followed the instructions in this tutorial:
http://www.imaginativeuniversal.com/blog/2015/03/27/unity-5-and-kinect-2-integration/
There are two sample scenes in this zip file: (1) KinectView and (2) GreenScreen.
When I run the first sample (KinectView), the image looks warped, like the right part of the screenshot below:
When I run the second sample (the GreenScreen scene) I get a Null frame error:
Now, I'm not really concerned about the warping issue in the first scene (KinectView). I am concerned with the Background Removal feature in the second scene (GreenScreen). All I need is to see myself clipped against a custom background.
Can anyone help me figure out how to fix this NULL MSFR Frame issue?
I have uploaded the zipped project in case anyone is interested:
https://www.sendspace.com/file/j2ftqz
Thank you very much.
Update:
I have been messing with some of the shader options in the Inspector and noticed that all of them work except the DX11\GreenScreenShader one. Some of them look like a normal video capture; others are better lit (additive/multiply/alpha blend/etc.).
Why is it that the DX11\GreenScreenShader option is the only one that does not work, and instead shows nothing more than a pink square?
Screenshot below.
Open the shader in question in your favorite text editor and change these two lines:
1) Line 15: change Texture2D _MainTex; to UNITY_DECLARE_TEX2D(_MainTex);
2) Line 59: change o = _MainTex.Sample(SampleType, i.tex); to o = UNITY_SAMPLE_TEX2D(_MainTex, i.tex);
Updating the shaders as shown above will solve the issue you describe.
Secondary Reference/Source: https://forum.unity.com/threads/kinect-v2-0-sdk-green-screen-demo-for-unity-3d-not-working-why.467687/
I'm new to using Ogre and especially Recast/Detour, and I need a little help.
I'm loading a terrain in Ogre and creating a navigation mesh over the top of it with Recast/Detour. I want to load more complex terrains, because right now I can only load .mesh files, which as far as I know can't contain other objects like buildings. I can think of two ways to do this:
1) Export the .obj files to .scene files with Blender, then use a third-party .scene loader, like DotScene, to load them into Ogre. I'd then have to figure out how to get Recast to create the navigation mesh on top of a whole scene.
2) Use Ogre's new terrain loading system, which I haven't read up on much yet.
So if you've worked on a project that uses Ogre and Recast/Detour, how did you accomplish the loading of your terrains and creation of your navigation meshes?
EDIT:
I found a third option that lets me keep my current solution while also loading complex terrains: I figured out a way to combine Ogre meshes into one giant mesh file using Blender. I can still load the terrain as a .scene, but the navmesh creation procedure does not work with entities loaded that way, whereas a single giant mesh can use the same functionality I already had.
I have no experience with Recast or Detour, hence I cannot really comment on your question, but I can point you to OgreCrowd, an open-source project that works with Ogre::Terrain plus Recast/Detour. It might provide some inspiration/ideas/pointers:
Ogre Forum Thread: OgreCrowd - a crowd component for Ogre using Recast/Detour
The corresponding video shows that it can handle terrain plus additional objects on top of it, so it matches your scenario.
I'm using matplotlib housed in a wxPython panel to do some heavy-duty plotting. My issue comes with the native panning tool: it appears as though matplotlib tries to constantly redraw the canvas as you drag the pan handle around. With the amount of data I'm plotting, this gets really choppy (the data is already optimized with Collections, etc.).
In terms of performance, I think it would be much preferable for the canvas to be drawn just once, when the mouse is released at the end of a pan. I realise this will mean extending the WxAgg NavigationToolbar2 class with my own, but I'm wondering if anyone has attempted something similar and can advise me on which functions to override.
Many thanks.
I've spent a lot of time making modifications to the matplotlib backends. I've never made this specific change, but I can show you one line of code to comment out that will stop the dynamic updating.
I presume you are using the WxAgg backend; if so, open this file: C:\Python27\Lib\site-packages\matplotlib\backends\backend_wx.py
And comment out the line indicated here:
    def dynamic_update(self):
        d = self._idle
        self._idle = False
        if d:
            #self.canvas.draw()  # <--- comment out to stop redrawing during pan/zoom
            self._idle = True
I tested this and it seems to nicely solve your issue. I did some quick digging and didn't see any other functions calling this procedure, so you might even be able to just change it to:
    def dynamic_update(self):
        pass
...which is the same code you'll find in the base NavigationToolbar2 class.
(And of course, if you're happy with this change, you can do a little more work to make your own custom backend with this kind of modification, just to make sure you don't lose the change when upgrading matplotlib.)
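For instance, here's a minimal sketch of such a customization, assuming a matplotlib of that era where NavigationToolbar2 still exposes dynamic_update and release_pan (the class name LazyPanToolbar is just something I made up): it suppresses the per-motion redraws and draws once when the pan ends.

    from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg

    class LazyPanToolbar(NavigationToolbar2WxAgg):
        """Skip intermediate redraws while panning; draw once on release."""

        def dynamic_update(self):
            # suppress the per-motion redraw that makes panning choppy
            pass

        def release_pan(self, event):
            # let the base class finish the pan, then redraw a single time
            NavigationToolbar2WxAgg.release_pan(self, event)
            self.canvas.draw()

You would then construct LazyPanToolbar with your FigureCanvas in place of the stock toolbar.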
So I've been doing some iPhone development with some OpenGL ES in it, but I'm getting a rather weird error when I call prepareToDraw on my effect. My program, in short, simulates dice rolling (I'm trying to learn Objective-C and OpenGL). The program works fine for the most part; I can use everything I've programmed my app to do (with some bugs in the physics, but I'll fix those later). The problem comes in after I've used the part that contains the OpenGL.
The program contains two menus you have to go through in order to reach the screen that uses OpenGL. Once I have used the app's OpenGL part, gone back to the previous menu, and then tried to go back to the OpenGL part again, I get a printout saying GL ERROR: 0x0501. I've narrowed it down to the prepareToDraw method of my effect. The other weird part is that if I go back and then forward again, the OpenGL works again; this can be repeated again and again, with it working and breaking on every second visit to the OpenGL part.
I've been searching around for problems similar to mine, but each time it's got something to do with loading textures that are not power-of-two textures, which doesn't help me because I'm not even using textures yet, just colored vertices.
I've pastebinned the two code files where the problem should lie:
Dice.m: http://pastebin.com/ze1DEEzs
In the draw method you'll see that my printouts have narrowed down where the problem lies: the prepareToDraw call (line 308).
RollViewController.m: http://pastebin.com/VycwAh3R
This file is where I set up the effect and the context, etc., so I must be doing something wrong in here to cause the prepareToDraw method to fail every second time I run the OpenGL part of the program. I have a feeling it has something to do with not releasing some resource tied to the context and the effect, but I can't find anything about deleting a context or an effect (probably because you don't need to, but I'm not sure).
I hope there is someone out there who has run into the same problem and can answer my question, and I hope it's not just a silly mistake, because I've been trying to solve this for a while now :)
Thanks.
After much pain and suffering, I finally found a fix for the problem. I'm not exactly sure why this is a problem, but creating the context within the OpenGL part (i.e. RollViewController.m) is not the way to do it. Instead, you should create it once for the lifetime of your program and just set the current context of your GLKView to the context you have made. Maybe someone can enlighten me as to why recreating the context is a bad idea.
In my code I have a profile object that gets passed around between views and menus so that they can all work with the same data, so I just defined a public context on my profile object so that everything can access that context instead of creating its own (and breaking).
The error seems general and might be caused by different issues. I avoided it (for now) by not using mipmaps. I commented out the following code I had, and I don't have the error anymore:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);                // linear magnification
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST); // mipmapped minification
    glGenerateMipmap(GL_TEXTURE_2D);                                                 // build the mipmap chain