Like many others, I've been on a crazy search for a good OpenGL ES 2.0 tutorial. I seem to have tried every one available, but I'm still not confident. I know that 1.1 has a fixed pipeline, whereas 2.0 leaves everything up to the programmer. So I thought I might learn lights, textures and related concepts from 1.1 tutorials, and then implement all the stuff that's not provided as ready-made functions manually using shaders. Do you think I'm on the right track, or should I stop?
Your question is a little more subjective, so my answer will necessarily contain personal opinions, but I believe that it is no longer useful to start learning OpenGL ES with 1.1 and that people should start with 2.0. On iOS, the number of active devices in the field that can't handle 2.0 is estimated to be somewhere less than 5% (based on sales, the highest this number could possibly be is 16%). I can't speak to Android statistics, because I've had a harder time pinning down numbers there due to the diversity of available hardware, but that also appears to be very much in 2.0's favor.
When it comes to the technical side of things, I don't think there is much to be gained by learning 1.1 first, because you'll end up tossing a lot of that out the window when moving to 2.0. There isn't much of a point in learning the specific API calls to set up lights and material properties in 1.1 when you'll have to create your own shader programs to do this in 2.0. You don't really gain any insight into how these lights, etc. work in the fixed function pipeline of 1.1, because that just acts like a black box. When it came time for me to move to 2.0, I found that I didn't really understand what had been calculated for me in 1.1, so I didn't gain much from experience with the older API.
The things that are the same between the two can be learned just as well from starting with 2.0.
One large advantage that 1.1 has over 2.0 is the availability of sample code and tutorial material. However, that's getting better over time as people migrate to the new API, and I describe some of the best material I've found on the topic (with an emphasis on iOS) in this answer. As I indicate there, I taught classes on both 1.1 and 2.0 that can be found for free on iTunes U. The course notes for that link to several simple 2.0 examples I assembled to demonstrate how to replicate 1.1 capabilities. I also talk about how to transition some 1.1 code to 2.0 in this answer (the PowerVR reference I link to there is particularly good for showing how 1.1 effects can be generated in 2.0 shaders).
Personally, I feel much more at home with OpenGL ES 2.0 than I ever did with 1.1, in large part because I feel the 2.0 API is much simpler. I don't have to remember all of the commands to set up my scene in just the right way, and what I can and cannot do with the API. I just write simple GLSL code to do what I want.
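For a flavor of what "simple GLSL code" means here, below is a minimal sketch of a shader pair that reproduces 1.1-style per-vertex diffuse lighting, held in Python strings the way you might pass them to glShaderSource. The attribute and uniform names are my own placeholders, not from any particular tutorial.

    # GLSL ES 2.0 sources for simple per-vertex diffuse lighting -- roughly
    # what glLightfv/glMaterialfv set up for you in OpenGL ES 1.1.
    VERTEX_SHADER = """
    attribute vec4 a_position;
    attribute vec3 a_normal;
    uniform mat4 u_mvpMatrix;       // combined modelview-projection matrix
    uniform mat3 u_normalMatrix;    // inverse transpose of the modelview matrix
    uniform vec3 u_lightDirection;  // normalized, in eye space
    varying float v_diffuse;
    void main() {
        vec3 n = normalize(u_normalMatrix * a_normal);
        v_diffuse = max(dot(n, u_lightDirection), 0.0);
        gl_Position = u_mvpMatrix * a_position;
    }
    """

    FRAGMENT_SHADER = """
    precision mediump float;
    uniform vec4 u_materialColor;
    varying float v_diffuse;
    void main() {
        gl_FragColor = vec4(u_materialColor.rgb * v_diffuse, u_materialColor.a);
    }
    """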
I want to create an application which converts 2D images/video into a 3D model. While researching this I found similar applications like Trnio, Scann3D, Qlone, and a few others (though some of them produce poor-quality 3D models). I also found out about a technology from Microsoft Research called MobileFusion, which showed the same vision I have for my application, but none of the apps I found were like that.
Creating a 3D modelling app is a complex task, and achieving it to a high standard requires a lot of studying. To point you in the right direction: you most likely want to perform something called Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM).
If you want to program this yourself, OpenCV is a good place to start if you know C++ or Python. A typical pipeline involves feature extraction and matching, camera pose estimation, triangulation, and finally refinement with bundle adjustment. All pipelines for SfM and SLAM follow these general steps (with exceptions, of course). All of these steps are possible in OpenCV, and Google's Ceres Solver is an excellent open-source bundle adjuster. SfM generally goes on to dense matching, which is where you get very dense point clouds that are good for creating meshes. A free open-source pipeline for this is OpenSfM. Another good source of tools is OpenMVG, which has everything you need to build a full pipeline.
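To make the first three steps concrete, here is a hedged two-view sketch in Python with OpenCV: feature matching, pose recovery from the essential matrix, and triangulation. The image paths and the intrinsic matrix K are placeholders you would replace with your own images and calibration.

    import cv2
    import numpy as np

    # Placeholder intrinsics -- replace with your camera's calibration.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

    # 1. Feature extraction and matching
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # 2. Camera pose estimation from the essential matrix
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # 3. Triangulate matched points into a sparse 3D point cloud
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    points4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    points3d = (points4d[:3] / points4d[3]).T  # N x 3 coordinates

    # 4. A real pipeline would now refine poses and points with bundle
    # adjustment (e.g. Ceres) and move on to dense matching.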
SLAM is similar to SfM; however, it has more of a focus on real-time applications and less on absolute accuracy. Applications for it are centred more on robotics, where a robot wants to know where it is relative to its environment but is not so concerned with absolute accuracy. The top SLAM algorithms are ORB-SLAM and LSD-SLAM. Both are open source and free for you to incorporate into your own software.
So really it depends on what you want: SfM for high accuracy, SLAM for real time. If you want a good 3D model, I would recommend using existing algorithms, as they are very good.
The best commercial software, in my opinion, is Agisoft PhotoScan; if you can make anything half as good as it, I'd be very impressed. As for what resources you will require: Python/C++ skills, the ability to Google well, and the spare time to read up properly on photogrammetry and SfM.
I am a complete beginner in software, and I am asking for a direction to proceed: which technologies should I research to build my app? All I have so far is an idea. I am trying to build something like Zomato but with different services; the idea of a location-based system is similar. I searched online and came to know about GIS systems, but while researching further it seemed I would have to create a map altogether, which feels redundant to build when we have the Google Maps API.
But can I use this API to build a system on top of it?
Any tutorials or pointers in this direction would be helpful.
Also, what is the difference between GIS-based and GPS-based apps? As you can see, I am not very clear on the fundamentals of GIS- and GPS-based apps.
Thanks for the help.
Regarding Android, you have almost all you need by combining the platform API and the comprehensive Google Maps Android API. Regarding the latter, it's really a matter of opting for convenience and possibly paying a licence fee to Google, versus developing your own solution by aggregating free or cheaper services from elsewhere.
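As a tiny illustration of building "on top of" the API rather than making your own maps, here is a hedged Python sketch against the Google Places "nearby search" web service (the web-service sibling of the Android API). The coordinates are arbitrary and YOUR_API_KEY is a placeholder; you would need your own key.

    import requests

    # Find restaurants within 1.5 km of a point -- the Zomato-style primitive.
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/place/nearbysearch/json",
        params={
            "location": "12.9716,77.5946",  # "lat,lng" of the search centre
            "radius": 1500,                 # search radius in metres
            "type": "restaurant",
            "key": "YOUR_API_KEY",          # placeholder
        },
    )
    for place in resp.json().get("results", []):
        print(place["name"], "-", place.get("vicinity"))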
Most problems solved by apps are not the same problems solved by classical GIS software: the former are more consumer-oriented (using public transportation, navigating a route, planning a trip, finding a nearby restaurant), and the latter are more specialist-oriented, typically solving larger-scale and more technical issues (detecting regions with flood risk, monitoring deforestation, calculating volumes of terrain to be bulldozed, etc.).
You should not, IMO, be discouraged by the seemingly hard technical concepts of geography and map making. Your best bet is to have a clear vision of what actual problems your app should be solving, and to study the geography topics gradually, as the need arises.
A few thoughts on your question about GIS:
If it were coined today, the GIS acronym would mean any software dealing with geographic data, be it a mobile app or a workstation software suite intended for specialized professional use.
But when it was coined, the term meant almost exclusively the latter, and so it carries a lot of tradition and cultural legacy, which is of course not always a good thing. Specifically (at least in my experience), the jargon and concepts used by the classic GIS community are a bit impenetrable to newcomers, especially those coming from the software-development field rather than the geo-sciences.
But geographic information availability has gone from scarcity to overwhelming abundance, and so have its enabling technologies: GPS satellites, mobile computing and mobile connectivity.
I'm currently using a Processing Kinect library which supplies a depth map. I was wondering how I could take that and use it to create a 2D skeleton, if possible. Not looking for any code here, just a general process I could use to achieve those results.
Also, given that we've seen this in several of the Kinect games so far, would it be difficult to have multiple skeletons running at once?
Disclaimer: the reason you still haven't gotten an answer to this question is probably that it's a current research problem. So I can't give you a direct answer, but I will try to help with some information and useful resources on the topic.
There are mainly two different approaches to creating a skeleton from a depth map. The first is to use machine learning; the second is purely algorithmic.
For the machine learning approach, you'd need many samples of people performing predetermined movements, and you use those samples to train your favorite learning algorithm. That's the approach Microsoft took and implemented in the Xbox (source); it works really well, BUT you need millions of samples to make it reliable... quite a drawback.
The "algorithmic" approach (i.e., without using a training set) can be done in many different ways and is a research problem. It's often based on modeling the possible body postures and trying to match them with the received depth image. That's the approach chosen by PrimeSense (the people behind the Kinect depth-camera technology) for their skeleton-tracking tool NITE.
The OpenKinect community maintains a wiki where they list some interesting research material about this topic. You might also be interested in this thread on the OpenNI mailing list.
If you're looking for an implementation of a skeleton tracking tool, PrimeSense released NITE (closed source), the one they made: it's part of the OpenNI framework. That's what's used in most of the videos you might have seen that involve skeleton tracking. I think it's able to handle up to 2 skeletons at the same time, but that requires confirmation.
The best solution is to use FAAST (http://projects.ict.usc.edu/mxr/faast/) which requires OpenNI. I have struggled to get OpenNI to work on my computer. I have not seen an approach yet using Code Laboratories' CL NUI.
An algorithmic approach is http://code.google.com/p/skeletonization/, but you may run into a problem because your depth map only represents surfaces, not closed objects.
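As a toy illustration of that algorithmic route, here is a hedged Python sketch: segment a depth band that presumably contains the person, then thin the silhouette with scikit-image. Note this yields a thinned silhouette, not labelled joints; real trackers fit a body model, but it shows the flavour of working on the depth image directly. The depth-band values are arbitrary placeholders.

    import numpy as np
    from skimage.morphology import skeletonize

    def depth_to_skeleton(depth_mm, near=500, far=1500):
        """Keep pixels whose depth falls in a band (e.g. a person standing
        0.5-1.5 m from the camera) and thin the silhouette to 1-px lines."""
        mask = (depth_mm > near) & (depth_mm < far)  # crude foreground segmentation
        return skeletonize(mask)                     # boolean skeleton image

    # Stand-in for a real 640x480 Kinect frame of millimetre depths.
    depth_mm = np.random.randint(0, 4000, (480, 640))
    skeleton = depth_to_skeleton(depth_mm)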
I'm a beginner in game development and game programming. I have experience in computer graphics, mainly OpenGL.
Finally, these days I have some spare time to polish my game coding skills.
But when I came to program a simple 3D game, I couldn't find any good resources for free textures and models for 3D graphics (for 2D games, by contrast, I found many resources for sprite sheets and so on).
Is there any good resource you're familiar with for 3d game textures/models?
This is not a programming question.
As far as I know, good, free, high-quality modeling resources do not exist (of "good", "free" and "high-quality", pick two).
There are multiple free model repositories, but the quality of their content is generally poor, and there are a few places where you can buy models.
There are free textures in multiple places (like this one), and they are easier to find than good free models.
Also, most free content comes with some kind of catch: "non-commercial use only", "Creative Commons Share Alike" (i.e. if you make a derivative, it must use the same license), or it is under the GPL.
Anyway, if you're okay with Creative Commons Share Alike or the GPL, then you can probably use content from some open-source games (OpenArena), get quite a lot of textures from Wikipedia or Wikimedia Commons and Flickr, and google for "free textures". You should be careful about using content from open-source games, though: some open-source projects (like war$ow and Sauerbraten) use closed-source/restricted licenses for game content (i.e. you're free to reuse and modify the engine, but you cannot modify the game content or use it with a modified engine; the reasons are pretty obvious).
Anyway, it depends on what kind of model you want. It is pretty easy to find "easy" stuff like boxes, barrels, etc., because everyone can make those. When it comes to guns and vehicles, there will be trouble: quality drops and the number of good models decreases. And if you want a fully rigged, animated character with multiple animations, you can normally forget about it; such content is almost impossible to find. You can probably use mods for Q3 and Q2 if you want characters (though you can forget about physics in that case).
I'd recommend forgetting about "free stuff" and trying to make the content yourself, or hiring someone to do it.
If you decide to make content yourself, you'll need a digital camera and (optionally) a graphics tablet. You can make mediocre textures from photos (digital cameras are cheap) using GIMP, the gimp-resynthesizer plugin, the gimp-texturize plugin, high-pass filters, etc. You can also make normal maps using Blender or GIMP, and there are even tutorials about extracting them from photos (you will still need to process them by hand). Modeling and animation can be done in Blender, after one or two weeks of training, using reference photos. Low-poly modeling is pretty quick (20 minutes for a low-poly, low-quality gun, an hour or two for a simple character), but texturing and animation take longer: rigging a character can take a few hours for an amateur, making one animation at least several hours as well, unwrapping the texture an hour, and painting the texture up to a few days, depending on the quality you want, the available reference material, and whether you have a graphics tablet. It is possible to cut corners a bit; for example, for making animations you can film the motion with a photo or video camera and then use the footage for rotoscoping. Also, you'll need to find some model format Blender can export to, or you'll have to write an export plugin in Python.
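For the texture-from-photo step, here is a hedged Python/Pillow sketch of the high-pass trick mentioned above: subtract a heavy blur to flatten uneven lighting, then wrap the image around to inspect the tiling seams you would retouch in GIMP. The file names and blur radius are placeholders.

    from PIL import Image, ImageChops, ImageFilter

    photo = Image.open("brick_wall_photo.jpg").convert("RGB")

    # High-pass: subtract the low frequencies and re-centre on mid grey,
    # removing the lighting gradients that would break tiling.
    low = photo.filter(ImageFilter.GaussianBlur(radius=40))
    flat = ImageChops.subtract(photo, low, scale=1.0, offset=128)

    # Shift by half the size so the former edges meet in the middle;
    # any visible seams are what you retouch next.
    w, h = flat.size
    ImageChops.offset(flat, w // 2, h // 2).save("seam_check.png")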
The Blender Foundation has a large model repository which may be of use.
There are some free models at Turbosquid that I use sometimes for my XNA games.
But of course, the best stuff is not free.
My experience is that there is very little in the way of quality 3D models with animation and full rigging freely available. There are a few companies like this one that sell suitable models cheaply, and I guess most hobbyists could afford one or two models from them fairly easily, which would probably be sufficient for learning. (I have no connection to them, but I did buy one model pack from them which I quite liked.)
It would be nice if there were a few more freely available animated 3D models around, though. I even think it might be in the interest of some of the companies that make them to give a few away. If I'd been able to get further in my hobby projects, I might have spent £100-200 in total on some nice model packs to make them better, but due to the lack of any real animated 3D models I ended up losing interest in all my 3D projects before I got to the point of being willing to spend money on the hobby. I wonder if the availability of a few more free quality models would significantly increase the size of the market for those companies, as more people got their projects to the point where they were willing to spend a little money.
Some company should make a nice model pack with a few static models and a couple of fully rigged and animated humans and "monsters", and say that if the community donates £10,000 they'll release them for free use. I suspect there are enough people out there who would like a few quality models that they might reach this target, in the same way that Blender was originally sold to the public.
I know it's been a long time since this question was asked, but I ran into the same problem when programming in XNA and found a good solution. As long as you don't need rigged/animated models, Google Warehouse is the best place to search. As far as I know, each model submitted to Google Warehouse is available under a Creative Commons license. You just need to:
Download and install Google Sketchup (Sketchup download)
Browse to find a model (Google Warehouse) - there's a 3D preview for each one!
Get a plugin to export Sketchup models to .X - I recommend the '3D RAD' plugin (3D RAD download)
If your model does not look good after the export, try to separate it into several less complex ones.
You are looking for open game art...
http://thefree3dmodels.com/ has a multitude of free 3D models. I've used a few of these for animation purposes; maybe it'll help you too.
Initial tests indicate that GDI+ (writing in VB.NET) is not fast enough for my purposes. My application needs to be able to draw tens of thousands of particles (coloured circles, very preferably anti-aliased) in a full screen resolution at 20+ frames per second.
I'm hesitant to step away from GDI+ since I also require many of the other advanced drawing features (dash patterns, images, text, paths, fills) of GDI+.
I'm looking for good advice about using OpenGL, DirectX, or other platforms to speed up particle rendering from within VB.NET. My app is strictly 2D.
Goodwill,
David
If you want to use VB.NET, then you can go with XNA or SlimDX.
I have some experience in creating games with GDI+ and XNA, and I can understand that GDI+ is giving you trouble.
If I were you, I'd check out XNA: it's much faster than GDI+ because it actually uses your video card for drawing, and it has a lot of good documentation and examples online.
SlimDX also looks good but I don't have any experience with it. SlimDX is basically the DirectX API for .NET.
The only way to get the speed you need is to move away from software rendering to hardware rendering... and unfortunately that does mean moving to OpenGL or DirectX.
The alternative is to try and optimise your graphics routines to only draw the particles that need to be drawn, not the whole screen/window.
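A minimal sketch of that idea in Python (the function and names are hypothetical; the principle maps onto GDI+'s Control.Invalidate(Rectangle), so only dirty regions repaint):

    # Union of a particle's old and new bounding boxes: the only screen
    # region that actually needs repainting for that particle this frame.
    def dirty_rect(old_xy, new_xy, radius):
        x0 = min(old_xy[0], new_xy[0]) - radius
        y0 = min(old_xy[1], new_xy[1]) - radius
        x1 = max(old_xy[0], new_xy[0]) + radius
        y1 = max(old_xy[1], new_xy[1]) + radius
        return (x0, y0, x1 - x0, y1 - y0)  # x, y, width, height

    print(dirty_rect((10, 10), (14, 12), 3))  # -> (7, 7, 10, 8)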
I would agree with JaredPar that you're better off profiling first to determine if your existing codebase can be improved before making a huge switch to a new framework. DirectX is not the easiest framework if you're unfamiliar with it.
The most significant speed increase I found, when writing a game maker with GDI+, was to convert my bitmaps to Format32bppPArgb:
' Premultiplied-alpha format that GDI+ can blit without per-pixel conversion.
SuperFastBitmap = ConvertImagePixelFormat(SlowBitmap, Imaging.PixelFormat.Format32bppPArgb)
If they are not in this format already, you'll see the difference immediately when you convert.
It's possible the problem is in your algorithm and not GDI+. Profiling is the only way to know for sure. Without a profile it's very possible you will switch to a new GUI framework and hit the exact same problems.
If you did profile, what part of GDI+ was causing a problem?
As Jared said, it could be that a significant fraction of your cycles are not going into GDI, and you might be able to reduce those.
A simple way to find those is to halt it at random a few times and examine the stack. The chance that you will catch it in the act of wasting time is equal to the fraction of time being wasted.
Any instruction or call instruction that appears on more than one such sample is something that, if you could replace it, you would see a speedup.
In general, the method is: pause the program at random a number of times, look at the stack each time, and attack whatever shows up on more than one sample.
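The same random-pausing idea can be sketched in a few lines of Python (the sampling interval and the demo workload are arbitrary): a background thread snapshots the main thread's stack at intervals, and whatever keeps showing up is where the time goes.

    import collections, sys, threading, time, traceback

    samples = collections.Counter()

    def sampler(main_thread, interval=0.05):
        # Periodically grab the main thread's current stack and record
        # the innermost function -- a poor man's sampling profiler.
        while main_thread.is_alive():
            frame = sys._current_frames().get(main_thread.ident)
            if frame is not None:
                top = traceback.extract_stack(frame)[-1]
                samples["%s (%s:%d)" % (top.name, top.filename, top.lineno)] += 1
            time.sleep(interval)

    def busy_work():  # stand-in for your particle-drawing loop
        return sum(i * i for i in range(10_000_000))

    threading.Thread(target=sampler,
                     args=(threading.main_thread(),), daemon=True).start()
    busy_work()
    for line, count in samples.most_common(5):
        print(count, line)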
As you're working in VB.NET, have you tried using WPF (part of .NET since 3.0)? As WPF is based on DirectX rather than GDI+, it should give you the speed you need, although developing in WPF is not straightforward at all.
GDI+ is slow to render because it is not accelerated by the graphics card; it does all of its rendering on the CPU. Instead, you can use DirectX or SlimDX.
See this: http://msdn.microsoft.com/en-us/library/windows/desktop/ff729480%28v=vs.85%29.aspx
http://www.codeproject.com/Articles/159586/Starting-DirectX-with-Visual-Basic-NET