3D Java library - java-3d

As a beginner, I've used Java to make a 2D object.
Now I want to make a 3D object, but the editor application I use does not support the Java 3D libraries. Roughly, how can I incorporate 3D Java libraries into the application I use?
I use JCreator and BlueJ as my Java editors.

Java doesn't come with any 3D libraries built in, so you're going to have to download and install one yourself. You can get the Java 3D API here. Here's a rough tutorial to take a look at as well; this may help too. Just as a heads-up, there is far more math involved in 3D programming than there is in 2D. Make sure you have a solid grasp of how 2D graphics work before you jump into 3D. Good luck, have fun.
Also, if you're looking into creating games in Java, then I would suggest LWJGL (the Lightweight Java Game Library).

Related

How do I convert OpenGLES shaders to Metal compatible ones?

I have a project which uses about two dozen .vsh and .fsh files to draw 2D tiles using OpenGL ES. Since that is deprecated, I want to convert my project to Metal. My head is now swimming with the vocabulary and techniques involved in both systems; graphics is not my forte.
Can I use OpenGL ES to compile the .vsh/.fsh files, and then save them in a Metal-compatible format? The goal would be to then use the saved information in a Metal-centric world and remove all the OpenGL ES code from the project. I've spent a few days on this already, and I still don't understand the processes well enough to fully attempt the transition to Metal. Any/all help is appreciated.
I saw this: "On devices that support it, the GLSL code you provide to SKShader is automatically converted to Metal shading language and run on a Metal renderer" (from "OpenGL ES deprecated in iOS 12 and SKShader"), which leads me to believe there is a way to get this done; I just don't know where to begin.
I have also seen "Convert OpenGL shader to Metal (Swift) to be used in CIFilter", and if it answers my question, I don't understand how.
I don't think "OpenGL ES and OpenGL compatible shaders" answers it either.
Answers/techniques can use either Objective-C or Swift; the existing code is Objective-C, and the rest of the project has been converted to Swift 5.
There are many ways to do what you want:
1) You can use MoltenGL to seamlessly convert your GLSL shaders to MSL.
2) You can use open-source shader cross-compilers such as krafix, pmfx-shader, etc.
That said, based on my experience you will usually get better performance if you rewrite the shaders yourself.
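To give a sense of what a manual rewrite involves, here is a minimal, hypothetical sketch: a trivial GLSL ES fragment shader for textured 2D tiles next to a hand-written Metal Shading Language equivalent (MSL is a C++-based language). The names tile_fragment, VertexOut and the texture/sampler indices are invented for illustration; your real .vsh/.fsh files will differ.

    // Hypothetical original GLSL ES fragment shader (tile.fsh), for comparison:
    //
    //   varying highp vec2 vTexCoord;
    //   uniform sampler2D uTexture;
    //   void main() { gl_FragColor = texture2D(uTexture, vTexCoord); }
    //
    // A hand-written MSL equivalent (goes in a .metal file):

    #include <metal_stdlib>
    using namespace metal;

    struct VertexOut {
        float4 position [[position]]; // clip-space position from the vertex shader
        float2 texCoord;              // interpolated texture coordinate
    };

    fragment float4 tile_fragment(VertexOut        in          [[stage_in]],
                                  texture2d<float> tileTex     [[texture(0)]],
                                  sampler          tileSampler [[sampler(0)]])
    {
        // Equivalent of texture2D(uTexture, vTexCoord) written to gl_FragColor
        return tileTex.sample(tileSampler, in.texCoord);
    }

The parts you translate by hand are mostly bookkeeping: the explicit argument bindings ([[texture(n)]], [[sampler(n)]], [[buffer(n)]]) and the struct passed between the vertex and fragment stages, which is exactly what the cross-compilers above automate.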

What iOS 3d engine to use for very simple 3d scene?

I am planning to develop an iOS app and would love to have some very simple 3D objects the user could interact with, either by gestures or the gyro. I don't need complex animations or game logic. I have developed simple apps but have never used 3D. Which frameworks would be best for such a task, and does it take a lot to learn and implement?
Cocos engine. Google cocos2D but look for the 3D information. Cheers
Using OpenGL ES with GLKit seems to benefit you the most here.
I started to use NinevehGL (http://nineveh.gl). It is a very simple framework to load and handle 3D objects for iOS.
I guess OpenGL can fit your needs.

Need to add an interactive 3D model to my otherwise non-3D app

As briefly as I can: are there any frameworks available that I can drop into an iPad app I'm working on, along with a 3D model, and that will allow me to add a view presenting the model in an interactive format?
Model needs to be rotatable, and ideally I would like to be able to add interactive points on to the model that pop up modal views when tapped.
I have never worked with 3D before in any respect so I'm coming at that part as a complete novice. The 3D model is being supplied to me and will be available in "various formats". The rest of the app is pure Objective-C in which I'm proficient enough.
I have Googled and Googled and have come up with nothing so far.
Failing there being any drop-in frameworks, how much of a challenge is it likely to be to get myself up to speed with what I would need to know? Are there any good starting points to expand my knowledge here?
3D is a complex matter; if you don't see your future dealing with it on a regular basis, I wouldn't recommend writing your own solution for it.
The closest you can find to a drag-and-drop framework would be the SDK of the iPhone/iPad GPU's manufacturer. It's pretty easy to use.
PowerVR SDK Download
After a free registration on their website, you can download the SDK, which contains lots of samples with source code. Their framework displays 3D models in their own POD format, which is of course heavily optimized for iOS devices. Ask your 3D model provider to give you the models in POD format (you can find POD converters/exporters for Maya etc. on PowerVR's website as well).

OpenKinect Maturity

I'm interested in writing some homebrew code for the Microsoft Kinect console. I have a few applications which I think would translate well to the platform. I've been toying with the idea of giving it a shot using the OpenKinect drivers and libraries. Obviously this would be a lot of work, but I am wondering just how much. Does anyone have experience with OpenKinect? Do you get only the raw video/audio data from the device, or has anyone written higher level abstractions to make common tasks easier?
The OpenKinect library is basically a driver (at least for now), so don't expect many high-level functions from it. You will more or less get the raw data from both the depth and the video cameras.
This is basically an array received in a callback function each time a frame arrives.
You can give it a try by following the instructions provided on the OpenKinect website; it's really quick to install and try, and you can play a bit with the provided glview application to get a feeling for what's possible.
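To make the "array in a callback" idea concrete, here is a rough sketch against libfreenect's C API (usable from C++), loosely modelled on how the bundled examples are structured. Treat the exact mode constants and build flags as approximate and check the headers of the version you install.

    // Rough libfreenect sketch: do something with each raw depth frame.
    // Build (roughly): g++ kinect_depth.cpp -lfreenect
    #include <libfreenect.h>
    #include <cstdio>
    #include <cstdint>

    // Called by libfreenect every time a depth frame arrives.
    static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
    {
        const uint16_t *pixels = static_cast<const uint16_t *>(depth); // 640x480 raw depth values
        std::printf("frame @ %u, center depth = %u\n",
                    timestamp, (unsigned)pixels[240 * 640 + 320]);
    }

    int main()
    {
        freenect_context *ctx = nullptr;
        freenect_device *dev = nullptr;

        if (freenect_init(&ctx, nullptr) < 0) return 1;
        if (freenect_open_device(ctx, &dev, 0) < 0) return 1;   // first Kinect on the bus

        freenect_set_depth_callback(dev, depth_cb);
        freenect_set_depth_mode(dev, freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM,
                                                              FREENECT_DEPTH_11BIT));
        freenect_start_depth(dev);

        // Pump USB events; depth_cb fires once per frame.
        while (freenect_process_events(ctx) >= 0) { /* loop until error / Ctrl-C */ }

        freenect_stop_depth(dev);
        freenect_close_device(dev);
        freenect_shutdown(ctx);
        return 0;
    }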
I've set up a few demos using OpenCV and got pretty cool results, even though I didn't have much background in computer vision, so I can only encourage you to try it yourself!
Alternatively, if you're looking for more advanced functions, the OpenNI framework was just released this week and provides some impressive high-level algorithms such as skeleton tracking and some gesture recognition. Part of the framework consists of proprietary algorithms from PrimeSense (like the powerful skeleton tracking module...). I haven't tried it yet and don't know how well it integrates with the Kinect and the different OSes, but since a bunch of people from different groups (OpenKinect, Willow Garage...) are working hard on it, that shouldn't be an issue within a week.
Elaborating further on what Jules Olleon wrote, I've worked with OpenNI (http://www.openni.org) and the algorithms on top of it (NITE), and I highly recommend using these frameworks. Both frameworks are well-documented and come with numerous samples from which you can start out.
Basically, OpenNI abstracts the lower-level details of working with the sensor and its driver for you, and gives you a convenient way to get what you want from a "generator" (e.g. xn::DepthGenerator for getting the raw depth data). OpenNI is open source and free to use in any application. OpenNI also handles the platform abstraction for you. As of today, OpenNI is supported and works fine on Windows 32/64 and Linux, and is in the process of being ported to OSX. Bindings are available for use in multiple programming languages (C, C++, .NET, Python, and a few others I believe).
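As a rough illustration of the generator idea, a minimal read loop with the OpenNI 1.x C++ API looks roughly like the sketch below. It is modelled on the NiSimpleRead sample that ships with OpenNI; check the headers and samples of your version for the exact calls.

    // Rough OpenNI 1.x sketch: pull raw depth frames through a DepthGenerator.
    #include <XnCppWrapper.h>
    #include <cstdio>

    int main()
    {
        xn::Context context;
        if (context.Init() != XN_STATUS_OK) return 1;        // set up the OpenNI context

        xn::DepthGenerator depth;
        if (depth.Create(context) != XN_STATUS_OK) return 1; // needs a sensor module (e.g. SensorKinect)

        context.StartGeneratingAll();

        for (int i = 0; i < 30; ++i)                         // grab a few frames
        {
            context.WaitOneUpdateAll(depth);                 // block until a new depth frame is ready

            xn::DepthMetaData md;
            depth.GetMetaData(md);                           // md wraps the raw depth map and its metadata
            std::printf("frame %u: %ux%u, center depth = %u mm\n",
                        (unsigned)md.FrameID(), (unsigned)md.XRes(), (unsigned)md.YRes(),
                        (unsigned)md(md.XRes() / 2, md.YRes() / 2));
        }

        context.Release();                                   // Shutdown() in older OpenNI 1.x versions
        return 0;
    }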
NITE has additional interfaces built on top of OpenNI, which give you higher-level results (e.g. tracking a hand point, skeletons, scene analysis, etc.). You'll want to check the subtleties of NITE's license regarding when/where you can use it, but it's still probably the easiest and fastest way to get analysis (e.g. a skeleton) for now. NITE is closed source, so PrimeSense needs to supply a binary version for you to use. Currently, Windows and Linux versions are available.
I haven't worked with OpenKinect, but I've been working with OpenNI and SensorKinect for a few months now for my research. If you are planning to work with raw data from the Kinect, they work great at giving you depth and video (they don't support motor control). I've used them with C++ and OpenGL on both Windows 64-bit and Ubuntu 32-bit with almost no modifications to the code. It's very easy to learn if you know basic C++. Installing it might be a bit of a headache, though.
For more advanced features such as skeleton detection, gesture recognition, etc., I highly recommend using middleware such as NITE with OpenNI, or the ones provided here (Middlewares developed around OpenNI), rather than re-inventing the wheel. NITE is also very easy to use once you have OpenNI working; e.g. joint recognition is something around 10-20 extra lines of code.
Something that I would recommend to my younger self would be to learn and work with a basic game engine (e.g. Unity) rather than working directly with OpenGL. It would give you much better and more enjoyable graphics with less hassle, and would also enable you to easily integrate your program with other tools such as PhysX. I haven't tried any, but I know there are some plugins for using Kinect drivers in Unity.

Multi-threaded Scientific data visualization in c++

I have a process which generates a data vector from a sensor. I'm using Intel Integrated Performance Primitives v5.3 update 3 for Windows on IA-32 to process it further for some calculations. I want to know if there is any C++ library that allows me to plot the vector as a histogram/bar chart during data acquisition. I can write the multi-threaded code, but I need information on the availability of plotting functions in C++. This sort of thing is pretty simple in MATLAB, but I want to do it using C++.
Suggestions are welcome!
You can try one of these:
CBarChart
Scientific charting control
High-speed Charting Control
VTK
ChartDirector
Charting Library
GDCHART
Carnac Chart Library
As mentioned above, VTK is an open-source C++ visualization library. More importantly, it is parallelized by design and will try to use whatever hardware you give it.
MathGL plotting functions can be executed in a separate thread and can be parallelized on the user's side.
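For a feel of the MathGL API, a minimal sketch along the following lines draws a data vector as a bar chart and writes it to a file. The exact overloads may differ between MathGL versions, and in a live-acquisition setup you would call this from your plotting thread each time new data arrives.

    // Minimal MathGL 2.x sketch: plot a vector as a bar chart.
    // Build (roughly): g++ plot.cpp -lmgl
    #include <mgl2/mgl.h>

    int main()
    {
        const double counts[10] = {3, 7, 12, 18, 25, 21, 14, 9, 4, 2}; // e.g. histogram bin counts
        mglData y;
        y.Set(counts, 10);                 // wrap the raw vector in a MathGL data object

        mglGraph gr;
        gr.SetRanges(0, 10, 0, 30);        // x: bin index, y: count
        gr.Axis();
        gr.Box();
        gr.Bars(y);                        // draw the vector as bars
        gr.WriteFrame("histogram.png");    // output format chosen by the file extension
        return 0;
    }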
I use OpenInventor for scientific visualisation. It may be (in some ways) a relic of the old SGI days, but it is still being supported and works well. Regarding graphing and other scientific visualisation, look at MeshViz and other extensions from Mercury:
Mercury (formerly TGS) OpenInventor or Coin, a dual-licensed (GPL + commercial) alternative
MeshViz extension
It has charting, vector visualisation, etc. It's quite comprehensive.
It's not free, but they do trial licenses so you can determine if it suits your needs.