I'd like to use Vuforia with the HTC Vive's camera texture.
To do this, I added the CameraRig prefab provided by SteamVR and a script that retrieves its camera texture.
Now I can see the real world through the Vive's head-mounted camera, and I have imported the Vuforia package.
However, even when I point it at the target image, the program never detects the target; it just shows me the camera texture.
Here are images that show my current setup.
Try using a Texture2D with Vuforia (assuming you can), loading it with the pixel buffer from the camera and applying it to the texture:
https://medium.com/@dariony/about-mixed-reality-part-2-4a371f03d910
I want to implement live 360° video using a Theta S camera. I have already implemented showing 360° images with three.js and a live audio stream using WebRTC.
I found implementations using A-Frame for images and video, but I couldn't find anything on how to handle a live 360° stream from my camera input. My question is how to get the input from my Theta S camera and show it in a 360° view.
My idea is to take the video from my camera input, map it onto a sphere object, and show it full screen.
I am rendering my scene into a texture so that I can apply post-processing before displaying the final result. However, when I added this feature, MSAA/CSAA stopped working. Is there a way (other than performance-intensive FSAA) to get anti-aliasing to work?
I am targeting multiple platforms (Android 2.2+, iPhone 3GS+, all iPads), so I am looking for a way to do this without requiring extensions (unless they are ubiquitous).
Yes, MSAA works by default only on the default framebuffer. You should render into a multisampled framebuffer and resolve it into your texture. This thread can be helpful:
Multisampled render to texture in iOS
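For illustration, here is a minimal C++ sketch of that approach using OpenGL ES 3.0 core calls, where the scene is rasterized into multisampled renderbuffers and then resolved into a regular texture for the post-processing pass. Treat it as an assumed skeleton rather than a drop-in implementation: on the ES 2.0 class devices you list, the same idea is only exposed through extensions such as EXT_multisampled_render_to_texture or APPLE_framebuffer_multisample.

#include <GLES3/gl3.h>   // Android NDK header; on iOS use <OpenGLES/ES3/gl.h>

GLuint msaaFbo, msaaColorRb, msaaDepthRb;   // multisampled offscreen target
GLuint resolveFbo, sceneTex;                // single-sample texture for post-processing

void createTargets(int w, int h, int samples)
{
    // Multisampled renderbuffers: the scene is rasterized here with MSAA.
    glGenRenderbuffers(1, &msaaColorRb);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaColorRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_RGBA8, w, h);

    glGenRenderbuffers(1, &msaaDepthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, msaaDepthRb);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples, GL_DEPTH_COMPONENT24, w, h);

    glGenFramebuffers(1, &msaaFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaColorRb);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msaaDepthRb);

    // Plain texture that the post-processing pass will sample from.
    glGenTextures(1, &sceneTex);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &resolveFbo);
    glBindFramebuffer(GL_FRAMEBUFFER, resolveFbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTex, 0);
}

void renderFrame(int w, int h)
{
    // 1. Render the scene into the multisampled FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
    // ... draw scene ...

    // 2. Resolve the samples into the plain texture.
    glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
    glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);

    // 3. Switch to the default framebuffer and run the post-processing pass,
    //    sampling sceneTex as its input.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    // ... draw fullscreen quad with sceneTex ...
}

Query GL_MAX_SAMPLES and clamp the sample count before calling glRenderbufferStorageMultisample.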
Why would you ever want something like this?
I want to track a single user that is mounted above the ground in a horizontal position. The user is facing downwards to allow free movement of legs and arms. Think of swimming for example.
I mounted the Kinect at the ceiling facing downwards so I have a free view of all extremities.
The sensor is rotated 90° around the z-axis to get the maximum resolution (you're usually taller than you are wide).
Therefore the user is seen from the back, rotated by 90°. It is impossible to get a proper skeleton from OpenNI 1.5. My tests showed that OpenNI expects the user to face the camera with the head up along the y-axis (see my other answer). Microsoft's SDK behaves the same, but I excluded it here because it won't let you change the source code and so cannot be adapted. OpenNI 2.0 does not work with the current SensorKinect driver for interfacing the Kinect on Linux. So:
Which class is generating the skeleton in OpenNI 1.5.x?
My best guess would be to rotate the prototype skeleton by 180° around the y-axis and 90° around the z-axis, if anyone can tell me where to find it.
EDIT: As I have just learned, there is no open-source software that generates a skeleton from depth images, so I fall back to the question in the title:
How can I get a user skeleton from a rotated back view?
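As an illustration of what I mean by pre-rotating the data, here is a minimal C++ sketch (my own idea, not working OpenNI code) that rotates and mirrors a raw 16-bit depth frame so the user would appear upright and roughly front-facing to a tracker that expects a head-up frontal view. Whether such a frame can actually be fed back into OpenNI's user generator, e.g. through a modified SensorKinect or a mock device, is exactly the part I don't know how to do.

#include <cstdint>
#include <vector>

// Rotate a width x height depth frame 90 degrees clockwise.
// The result has dimensions height x width.
std::vector<uint16_t> rotate90cw(const std::vector<uint16_t>& src, int width, int height)
{
    std::vector<uint16_t> dst(src.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            dst[x * height + (height - 1 - y)] = src[y * width + x];
    return dst;
}

// Mirror the frame horizontally. For a pure depth silhouette this makes a
// back view resemble a frontal view, at the cost of swapping left and right,
// so the joint labels would have to be relabeled afterwards.
std::vector<uint16_t> mirrorX(const std::vector<uint16_t>& src, int width, int height)
{
    std::vector<uint16_t> dst(src.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            dst[y * width + (width - 1 - x)] = src[y * width + x];
    return dst;
}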
I am about to launch a new app and would like to increase the quality of the graphics. In my case the graphics are the logo and the custom buttons. I do not know whether this impacts Core Plot, but that is also part of the package.
There are quite a few posts about this, but there are still things I do not fully understand.
I am reading this quote from "amattn":
It's trivial:
1. Only include @2x images in your project.
2. Make sure those images have the @2x suffix.
The system will automatically downscale for non-retina devices.
The only exception is if you are doing manual, low-level Core Graphics drawing; you need to adjust the scale in that case. 99.9% of the time, though, you don't have to worry about this.
From this post: Automatic resizing for 'non-retina' image versions
My questions in regard to this are:
1. Should I do the testing on the retina simulator only, since if I place a @2x graphic on a non-retina device it will be too big? Or is there another way of doing it?
2. Should I always use a UIImageView, or is it OK to drag the image onto the screen? It is the logo I am talking about.
3. What about custom-made buttons with images, how should I handle those?
4. What is the normal process for managing the different screen sizes? Do people add images for all displays, or use this type of process?
Should I do the testing on the retina simulator only, since if I place a @2x graphic on a non-retina device it will be too big? Or is there another way of doing it?
It doesn't really matter which simulator you test on, because as long as your non-retina and retina graphics are named correctly (image.png and image@2x.png) the correct image will be displayed automatically.
Should I always use a UIImageView, or is it OK to drag the image onto the screen? It is the logo I am talking about.
When you drag an image from the project directly onto a view in Interface Builder you don't really see it happen, but it automatically creates an image view that contains the image you dropped in.
What about custom-made buttons with images, how should I handle those?
[myButton setImage:[UIImage imageNamed:@"myFileName"] forState:UIControlStateNormal];
As shown in the above code, you should always use the non-retina file name when you reference the image a UI element should use. That way, if iOS detects that the device is retina, it can automatically use the @2x version in its place.
What is the normal process for managing the different screen sizes? Do people add images for all displays, or use this type of process?
Yes, including multiple image resolutions is common practice, and iPhone apps are required to include both retina and non-retina images (I'm not sure about iPad). But regardless of the requirements, you should definitely support both device resolutions to keep your customers happy!
The Kinect camera has a very low-resolution RGB image. I want to use the point cloud from the Kinect's depth sensor, but I want to texture-map it with an image taken from another camera.
Could anyone please guide me on how to do that?
See the Kinect Calibration Toolbox, v2.0 http://sourceforge.net/projects/kinectcalib/files/v2.0/
2012-02-09 - v2.0 - Major update. Added new disparity distortion model and simultaneous calibration of external RGB camera.
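Once the toolbox has given you the external RGB camera's intrinsics and the rigid transform between the depth camera and the external camera, texture mapping amounts to projecting every 3D point into the external image and sampling its color there. Here is a minimal C++ sketch of that projection, assuming a plain pinhole model with lens distortion ignored and with all names invented for illustration:

#include <array>

struct Intrinsics { double fx, fy, cx, cy; };   // external RGB camera intrinsics

struct Pose {
    std::array<std::array<double, 3>, 3> R;  // rotation: depth frame -> RGB frame
    std::array<double, 3> t;                 // translation (same units as the cloud)
};

// Project a 3D point given in the depth camera frame into pixel coordinates
// of the external RGB image. Returns false if the point lies behind the camera.
bool projectToExternalRgb(const std::array<double, 3>& pDepth,
                          const Pose& pose, const Intrinsics& K,
                          double& u, double& v)
{
    // Transform into the RGB camera frame: p_rgb = R * p_depth + t.
    double x = pose.R[0][0]*pDepth[0] + pose.R[0][1]*pDepth[1] + pose.R[0][2]*pDepth[2] + pose.t[0];
    double y = pose.R[1][0]*pDepth[0] + pose.R[1][1]*pDepth[1] + pose.R[1][2]*pDepth[2] + pose.t[1];
    double z = pose.R[2][0]*pDepth[0] + pose.R[2][1]*pDepth[1] + pose.R[2][2]*pDepth[2] + pose.t[2];
    if (z <= 0.0) return false;

    // Pinhole projection with the external camera intrinsics.
    u = K.fx * (x / z) + K.cx;
    v = K.fy * (y / z) + K.cy;
    return true;
}

For each point in the cloud, sample the RGB image at (u, v), with bounds checking and ideally bilinear interpolation, to get that point's color; points that project outside the image or are occluded simply keep no color.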