I want to implement live 360° video using a Theta S camera. I have already implemented 360° image viewing with three.js and live audio streaming with WebRTC.
I found A-Frame implementations for images and video, but I couldn't find anything on how to display a live 360° stream from my camera input. My question
is how to get the input from my Theta S camera and show it in a 360° perspective.
My idea is to take the video from my camera input, map it onto a sphere, and display it full screen.
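To make the idea concrete, here is a minimal sketch of what I have in mind with three.js, assuming the Theta S shows up as an ordinary webcam and already delivers an equirectangular frame (over USB it actually outputs dual fisheye, so some remapping may still be needed):
import * as THREE from 'three';

// Grab the camera feed and use it as a live texture.
const video = document.createElement('video');
video.autoplay = true;
video.muted = true;
navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  video.play();
});

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Sphere with inverted normals so the video is visible from the inside.
const geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1);
const texture = new THREE.VideoTexture(video);
scene.add(new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture })));

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}
animate();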
Related
I am making a DJI Mobile SDK app and have set up an application that gets live video from the drone and displays it in a view, but I need to pull a single frame from the video feed to work with and cannot figure out how to do it.
One method would be to take a picture with the drone and then download it from the SD card, but I do not need the full-resolution image, and it feels like there must be a simpler way to grab a single frame from the video preview.
The code that forwards the video stream is:
-(void)videoFeed:(DJIVideoFeed *)videoFeed didUpdateVideoData:(NSData *)videoData {
    // Push each raw video chunk to DJIVideoPreviewer, which decodes and renders it.
    [[DJIVideoPreviewer instance] push:(uint8_t *)videoData.bytes length:(int)videoData.length];
}
Any ideas on how to pull an individual frame from the feed? Or is there a way to have the iOS app just take a screenshot and work with that?
Thanks!
I'm not very familiar with iOS. For Android there is a sample that uses the DJI Mobile SDK to grab still images and use them for panorama stitching: https://github.com/DJI-Mobile-SDK-Tutorials/Android-PanoramaDemo
The equivalent iOS version of the panorama stitching demo is here: https://github.com/DJI-Mobile-SDK-Tutorials/iOS-PanoramaDemo
Maybe you can get an idea of how to grab a still image from there.
There are several threads about this for Android, and I don't think iOS would be much different, for example:
"How to get bitmap data from drone camera stream (Android application)"
Getting the bitmap from the fpvWidget is by far the simplest and fastest solution:
public Bitmap getFrameBitmap() {
    // The FPV widget already renders the decoded stream, so its current bitmap is the latest frame.
    return fpvWidget.getBitmap();
}
I am using SDK 2.3 to develop an Android application for the AS-15 and AS-20 cameras that deals exclusively with liveview.
I am unable to obtain a liveview resolution higher than 640x360 px, while the camera specs mention 1920x1080/30p (HQ).
How can I get the full resolution?
Is this a limitation of the API? If so, why?
I've found that some (other) cameras implement getLiveviewSize/setLiveviewSize, and for the "L" setting the documentation says:
XGA size scale (the size varies depending on the camera models, and some camera models change the liveview quality instead of making the size larger.)
What are the models with the highest liveview resolution?
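For reference, this is roughly what I understand such a call to look like on the wire, since the API is JSON over HTTP. The host, port and endpoint path here are assumptions (they normally come from the camera's discovery response), and the method is only available on models that support it:
// Sketch: request the large ("L") liveview size on a camera that supports setLiveviewSize.
async function requestLargeLiveview(cameraUrl: string) {
  const response = await fetch(cameraUrl, {   // e.g. 'http://10.0.0.1:10000/sony/camera' (assumed)
    method: 'POST',
    body: JSON.stringify({
      method: 'setLiveviewSize',
      params: ['L'],        // 'L' = XGA-class liveview where supported
      id: 1,
      version: '1.0',
    }),
  });
  return response.json();   // success looks like { result: [], id: 1 }
}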
Why would you ever want something like this?
I want to track a single user who is suspended above the ground in a horizontal position, facing downwards so the legs and arms can move freely. Think of swimming, for example.
I mounted the Kinect on the ceiling, facing down, so I have an unobstructed view of all extremities.
The sensor is rotated 90° around the z-axis to get the maximum resolution (you are usually taller than you are wide).
As a result, the user is seen from the back and rotated by 90°, and it is impossible to get a proper skeleton from OpenNI 1.5. My tests showed that OpenNI expects the user to face the camera with the head up along the y-axis (see my other answer). Microsoft's SDK behaves the same way, but I exclude it here because it does not let you change the source code and therefore cannot be adapted. OpenNI 2.0 does not work with the current SensorKinect driver for interfacing with the Kinect on Linux. So:
Which class generates the skeleton in OpenNI 1.5.x?
My best guess would be to rotate the prototype skeleton by 180° around y and 90° around z, if I can find where it is defined.
EDIT: As I have just learned, there is no open-source software that generates a skeleton from depth images, so I fall back to the question in the title:
How can I get a user skeleton from a rotated back view?
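For illustration, the kind of pre-processing I have been considering is to rotate each depth frame by 90° before the tracker sees it, so the user at least appears upright, and then apply the inverse rotation to the returned joints. This is only a sketch of that idea (it does not resolve the left/right ambiguity of the back view):
// Rotate a depth frame 90° clockwise; the result has dimensions height x width.
function rotateDepth90CW(depth: Uint16Array, width: number, height: number): Uint16Array {
  const out = new Uint16Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // source pixel (x, y) lands at (height - 1 - y, x) in the rotated frame
      out[x * height + (height - 1 - y)] = depth[y * width + x];
    }
  }
  return out;
}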
The Kinect camera has a very low-resolution RGB image. I want to use the point cloud from the Kinect depth sensor but texture-map it with an image taken from another camera.
Could anyone please guide me on how to do that?
See the Kinect Calibration Toolbox, v2.0: http://sourceforge.net/projects/kinectcalib/files/v2.0/
2012-02-09 - v2.0 - Major update. Added new disparity distortion model and simultaneous calibration of external RGB camera.
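Once you have the external RGB camera's intrinsics and its pose relative to the depth camera from that calibration, texturing is just a reprojection: transform each 3D point into the RGB camera frame and project it with the pinhole model to get its texture coordinates. A rough sketch (lens distortion omitted; the variable names here are generic, not toolbox API):
type Vec3 = [number, number, number];

// Project a 3D point from the depth-camera frame into the external RGB image.
function projectToExternalCamera(
  p: Vec3,                              // point in the depth camera frame (metres)
  R: number[][], t: Vec3,               // extrinsics: depth frame -> RGB camera frame
  fx: number, fy: number, cx: number, cy: number // RGB camera intrinsics
): [number, number] {
  // Rigid transform into the RGB camera frame.
  const x = R[0][0] * p[0] + R[0][1] * p[1] + R[0][2] * p[2] + t[0];
  const y = R[1][0] * p[0] + R[1][1] * p[1] + R[1][2] * p[2] + t[1];
  const z = R[2][0] * p[0] + R[2][1] * p[1] + R[2][2] * p[2] + t[2];
  // Pinhole projection: pixel coordinates in the high-resolution image.
  return [fx * (x / z) + cx, fy * (y / z) + cy];
}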
I'm using the OSX QTKit sample code from here: http://bit.ly/mAaHGI
I'd like to crop the video, both on screen and in the saved file, to simulate different aspect ratios. What is the best way to do this?
It's a bit more involved than just calling a crop method, but Core Video allows you to manipulate the video stream. You can find the Core Video Programming Guide here:
http://developer.apple.com/library/mac/#documentation/GraphicsImaging/Conceptual/CoreVideo/CVProg_Intro/CVProg_Intro.html