I searched the web and browsed through the libGDX wiki, but without success.
My Question:
Is there a way to access the camera of smartphones, let the user
take a photo, and then store the image in a Texture instance?
I could imagine something like this:
@Override
public void onCamTrigger() {
    ApplicationType appType = Gdx.app.getType();
    switch (appType) {
        case Android:
        case iOS:
            Texture someTexture = new Texture(Gdx.input.getCamera().getImage());
            // do something with the Texture instance...
            someTexture.dispose();
            break;
        default:
            break;
    }
}
Of course this is pure fiction! I know that there's a lot more to this, like opening the camera, displaying it, then taking a photo, etc. But is there a convenience method like this? If so, how does it work? On Android, I think I could implement it without using any convenience methods offered by libGDX, but I have no idea how this works on iOS =/
Libgdx does not wrap the platform camera APIs. You will need to use platform-dependent code (on Android, iOS, and GWT) to access the camera.
As Metaphore notes in the comments, the Libgdx Wiki has an entry with lots of details for the Android case: https://github.com/libgdx/libgdx/wiki/Integrating-libgdx-and-the-device-camera
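The usual workaround is the standard libGDX "platform interface" pattern: declare a small interface in the core project, implement it separately in each platform launcher, and hand the implementation to your game at startup. A rough sketch (CameraAccess, PhotoCallback and onPhotoTaken are made-up names for illustration, not part of the libGDX API):

// Core project -- a hypothetical interface; each platform backend supplies its own implementation.
public interface CameraAccess {
    void takePhoto(PhotoCallback callback);

    interface PhotoCallback {
        // Encoded image bytes (e.g. JPEG) delivered by the platform code.
        void onPhotoTaken(byte[] encodedImage);
    }
}

// Core project -- decode the bytes into a Pixmap and upload it as a Texture.
// Textures must be created on the rendering thread, hence Gdx.app.postRunnable().
cameraAccess.takePhoto(encodedImage -> Gdx.app.postRunnable(() -> {
    Pixmap pixmap = new Pixmap(encodedImage, 0, encodedImage.length);
    Texture someTexture = new Texture(pixmap);
    pixmap.dispose();
    // do something with the Texture instance, and dispose() it when done
}));

On iOS the implementation could be built on UIImagePickerController, just as the Android one is built on the camera intent/API described in the wiki article.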
I am making a DJI Mobile SDK app and have set up an application that gets live video from the drone and displays it in a view, but I need to pull a single frame from the video feed to work with and cannot figure out how to do it!
One method would be to take a picture with the drone and then download it from the SD card, but I do not require the full resolution image and it feels like there must be a simple method to just get a single frame from the video preview.
The code which casts the video stream is:
-(void)videoFeed:(DJIVideoFeed *)videoFeed didUpdateVideoData:(NSData *)videoData {
[[DJIVideoPreviewer instance] push:(uint8_t *)videoData.bytes length:(int)videoData.length];
}
Any ideas on how to pull an individual frame from the feed? Or maybe is there a way to have an iOS app just take a screenshot and work with that?
Thanks!
I'm not very familiar with iOS. For Android there is a sample which uses the DJI Mobile SDK to grab still images and use them for panorama stitching: https://github.com/DJI-Mobile-SDK-Tutorials/Android-PanoramaDemo.
The equivalent iOS version of the panorama stitching demo is here: https://github.com/DJI-Mobile-SDK-Tutorials/iOS-PanoramaDemo
Maybe you can get an idea of how to grab a still image from there.
There are several threads about this for Android, and iOS should not be much different, I think:
how to get bitmap data from drone camera stream. android application
Getting the bitmap from the fpvWidget is by far the simplest and fastest solution.
public Bitmap getFrameBitmap() {
    return fpvWidget.getBitmap();
}
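From there it is ordinary Android Bitmap handling. For example (a hypothetical snippet, assuming getFrameBitmap() above is called from an Activity), the frame could be compressed and written out for later processing:

// Hypothetical usage: grab the current preview frame and save it as a JPEG.
Bitmap frame = getFrameBitmap();
File target = new File(getExternalFilesDir(null), "frame.jpg");
try (FileOutputStream out = new FileOutputStream(target)) {
    frame.compress(Bitmap.CompressFormat.JPEG, 90, out);
} catch (IOException e) {
    Log.e("FrameGrab", "Could not save frame", e);
}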
First of all let me give a bit of context to make the question more precise:
I am developing an Android game (using SurfaceView, Canvas, etc.), and it is working perfectly for the gameplay part. I initially tried to use View-derived elements on top of it for menus and other UI elements, and quickly realized by experimentation and by looking at some questions here on Stack Overflow that this was a really bad idea, since they don't mix well (say, a LinearLayout on top of a SurfaceView).
I see 3 possible paths:
A) Continue using View elements on top of SurfaceView (and deal with the problems with it, such as horrendous lag)
B) Draw UI elements manually on the SurfaceView/Canvas, something like canvas.drawBitmap(menuBitmap, posX, posY, ...); then handle the touches manually and live with screen fragmentation (roughly as sketched after this list)
C) Use a library/framework designed specifically for this that handles all the drawing of UI, touch on buttons, drag to scroll, etc. Something like the View and its derived elements, but designed for games and apps that draw using SurfaceView.
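To make option B concrete, I mean roughly this kind of thing inside the SurfaceView subclass (menuBitmap, menuX/menuY and openMenu() being placeholders):

// Option B sketch: draw the button yourself in the render pass...
canvas.drawBitmap(menuBitmap, menuX, menuY, null);

// ...and hit-test touches against its bounds in onTouchEvent.
@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        RectF bounds = new RectF(menuX, menuY,
                menuX + menuBitmap.getWidth(), menuY + menuBitmap.getHeight());
        if (bounds.contains(event.getX(), event.getY())) {
            openMenu(); // placeholder for whatever the button triggers
            return true;
        }
    }
    return super.onTouchEvent(event);
}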
Are there more options that I'm not seeing? "C" seems the best to me, but is there a library for that? Which one?
Edit: I forgot to ask the most obvious question as well: how do other professional/commercial games deal with this?
Thanks
I really don't like these game engines (AndEngine, Corona, Unity3D, cocos2d, etc). I want to learn Android, which will be more useful for my professional life than engine X or Y.
All my games were created without game engines. I use option (A) for almost everything and I don't see any lag or sluggishness.
Examples:
The game screen for Minesweeper 3D is a set of layouts and views plus a SurfaceView to hold the OpenGL 3D field.
The game screen for Box Topple is also a set of layouts and views together with a custom view created to handle and display the box2d physics engine.
See my other games as well...
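In case it helps, the basic setup for (A) is just to stack ordinary widgets over the SurfaceView in a FrameLayout (normally declared in layout XML; a rough Java sketch is shown here, with GameSurfaceView standing in for your own SurfaceView subclass):

// Option (A) sketch, typically placed in the Activity's onCreate():
FrameLayout root = new FrameLayout(this);

GameSurfaceView gameView = new GameSurfaceView(this);   // your SurfaceView subclass
root.addView(gameView, new FrameLayout.LayoutParams(
        ViewGroup.LayoutParams.MATCH_PARENT, ViewGroup.LayoutParams.MATCH_PARENT));

Button pauseButton = new Button(this);                   // an ordinary View drawn on top
pauseButton.setText("Pause");
root.addView(pauseButton, new FrameLayout.LayoutParams(
        ViewGroup.LayoutParams.WRAP_CONTENT, ViewGroup.LayoutParams.WRAP_CONTENT,
        Gravity.TOP | Gravity.END));

setContentView(root);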
I had these same exact issues.
Then I found a free game engine called AndEngine.
This engine handles just about everything you need.
For example:
OpenGL wrapper classes for simple, efficient drawing
Sprite classes (animated or not)
Particle effects
Cameras to view different parts of the scene you draw on
Scrolling backgrounds
Online multiplayer and Box2D physics engine extensions
etc.
And for your case:
It has a HUD that you attach to a camera, which stays static as the camera moves.
You can then attach buttons (already built-in, with click events) to this.
Here is some example code to create a button:
moveRightButton = new ButtonSprite(10, 10, moveRightButtonTexture, getVertexBufferObjectManager()) {
    @Override
    public boolean onAreaTouched(TouchEvent event, float x, float y) {
        if (event.isActionDown()) {
            player.moveRight();
            // Set to 'clicked' image
            this.setCurrentTileIndex(1);
        } else if (event.isActionUp()) {
            // Set to 'unclicked' image
            this.setCurrentTileIndex(0);
        }
        return super.onAreaTouched(event, x, y);
    }
};
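And, assuming the GLES2 branch of AndEngine, attaching that button to the HUD mentioned above looks roughly like this:

// Rough sketch: a HUD is drawn in camera space, so its children stay put while the camera scrolls.
HUD hud = new HUD();
hud.attachChild(moveRightButton);        // render the button on the HUD
hud.registerTouchArea(moveRightButton);  // let it receive onAreaTouched events
camera.setHUD(hud);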
You can find the engine and its extensions at https://github.com/nicolasgramlich?tab=repositories.
The main problem with the engine is that there is no documentation.
However, there is a forum dedicated to the engine at http://www.andengine.org/forums/.
Also, you can download the AndEngine Examples (via the same repository I mentioned above), created by the developer to show how to use various parts of the engine.
It takes a bit to get used to, but it has been very rewarding.
I have a game that I've been working on for iOS. We let our users tweet the results of their games and I thought it'd be fun to add a badge or something to the tweet to show details.
I create an image using UIKit. Then I attach that image, on iOS 6.0 with -[SLComposeViewController addImage:] or on iOS 5.* with -[TWTweetComposeViewController addImage:], but neither of them will attach the image.
If I use Facebook or Weibo, the image attaches fine. With Twitter, no luck at all.
Has anybody had any luck attaching an image to a tweet?
If you're receiving NO as the return value of -[TWTweetComposeViewController addImage:], the documentation says: "YES if successful. NO if the image does not fit in the currently available character space or the view was presented to the user."
I would like to start the front camera of the iPad when the app starts.
How do I do it programmatically?
Please let me know.
The first thing you need to do is detect whether your device has a front-facing camera. For that you could iterate through the video devices, or simply use this method of UIImagePickerController:
+ (BOOL)isCameraDeviceAvailable:(UIImagePickerControllerCameraDevice)cameraDevice
This is a class method and UIImagePickerControllerCameraDevice can take two values:
- UIImagePickerControllerCameraDeviceRear
- UIImagePickerControllerCameraDeviceFront
Example code:
if( [UIImagePickerController isCameraDeviceAvailable: UIImagePickerControllerCameraDeviceFront ])
{
// do something
}
Note that this is available on iOS 4.0 and later.
Also, I am not sure if there is any API to start the front-facing camera up front. The camera always seems to start in the same mode the user left it in the last time it was used. Maybe by design Apple did not expose any API to change this; maybe Apple wanted the users to make that call.
Nevertheless, you can at least detect the availability of the front camera and provide your feature.
If I understand your question correctly, all you have to do is open your camera in front mode instead of rear mode, so write this inside the method where you present the picker for the first time:
picker.cameraDevice = UIImagePickerControllerCameraDeviceFront;
Hope this answers your question.
I'm currently using an AVPlayer, along with an AVPlayerLayer to play back some video. While playing back the video, I've registered for time updates every 30th of a second during the video. This is used to draw a graph of the acceleration at that point in the video, and have it update along with the video. The graph is using the CMTime from the video, so if I skip to a different portion of the video, the graph immediately represents that point in time in the video with no extra work.
Anywho, as far as I'm aware, if I want to get an interface similar to what the MediaPlayer framework offers, I'm going to have to do that myself.
What I'm wondering is: is there a way to use my AVPlayer with the MediaPlayer framework? (Not that I can see.) Or is there a way to register for incremental time updates with the MediaPlayer framework?
My code, if anyone is interested, follows:
[moviePlayer addPeriodicTimeObserverForInterval:CMTimeMake(1, 30)
                                           queue:dispatch_queue_create("eventQueue", NULL)
                                      usingBlock:^(CMTime time) {
    // Map the current playback time to an index into the acceleration data.
    loopCount = (int)(CMTimeGetSeconds(time) * 30);
    if (loopCount < [dataPointArray count]) {
        // Redraw the graph on the main thread for this point in the video.
        dispatch_sync(dispatch_get_main_queue(), ^{
            [graphLayer setNeedsDisplay];
        });
    }
}];
Thanks!
If you're talking about the window chrome displayed by MPMoviePlayer then I'm afraid you are looking at creating this UI yourself.
AFAIK there is no way of achieving the timing behaviour you need using the MediaPlayer framework, which is very much a simple "play some media" framework. You're doing the right thing by using AVFoundation.
Which leaves you needing to create the UI yourself. My suggestion would be to start with a XIB file to create the general layout: a toolbar at the top with a Done button, a large view that represents a custom playback view (using your AVPlayerLayer), and a separate view to contain your controls.
You'll need to write some custom controller code to automatically show/hide the playback controls and toolbar as needed if you want to simulate the MPMoviePlayer UI.
You can use https://bitbucket.org/brentsimmons/ngmovieplayer as a starting point (though it may not have existed at the time you asked).
From the project page: "Replicates much of the behavior of MPMoviePlayerViewController -- but uses AVFoundation."
You might want to look at the AVSynchronizedLayer class. I don't think there's a lot in the official programming guide; you can find bits of info here and there: subfurther, Otter Software.
In O'Reilly's Programming iOS 4 (or 5) there's also a short reference on how to let a square move/stop along a line in sync with the animation.
Another demo (not a lot of code) is shown during WWDC 2011 session Working with Media in AV Foundation.