I'm trying to build a camera app of sorts using Codename One. Taking a picture is no problem, but I can't find a way to stream the live camera feed into the background of the app, like the viewfinder on a regular mobile phone camera, so you can actually see what you're about to film or photograph.
We don't currently support some of the more elaborate AR APIs introduced by Google/Apple, but we do support placing a camera viewfinder right into your app with a new cn1lib: https://github.com/codenameone/CameraKitCodenameOne
Since this is implemented in a library, you can effectively edit the native code and add functionality as needed.
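As a minimal sketch of what embedding the viewfinder looks like with that library (the CameraKit class and method names below follow the library's sample code as I recall it, so verify them against the repository's README):

```java
import com.codename1.camerakit.CameraKit;
import com.codename1.ui.Form;
import com.codename1.ui.layouts.BorderLayout;

public class ViewfinderForm {
    public void show() {
        // Class/package names assumed from the cn1lib's sample code.
        CameraKit ck = CameraKit.create();
        Form f = new Form("Camera", new BorderLayout());
        // The viewfinder is a regular Codename One component, so it can
        // be placed anywhere in the layout, with widgets drawn on top.
        f.add(BorderLayout.CENTER, ck.getView());
        f.show();
        ck.start(); // begin streaming the live camera preview
    }
}
```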
The original answer below is out of date by now; I'm keeping it for reference:
You can record video or take a photo with Codename One.
However, augmented reality type applications where you can place elements on top of the camera viewfinder are currently not supported by Codename One. This functionality is somewhat platform specific and hard to implement in a portable manner.
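For basic capture, here's a minimal sketch using Codename One's Capture API (method names as I recall them; check the current Javadocs):

```java
import com.codename1.capture.Capture;

public class CaptureDemo {
    public static String takePhoto() {
        // Opens the platform's native camera UI and blocks until the
        // user takes a picture or cancels; returns a file-system path
        // to the captured image, or null on cancel.
        return Capture.capturePhoto();
    }

    public static String recordVideo() {
        // Same idea for video; Capture.captureAudio() also exists.
        return Capture.captureVideo();
    }
}
```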
I am trying to add functionality to my status bar app for Mac OS X. I would like to be able to move my item along the bar, as you can with the native OS tools like the Bluetooth or Wi-Fi icons.
Cheers
Update for macOS Sierra: Apple improved NSStatusItem. Items can now be reordered by ⌘-dragging. This works for all Apple’s menu items and all third party apps.
Pretty much all the reasons you could ever want to use NSMenuExtra have been removed now, which is great. So, the answer now is to just use NSStatusItem. No further action is required.
What you are looking for is NSMenuExtra and not NSMenuItem.
Apple uses NSMenuExtra for the system menu icons, including Wi-Fi and Bluetooth. Although it looks similar to the regular NSMenuItem, NSMenuExtra has some special features: notably, the items keep their relative order after rebooting and are Command-draggable by the user.
Unfortunately, NSMenuExtra is totally undocumented, so if you are targeting the Mac App Store, it's better to stick with the standard NSMenuItem. Otherwise, there are a number of tutorials about how to create an NSMenuExtra. For example, here are two of them:
NSMenuExtra – working with undocumented APIs
Building NSMenuExtra - A Small Tutorial
Unfortunately there is no 'good' way to do that; however, you can check this question for a hack that can:
How to drag NSStatusItems
I am currently testing a DSC-HX400. While I am able to do almost everything I need to with the camera there are a couple of items that are not exposed via the API that have frustrated my efforts.
1) The camera does not seem to offer an option, via the API or the camera itself, to capture images in RAW format. It does offer standard & fine JPEG formats, but both of those leave artifacts in the image that become extremely noticeable when you zoom in with an image editor. Is there a way to get the camera to capture RAW images? I do not need the SDK to return the data, just to save it out to the card. If getting the RAW data is impossible, has anyone found an inventive way to clean up the artifacts?
2) The camera supports both still-shoot and movie modes, but the API will only expose the mode that I am currently in. That makes it impossible to transition from still to movie mode (to allow recording) from the API, yet I can do that same transition by pressing a single button on the camera. Once I am recording a movie, the API will allow me to transition back to still mode (by cancelling recording). Are there plans to support triggering a movie recording via the API while in still capture mode (seeing that the firmware already supports this functionality)?
Answers to your questions:
If the camera cannot capture RAW images, the API will not be able to either. I do not know of a way to capture RAW images, but I can only comment with regard to the API, as I am not an expert on usage of the camera itself.
You can change between still and movie mode by using the "setShootMode" API.
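For example (a rough sketch, not from the official SDK sample): the Camera Remote API is JSON-RPC over HTTP, so switching modes is a single POST. The service URL below is a placeholder for the endpoint you discover via the camera's SSDP device description:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ShootModeSwitcher {
    // Placeholder endpoint: discover the real service URL via the
    // camera's SSDP device description.
    private static final String CAMERA_SERVICE =
            "http://192.168.122.1:8080/sony/camera";

    public static void setShootMode(String mode) throws Exception {
        // Camera Remote API calls are plain JSON-RPC over HTTP POST.
        String payload = "{\"method\":\"setShootMode\","
                + "\"params\":[\"" + mode + "\"],"
                + "\"id\":1,\"version\":\"1.0\"}";
        HttpURLConnection conn =
                (HttpURLConnection) new URL(CAMERA_SERVICE).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(payload.getBytes("UTF-8"));
        }
        // On success the camera replies with {"result":[],"id":1}.
        System.out.println("HTTP " + conn.getResponseCode());
    }

    public static void main(String[] args) throws Exception {
        setShootMode("movie"); // "still" switches back
    }
}
```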
I want to create a control that lets the user choose the crop area of a bitmap in the common way, with four corner handles on the image. I saw that there is a sample C# app for this on the Microsoft site - http://www.microsoft.com/en-us/showcase/details.aspx?uuid=bef08d57-fa4d-4d9c-9080-6ee55b8623c0
But I cannot figure out how to do this strictly in WinJS. Do I need to create custom controls, and if so, how? Any sample code would help a great deal.
I have an example in my codeSHOW project: the demo called Rx Crop. It uses Reactive Extensions (which are awesome, by the way), but if you don't want that dependency, you could probably use the example to figure out how to do it without them.
BTW, the codeSHOW project currently has a bug and a usability issue; I have an update in certification. For now, just make sure you select the Rx Crop demo on the home screen and then hit See the Code. If you hit See the Code with no demo selected, it will crash.
Do me a favor and rate the app. Thanks.
If I wanted to create a mobile app that allows the user to take pictures with their phone, record audio notes and record video, how would I do that?
I was browsing through the Sencha Touch 2 API, and while I see documentation on video and audio files, it seems like it just provides a way for me to access files stored on the phone - not actual triggers to record or to take pictures.
Am I missing something?
How would I do what I want?
In order for Sencha Touch to have access to your phone's capabilities, you need to use a product like PhoneGap.
Unless there is an HTML5 API for doing those sorts of things, I don't think you can do that. I know PhoneGap has native extensions added to that platform for access to things like the microphone, camera, etc. I don't know if Sencha Touch has added any of those sorts of extensions to let you do this.
Just thinking out of the box here, but you might be able to put the Sencha JavaScript into a WebView within an Android Java process. The Java code could then expose an object in its process as an extension point to the JavaScript engine for access to the camera, microphone, and whatnot.
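A rough sketch of that idea (the CameraBridge name and its takePicture method are made up for illustration; addJavascriptInterface is the real Android hook):

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.provider.MediaStore;
import android.webkit.JavascriptInterface;
import android.webkit.WebView;

public class KioskActivity extends Activity {

    // Hypothetical bridge; the class name and method are illustrative.
    public class CameraBridge {
        @JavascriptInterface
        public void takePicture() {
            // Launch the stock camera app; a real bridge would catch the
            // result in onActivityResult and hand the image back to JS.
            startActivityForResult(
                    new Intent(MediaStore.ACTION_IMAGE_CAPTURE), 1);
        }
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView web = new WebView(this);
        web.getSettings().setJavaScriptEnabled(true);
        // Page JavaScript (e.g. your Sencha app) can now call
        // window.nativeCamera.takePicture()
        web.addJavascriptInterface(new CameraBridge(), "nativeCamera");
        web.loadUrl("file:///android_asset/index.html");
        setContentView(web);
    }
}
```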
I'm building in FlashDevelop as pure AS3.
I'm looking at building a kiosk that uses two screens. It's used to administer tests, so one screen has the test and the second has the controls for administering it. I have played with one wide window spanning both screens, but it's not very elegant, and I would really like the app to run full screen on each display. Is it possible to have one AIR app spawn two native AIR windows? A secondary question: is it possible to detect multiple screens and target a specific screen to go full screen on? Even something as simple as checking the window size to detect the screens would work; I'm just not sure I can move the window, or that the low-level API will go full screen on that screen. I could not find any examples of this in the docs.
What docs did you look into? I found it right away.
You'll need the Screen class if you want information on the screens that are connected to the PC. And here's some documentation on using it.
To create new windows, just instantiate a new NativeWindow and call activate() on it when you're done configuring it.
There's a lot of other useful stuff for you in the flash.display package. All the AIR-only classes are marked with a little AIR icon. I have to admit it would have been easier to find if they had put these classes in a separate AIR package.
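Putting the two together, here's a minimal, untested sketch that walks Screen.screens and opens a full-screen NativeWindow on each attached display:

```actionscript
package {
    import flash.display.NativeWindow;
    import flash.display.NativeWindowInitOptions;
    import flash.display.Screen;
    import flash.display.Sprite;
    import flash.display.StageDisplayState;

    public class DualScreenKiosk extends Sprite {
        public function DualScreenKiosk() {
            // Screen.screens lists every display attached to the machine.
            for each (var screen:Screen in Screen.screens) {
                var win:NativeWindow =
                    new NativeWindow(new NativeWindowInitOptions());
                // Park the window on the target display first...
                win.bounds = screen.bounds;
                win.activate();
                // ...then take it full screen on that display.
                win.stage.displayState =
                    StageDisplayState.FULL_SCREEN_INTERACTIVE;
            }
        }
    }
}
```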