I am currently testing a DSC-HX400. While I am able to do almost everything I need to with the camera, there are a couple of items that are not exposed via the API and that have frustrated my efforts.
1) The camera does not seem to offer an option, via the API or the camera itself, to capture images in RAW format. It does offer standard & fine JPEG formats, but both of those leave compression artifacts that become extremely noticeable when you zoom in with an image editor. Is there a way to get the camera to capture RAW images? I do not need the SDK to return the data, just to save it out to the card. If getting the RAW data is impossible, has anyone found an inventive way to clean up the artifacts?
2) The camera supports both still shooting and movie mode, but the API only exposes the mode that I am currently in. That makes it impossible to transition from still to movie mode (to allow recording) from the API, yet I can make that same transition by pressing a single button on the camera. Once I am recording a movie, the API does allow me to transition back to still mode (by cancelling the recording). Are there plans to support triggering a movie recording via the API while in a still capture mode (seeing that the firmware already supports this functionality)?
Answers to the questions below:
If the camera cannot capture RAW images, the API will not be able to either. I do not know of a way to capture RAW images, but I can only comment with regard to the API, as I am not an expert on usage of the camera itself.
You can change between still and movie mode by using the "setShootMode" API.
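For example, the shoot mode call is a plain JSON-RPC POST to the camera's service endpoint. A minimal Python sketch (the endpoint URL below is the commonly seen default; the authoritative one is advertised in the camera's device description XML):

```python
import requests

# Camera service endpoint: 10.0.0.1:10000 is the usual default, but the
# real URL comes from the camera's UPnP device description.
CAMERA_URL = "http://10.0.0.1:10000/sony/camera"

def set_shoot_mode(mode):
    """Switch the camera between "still" and "movie" shooting modes."""
    payload = {
        "method": "setShootMode",
        "params": [mode],   # "still" or "movie"
        "id": 1,
        "version": "1.0",
    }
    resp = requests.post(CAMERA_URL, json=payload)
    resp.raise_for_status()
    return resp.json()      # {"result": [0], "id": 1} on success

set_shoot_mode("movie")     # then startMovieRec / stopMovieRec
set_shoot_mode("still")
```

Note that the set of modes the camera will actually accept is reported by getAvailableShootMode, so it is worth querying that first.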
I'm using the Agora Video SDK for Unity package and I have followed these two tutorials:
https://www.agora.io/en/blog/agora-video-sdk-for-unity-quick-start-programming-guide/
https://docs.agora.io/en/Video/screensharing_unity?platform=Unity
Up to here, it is working fine. The problem is that instead of sharing my screen, I want to send a texture. To do so, I'm loading a PNG picture and trying to set it to the mTexture you find in the second link. It seems to work on my computer, but it is as if it never arrives at the target computer.
How can I send a texture properly?
Thanks
Did you copy every line of the code from the example as is? You may not want to do the ReadPixels part, since that reads the screen. You can instead read the raw data from your input texture and send it with PushVideoFrame every update.
Our aim is to show portrait video (vertical orientation, in TokBox terms) without black bars on the left and right after archiving. Right now the archive looks like landscape video with black bars on both sides.
We are using a PHP server and an Android client for streaming.
Our steps to convert the live stream into video on demand through archiving are:
start the session
update the stream with the parameter layoutClassList = verticalPresentation (PHP library)
start archiving
live stream is on -> create a subscriber and watch the stream. IMPORTANT! The stream has no black bars and has the CORRECT presentation on the subscriber side!
stop archiving
wait for TokBox to upload the archive file to the Amazon S3 bucket -> the file ALREADY contains black bars on the left and right. WRONG! (please watch the video at this link for better understanding: https://s3-us-west-1.amazonaws.com/edtv-dev1-input/46176492/9f26ef23-aee6-42f2-8c51-d8e2685abcc9/archive.mp4 )
process the file
Are the above steps correct for achieving the goal - a video file without black bars (in portrait orientation)? Are we missing anything?
Is the archiving process on TokBox sensitive to horizontal/vertical presentation? Is it possible to archive the video in vertical orientation?
UPDATE: What we wanted was not a composed archive but an INDIVIDUAL stream archive! TokBox creates a zip file, but Amazon AWS was able to transcode it and produce the correct result in both portrait and landscape orientations.
NOTE: By default, the result file on Amazon AWS after individual stream archiving is a *.zip (a JSON manifest plus the video file). The transcoder we used gave us video without sound, so we added a Lambda that unzips the file first; a sketch is below. Now everything is OK, but it took a lot of time and headache.
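For anyone else doing this, the Lambda is nothing special. A minimal sketch (the bucket layout and the "unzipped" key prefix are just our convention):

```python
import io
import os
import zipfile

import boto3  # bundled with the AWS Lambda Python runtime

s3 = boto3.client("s3")

def handler(event, context):
    """Fires on the S3 upload of the TokBox archive zip and extracts its
    contents next to it, so the transcoder sees a bare video file."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Individual-stream archives are small enough to unzip in memory.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

    with zipfile.ZipFile(io.BytesIO(body)) as archive:
        for name in archive.namelist():  # the JSON manifest + video file(s)
            target = os.path.join(os.path.dirname(key), "unzipped", name)
            s3.put_object(Bucket=bucket, Key=target, Body=archive.read(name))
```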
Tokbox developer here
For composed archiving, the only two output resolutions currently available are 640x480 and 1280x720. Trying to fit a portrait video into a canvas at one of those resolutions results in the black bars you are seeing.
Possible solutions:
Use the custom layout control [1]: you can override the "object-fit" property to "cover". This may not give exactly what you want, since the output resolution will still be 640x480 or 1280x720, but the video will occupy the whole canvas at the expense of cropping the top and bottom. See [2].
The best solution in my opinion is to use "individual stream archiving", where the original resolution of each stream is kept and you get one file per stream. Please check [3]; a sketch of the REST call follows the links below.
[1] https://tokbox.com/developer/guides/archiving/layout-control.html
[2] https://developer.mozilla.org/en-US/docs/Web/CSS/object-fit
[3] https://tokbox.com/developer/rest/#start_archive
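For reference, starting an individual-stream archive is one extra field on the start-archive call [3]. A rough Python sketch (your PHP library exposes the same option; the API key, session ID, and JWT below are placeholders):

```python
import requests

API_KEY = "45123456"           # placeholder project API key
SESSION_ID = "2_MX4..."        # placeholder session ID
JWT = "<project-scoped JWT>"   # generated from your API key and secret

resp = requests.post(
    "https://api.opentok.com/v2/project/{}/archive".format(API_KEY),
    headers={"X-OPENTOK-AUTH": JWT},
    json={
        "sessionId": SESSION_ID,
        # "individual" keeps every stream at its original resolution
        # (portrait stays portrait); the default is "composed".
        "outputMode": "individual",
    },
)
resp.raise_for_status()
print(resp.json())  # archive id, status, etc.
```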
How can we get the URL of the video inside the zip created by OpenTok that was uploaded to S3?
I am developing a Python remote control application using the open source pysony library. The program should be able to shoot images in a loop, download them, and then delete the leftovers on the camera (we can't manually format the SD card, since the idea is that of a remote control application).
I'd like to point out that I have read this post about the correct way of remotely deleting files. I have read the Sony API documentation and have successfully managed to control everything I need to, except the deletion of images. The camera in question is a Sony a6300, updated to the latest firmware, as is its API.
The problem in question is that the camera returns a success response ({'result': [], 'id': 1}) after being asked to delete a set of image URIs, but the images still remain on the camera. I am using the remote control app and am connected directly to the camera's Wi-Fi (making this the standard 1:1 connection). When I issue the delete command, the screen of the camera briefly displays a "controlling with smartphone... you cannot directly operate this device" message.
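For reference, this is roughly my delete sequence (method and parameter names as I understand them from the pysony docs and the Camera Remote API reference; the content URI is a placeholder):

```python
import pysony

# Standard 1:1 connection to the camera's own Wi-Fi.
camera = pysony.SonyAPI()

# The linked post says deletion should happen in "Contents Transfer" mode,
# so I switch the camera function first.
camera.setCameraFunction(param=["Contents Transfer"])

# Delete a batch of URIs previously obtained from getContentList.
result = camera.deleteContent(param=[{"uri": [
    "image:content?contentId=01006_0001",  # placeholder URI
]}])
print(result)  # prints {'result': [], 'id': 1}, yet the files remain

# Back to shooting mode for the next loop iteration.
camera.setCameraFunction(param=["Remote Shooting"])
```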
I have searched all around the web and can't seem to find an answer.
Thank you in advance!
I am trying to create two things, both for desktop Mac, and both involve recording the screen/audio.
The first, which is my main priority right now, is a song identifier. The second is a screen capture (with audio) tool.
I was thinking of using AVFoundation, but I don't see any sound recording capabilities there, just playback - https://developer.apple.com/library/mac/documentation/AVFoundation/Reference/AVAudioPlayerClassReference/index.html#//apple_ref/doc/uid/TP40008067
Is it possible to record system audio somehow?
Thanks
I've used this document in the past to figure out the live screen recording part. https://developer.apple.com/library/mac/qa/qa1740/_index.html
You'll probably also find the code snippet in the AVCaptureSession overview useful.
The gist of it is that AVCaptureSession is the object that controls all the inputs and outputs for a given capture session. In this case the screen input would be an AVCaptureScreenInput, and I believe for audio you want an AVCaptureDeviceInput of type audio. There is a way to get the list of all the available AVCaptureDevices of a specific type. Then you add an AVCaptureMovieFileOutput as your session output.
I know that's a little high level, but that technical Q&A as well as looking into getting particular input types should help.
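If it helps, here is a rough sketch of that wiring. I am writing it with the Python PyObjC bindings for brevity (pip install pyobjc); the objects and calls are the same in Objective-C/Swift. Note that the default audio device you get this way is a microphone; true system audio needs a loopback device (e.g. Soundflower), which AVFoundation then sees as just another audio device:

```python
import AVFoundation
import Quartz
from Foundation import NSDate, NSObject, NSRunLoop, NSURL

# The session owns all inputs and outputs.
session = AVFoundation.AVCaptureSession.alloc().init()

# Screen input for the main display.
screen_input = AVFoundation.AVCaptureScreenInput.alloc().initWithDisplayID_(
    Quartz.CGMainDisplayID())
if session.canAddInput_(screen_input):
    session.addInput_(screen_input)

# Default audio capture device (a microphone, unless a loopback
# device such as Soundflower is set as the default).
audio_device = AVFoundation.AVCaptureDevice.defaultDeviceWithMediaType_(
    AVFoundation.AVMediaTypeAudio)
audio_input, error = AVFoundation.AVCaptureDeviceInput.deviceInputWithDevice_error_(
    audio_device, None)
if audio_input and session.canAddInput_(audio_input):
    session.addInput_(audio_input)

# Movie file output.
output = AVFoundation.AVCaptureMovieFileOutput.alloc().init()
if session.canAddOutput_(output):
    session.addOutput_(output)

class RecordingDelegate(NSObject):
    def captureOutput_didFinishRecordingToOutputFileAtURL_fromConnections_error_(
            self, capture_output, url, connections, error):
        print("finished recording:", url)

delegate = RecordingDelegate.alloc().init()
session.startRunning()
output.startRecordingToOutputFileURL_recordingDelegate_(
    NSURL.fileURLWithPath_("/tmp/capture.mov"), delegate)

# Record for ten seconds, then stop.
NSRunLoop.currentRunLoop().runUntilDate_(NSDate.dateWithTimeIntervalSinceNow_(10))
output.stopRecording()
session.stopRunning()
```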
I'm trying to build a sort of camera app using Codename One; taking a picture is no problem. But I want to stream the camera feed to the background, like a regular camera app on mobile phones, so you can actually see what you're about to film or photograph.
We don't currently support some of the more elaborate AR APIs introduced by Google/Apple, but we do support placing a camera viewfinder right into your app with a new cn1lib: https://github.com/codenameone/CameraKitCodenameOne
Since this is implemented in a library you can effectively edit the native code and add functionality as needed.
The original answer is out of date by now; I'm keeping it below for reference:
You can record video or take a photo with Codename One.
However, augmented reality type applications where you can place elements on top of the camera viewfinder are currently not supported by Codename One. This functionality is somewhat platform specific and hard to implement in a portable manner.