From the Qt5 documentation I gather that there are many widgets and classes that deal with camera input. On the other hand, the documentation reads a lot as if it were intended for mobile phone cameras or even real cameras, with a viewfinder, record and snapshot buttons, etc.
All I want is a widget inside my desktop Qt5 program that shows the video stream of my webcam (/dev/video0, v4l2), with all parameters controlled from code: resolution, brightness, and whatever else the camera supports. No GUI elements.
Minimal but working code examples are appreciated, either C++/Qt5 or PyQt5, but a hint about which classes I should use and how they fit together would be a start as well.
Thank you very much!
P.S. Please, no answers that consist only(!) of a link to a documentation page as if it were self-explanatory. There is a camera example, but it did not help me much; otherwise I would not have to ask here.
Documentation like this http://qt-project.org/doc/qt-5/qtmultimediawidgets-camera-example.html is really all you need.
A minimal working example follows.
(Tested on Ubuntu with a PS Eye camera. If it is the only camera in the system, you don't have to specify the device path.)
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include <QCamera>
#include <QMediaPlayer>
#include <QVideoWidget>
MainWindow::MainWindow(QWidget *parent) :
QMainWindow(parent),
ui(new Ui::MainWindow)
{
ui->setupUi(this);
camera = new QCamera(this);
videoWidget = new QVideoWidget();
ui->mainLayout->addWidget(videoWidget);
camera->setViewfinder(videoWidget);
camera->start();
}
MainWindow::~MainWindow()
{
delete ui;
}
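The question also asks for resolution and brightness to be controlled from code. Below is a hedged C++ sketch of how that can be attempted with QCameraViewfinderSettings (Qt 5.5 and later) and QCameraImageProcessing (brightness control arrived in a later Qt 5 minor release, around 5.7). The helper name configureCamera and the concrete values are made up for illustration, and whether the calls have any effect depends on what the V4L2 backend and the camera actually support.

#include <QCamera>
#include <QCameraViewfinderSettings>
#include <QCameraImageProcessing>

// Hypothetical helper: call after the camera has been created (and ideally after
// it has been started/loaded, so the backend knows the supported settings).
void configureCamera(QCamera *camera)
{
    // Resolution and frame rate (requires Qt >= 5.5).
    QCameraViewfinderSettings settings;
    settings.setResolution(1280, 720);      // example values
    settings.setMinimumFrameRate(30.0);
    settings.setMaximumFrameRate(30.0);
    camera->setViewfinderSettings(settings);

    // Brightness, contrast, white balance and similar controls live in
    // QCameraImageProcessing; not every backend implements them.
    QCameraImageProcessing *processing = camera->imageProcessing();
    if (processing->isAvailable()) {
        processing->setBrightness(0.2);     // roughly -1.0 .. 1.0
        processing->setContrast(0.1);
        processing->setWhiteBalanceMode(QCameraImageProcessing::WhiteBalanceAuto);
    }
}

camera->supportedViewfinderResolutions() can be used to query which resolutions the device actually offers before picking one.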
Related
I'm using an AVCaptureSession to create a screen recording (OS X), but I'd also like to add the computer audio to it (not the microphone, but anything that's playing through the speakers). I'm not really sure how to do that, so the first thing I tried was adding an audio device like so:
AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
After adding this device the audio was recorded, but it sounded like it was captured through the microphone. Is it possible to actually capture the computer's output sound this way, like QuickTime does?
Here's an open-source framework that supposedly makes capturing speaker output as easy as taking a screenshot:
https://github.com/pje/WavTap
The home page for WavTap does mention that it requires kernel extension signing privileges to run under MacOS 10.10 & newer, and that requires signing into your Apple Developer Account and submitting this form. More information can be found here.
Using the supplied Android demo from
https://developer.sony.com/downloads/all/sony-camera-remote-api-beta-sdk/
I connected to the Wi-Fi connection of a Sony QX1. The sample application finds the camera device and is able to connect to it.
The liveview is not displaying correctly. At most one frame is shown, and the code hits an exception in SimpleLiveViewSlicer.java:
if (commonHeader[0] != (byte) 0xFF) {
    throw new IOException("Unexpected data format. (Start byte)");
}
Shooting a photo does not seem to work. Zooming does work (the lens moves). The camera works fine when using the PlayMemories app directly, so it is not a hardware issue.
Hoping for advice from Sony on this one; standard hardware and the demo application should work.
Can you provide some details of your setup?
What version of Android SDK are you compiling with?
What IDE and OS are you using?
Have you installed the latest firmware? (http://www.sony.co.uk/support/en/product/ILCE-QX1#SoftwareAndDownloads)
Edit:
We tested the sample code using a QX1 lens and the same setup as yours and were able to run it just fine.
One thing to check is whether the liveview is ready to transfer images. To confirm whether the camera is ready to transfer liveview images, the client can check “liveviewStatus” status of “getEvent” API (see API specification for details). Perhaps there is some timing issue due to connection speed that is causing the crash.
For a university project I'm working on a DJ mixing app. I'm essentially tackling this project in a 'teach yourself from scratch by googling everything and analysing pre-existing source code' kind of way, so go easy.
I have looked at the Mixer Host sample project from apple found here: http://developer.apple.com/library/ios/#samplecode/MixerHost/Introduction/Intro.html#//apple_ref/doc/uid/DTS40010210
I can't work out how to replace the preselected audio files (guitar + beat) with a song URL from the iPod library chosen via a media picker, or, in this case, two media pickers.
Is it a case of grabbing the URL of the selected iPod library song and putting it in place of the URL of the preselected audio file?
If someone could point me in the right direction, tell me how I'm completely going about this the wrong way, or even do the coding for me (joke), it would be greatly appreciated.
You can't actually stream straight from the iPod library; you need to copy the files into the app's Documents directory.
Try this: http://www.subfurther.com/blog/2010/12/13/from-ipod-library-to-pcm-samples-in-far-fewer-steps-than-were-previously-necessary/
You can use third-party libraries to play iPod library songs using AudioUnit. The link below may be useful to you:
Click here!
Is it possible to open the camera capture and force it to take black&white pictures?
The CameraCaptureUI API doesn't provide the ability to change the color/BW of the camera, but you can add a media capture filter to make the change.
This SDK sample should give you what you need. Be aware that adding the grayscale filter requires a C++ project (which is included in the sample):
http://code.msdn.microsoft.com/windowsapps/Media-Capture-Sample-adf87622
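For orientation, here is a rough C++/CX sketch of that approach: the grayscale media foundation transform shipped with the sample is attached to the preview stream through MediaCapture::AddEffectAsync. It assumes the sample's GrayscaleTransform component is referenced by the project and registered under the class name used below, so treat it as a sketch rather than a drop-in solution.

#include <ppltasks.h>

using namespace Windows::Media::Capture;
using namespace concurrency;

// Sketch: assumes the GrayscaleTransform WinRT component from the SDK sample is
// referenced and exposes the "GrayscaleTransform.GrayscaleEffect" activatable class.
void StartGrayscalePreview(MediaCapture^ mediaCapture)
{
    create_task(mediaCapture->InitializeAsync()).then([mediaCapture]()
    {
        // Attach the grayscale MFT to the preview stream; photo/video capture
        // would need the effect added to their stream types as well.
        return create_task(mediaCapture->AddEffectAsync(
            MediaStreamType::VideoPreview,
            L"GrayscaleTransform.GrayscaleEffect",
            nullptr));
    });
}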
I need to implement an audio recording feature in my Mac app. I am using the Apple sample
http://developer.apple.com/library/mac/#samplecode/AudioDataOutputToAudioUnit/Listings/main_m.html. Everything is working fine; the only issue is that the audio file created echoes during playback. Please help! I later checked, and the Apple sample has the same problem.
From the description of the sample code:
The built application uses a QTCaptureSession with a QTCaptureDecompressedAudioOutput to capture audio from the default system input device, applies an effect to that audio using a simple effect AudioUnit, and writes the modified audio to a file using the CoreAudio ExtAudioFile API.
From CaptureSessionController.m:
/* Create an effect audio unit to add an effect to the audio before it is written to a file. */
OSStatus err = noErr;
AudioComponentDescription effectAudioUnitComponentDescription;
effectAudioUnitComponentDescription.componentType = kAudioUnitType_Effect;
effectAudioUnitComponentDescription.componentSubType = kAudioUnitSubType_Delay;
It looks like this delay is intentional, as part of the demo.
Well, I found the answer myself. You just need to comment out the following line in CaptureSessionController.m:
effectAudioUnitComponentDescription.componentSubType = kAudioUnitSubType_Delay;