Exposure settings in uvcdynctrl for a Logitech C920? - webcam

I managed to set the exposure, as well as every other controllable value of the Logitech C920 webcam, manually to a fixed value. (I am doing computer vision research with it.) However, I cannot find any source that tells me which unit the exposure value is in:
uvcdynctrl -d /dev/video1 -s "Exposure (Absolute)" -- 10
It seems to go as low as 3 and as high as 255, with lower values resulting in darker images (shorter shutter speed / exposure time).
Can anyone tell me how these values map to milliseconds?
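For what it's worth, the UVC specification defines the Exposure Time (Absolute) control in units of 0.0001 s (100 µs). Assuming the C920 follows the spec here (worth verifying against the camera, since not every device does), the conversion is just a scaling:

```python
# Assuming UVC-spec units for "Exposure (Absolute)":
# 1 unit = 0.0001 s = 100 microseconds = 0.1 ms.
def exposure_value_to_ms(value: int) -> float:
    """Convert a UVC 'Exposure (Absolute)' value to milliseconds."""
    return value * 0.1

if __name__ == "__main__":
    for v in (3, 10, 255):
        print(f"exposure value {v:3d} -> {exposure_value_to_ms(v):5.1f} ms")
```

Under that assumption, your observed range of 3..255 would correspond to roughly 0.3 ms to 25.5 ms of exposure time.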

Related

RightLight technology in Logitech webcams

I have noticed one quirk of Logitech webcams: if the RightLight option is turned on in the webcam properties, the output frame rate is halved (1920x1080 at 30 fps drops to ~15 fps). For testing I used DirectShow and the Logitech C270, C310 and C920.
Any ideas on how to force a webcam to deliver 30 fps with RightLight enabled?
RightLight halves the framerate as part of how it functions. You are better off adding another light and turning RightLight off.

Machine learning with continuous training/model update (tensorflow or something else)

I am wondering if there are projects/examples with any machine learning library (TensorFlow, etc.) that can do continuous training, in a way that simulates an animal or pet.
What do I mean by animal/pet?
Let's assume I have this hardware robot.
Inputs:
A touch sensor, which returns a number from 0 to 255 depending on the touching force.
Microphone.
Webcam.
Outputs:
A moving module, which can move forward/backward and left/right. Let's say just a simple wheel system with 4 input pins: if I send +5 V (binary 1) to pin 1 it goes forward, to pin 2 backward, to pin 3 left, and to pin 4 right.
Speakers.
Everything is connected to a central computer (a Raspberry Pi, or if that is not enough CPU/memory, a Microsoft Surface Pro with a 4-core i7 3+ GHz CPU and 32 GB RAM).
The idea is to connect the hardware inputs mentioned above to the inputs of a neural network, the hardware outputs to its outputs, and impose these conditions:
Minimize bad feelings and maximize good.
If the touch sensor returns a number above 128, that is a bad feeling (pain); if it returns less than 127, that is a good feeling (petting). Battery below 20% is a bad feeling. A loud noise from the microphone is a bad feeling. In programming terms: three variables to minimize and one to maximize.
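Those conditions could be collapsed into a single scalar reward signal, as in reinforcement learning. A hypothetical sketch (the function name, thresholds and scaling are mine, not from any particular library):

```python
# Hypothetical reward function combining the "feelings" described above
# into one scalar to maximize. Thresholds follow the question's rules.
def reward(touch: int, battery_pct: float, noise_level: float,
           loud_threshold: float = 0.8) -> float:
    r = 0.0
    if touch > 128:                    # hard touch -> pain (bad)
        r -= (touch - 128) / 127.0
    elif touch < 127:                  # gentle touch -> petting (good)
        r += (127 - touch) / 127.0
    if battery_pct < 20.0:             # low battery -> bad
        r -= 1.0
    if noise_level > loud_threshold:   # loud noise -> bad
        r -= 1.0
    return r
```

A continuous-learning agent would then try to pick actions that maximize the long-run sum of this reward, rather than being trained once on a fixed dataset.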
When I connect it all together and switch it on, I will train it like a baby: show it some pictures, say something to it, pet it for good work, etc. Show it where the battery is (maybe I will add a wireless charger so it can recharge by itself). I understand that this will take a long time, maybe years.
My problem is that most of the examples I have found so far work in a train-first-then-use fashion, or use a neural network pre-trained by others. I could not find an example with continuous, simultaneous training and use of a neural network.
Questions:
Is it possible to implement this with current machine learning technologies/libraries (tensorflow, etc)? Let's consider first only software part, if I have unlimited hardware.
If it is not possible, then why?
If it is possible, then links to examples or general approach description will be very helpful.
If it is possible, then what hardware will be needed?
P.S. Of course I do not expect it to be as smart as a human, or even a dog or cat. Maybe like a fly or a mosquito :)
Also, I would like a high-level answer without going very deep into details such as how to implement the moving module, and everything as simple as possible.

GNU Radio and bladeRF on Raspberry Pi (simple FSK system)

I am having a problem porting a GNU Radio setup from a PC (Windows 10, USB 3) to a Raspberry Pi 2 (USB 2). USB bandwidth and CPU should not be the problem, I think (only around 30% utilization while running). Essentially it looks like the RPi is 'pausing' during transmission, while the PC is not. The receiver is running on the PC in both cases. I am including a picture of what I see after the FSK demod when running the transmitter on the PC vs. the Pi (the 'pause' area is circled), as well as a picture of my (admittedly sloppy) flowgraph. Any help/tips are greatly appreciated.
Edit: It appears it may actually be processing limitations. Switching from 9400 baud to 2400 baud makes the issue go away. If anyone has experience with GNU Radio: am I doing anything overly inefficient, or should I just drop the comm rate?
The first thing I would do would be to lower your sample rates.
You don't need 1.5 MS/s if you are going to keep only the lowest 32 kHz in your low-pass filter.
Then you could do the same for your second stage after the quadrature demod if that is not enough (by the way, the sample rate of your second low-pass filter does not seem to match the actual sample rate of that stage, which is still 1.5 MS/s if I'm not mistaken).
Anyway, GNU Radio uses a lot of processing power, so try not to use a sampling rate way above what you actually need ;)
In your case, you could cut the incoming sample rate down to 64k (say 80k for safety). About 18 times fewer samples to process might do the trick :)
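The suggested reduction can be sanity-checked with a little decimation arithmetic (numbers from the answer above; the exact block parameters depend on your flowgraph):

```python
# Back-of-the-envelope check of the suggested sample-rate reduction.
def decimated_rate(samp_rate: float, target_rate: float):
    """Integer decimation factor and the output rate it actually yields."""
    decim = int(samp_rate // target_rate)
    return decim, samp_rate / decim

if __name__ == "__main__":
    decim, out_rate = decimated_rate(1.5e6, 80e3)
    print(decim, out_rate)  # decimate by 18 -> ~83.3 kS/s, ~18x fewer samples
```

A decimating low-pass filter (or a rational resampler) set to this factor would leave the downstream blocks roughly 18 times less work per second.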

FFT/FHT - Specific frequency range - Arduino

So I'm working on a project where I need to analyze audio with an Arduino. Basically it's a light organ, and I need to do some beat detection in order to make the LEDs shift color based on the tempo of the song.
I have successfully managed to get the FFT and FHT libraries from Open Music Labs to work with my Arduino, but the lower end of the spectrum seems to be very narrow in terms of resolution. I have tried to find information on how to broaden the resolution in that area, but with no success.
How can I accomplish this?
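The low end looks narrow because FFT/FHT bins are linearly spaced: each bin spans sample_rate / N Hz, so only a handful of bins fall in the bass range. Lowering the sample rate (or increasing N, which costs RAM on an Arduino) narrows the bins. A quick sketch of the arithmetic (the ~38.5 kHz figure is a commonly quoted free-running ADC rate, an assumption here):

```python
# FFT frequency resolution: each bin spans sample_rate / N Hz.
def bin_width_hz(sample_rate: float, n_points: int) -> float:
    return sample_rate / n_points

def bins_below(freq_hz: float, sample_rate: float, n_points: int) -> int:
    """How many whole FFT bins cover the range 0..freq_hz."""
    return int(freq_hz / bin_width_hz(sample_rate, n_points))

if __name__ == "__main__":
    # ~38.5 kHz ADC rate with a 256-point FHT: ~150 Hz per bin,
    # so only ~1 bin below 200 Hz. At 9.6 kHz you get 5 bins there.
    print(bins_below(200, 38500, 256), bins_below(200, 9600, 256))
```

Since beat detection mostly needs the bass band anyway, sampling slower (and low-pass filtering first to avoid aliasing) is usually the cheapest way to get finer low-frequency resolution.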

How to turn off Video Acceleration programmatically

I'm using the Windows Media Player OCX in a program run on hundreds of (dedicated) computers.
I have found that when video acceleration is set to "Full", on some computers the video fails to play correctly, with green squares between movies and so on. Turn the acceleration to "None" and everything is fine.
This program runs on ~800 computers that auto-update my program, so I want my program to turn off video acceleration at startup.
The question is, how do I turn off video Acceleration programmatically?
All computers are running XP with at least Service Pack 2.
It would take me ages to manually log in to all those computers and change that setting, so that's why I want the program to be able to do it automagically for me.
Using the suggested process of running Procmon and filtering out unnecessary data, I was able to determine the registry changes when this value changed:
Full Video Acceleration:
[HKEY_CURRENT_USER\Software\Microsoft\MediaPlayer\Preferences\VideoSettings]
"PerformanceSettings"=dword:00000002
"UseVMR"=dword:00000001
"UseVMROverlay"=dword:00000001
"UseRGB"=dword:00000001
"UseYUV"=dword:00000001
"UseFullScrMS"=dword:00000000
"DontUseFrameInterpolation"=dword:00000000
"DVDUseVMR"=dword:00000001
"DVDUseVMROverlay"=dword:00000001
"DVDUseVMRFSMS"=dword:00000001
"DVDUseSWDecoder"=dword:00000001
No Video Acceleration:
[HKEY_CURRENT_USER\Software\Microsoft\MediaPlayer\Preferences\VideoSettings]
"PerformanceSettings"=dword:00000000
"UseVMR"=dword:00000000
"UseVMROverlay"=dword:00000000
"UseRGB"=dword:00000000
"UseYUV"=dword:00000000
"UseFullScrMS"=dword:00000001
"DontUseFrameInterpolation"=dword:00000001
"DVDUseVMR"=dword:00000000
"DVDUseVMROverlay"=dword:00000000
"DVDUseVMRFSMS"=dword:00000000
"DVDUseSWDecoder"=dword:00000000
So, in short, set
PerformanceSettings
UseVMR
UseVMROverlay
UseRGB
UseYUV
DVDUseVMR
DVDUseVMROverlay
DVDUseVMRFSMS
DVDUseSWDecoder
to 0, and set
UseFullScrMS
DontUseFrameInterpolation
to 1.
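Since the question asks for a programmatic fix, here is a minimal sketch of writing those values at startup (I'm using Python for illustration; your OCX host application could do the same through whatever registry API it has, and the key path is the one found above with Procmon):

```python
# Write the "no acceleration" DWORDs under HKCU at startup (Windows only).
DWORD_VALUES = {
    "PerformanceSettings": 0, "UseVMR": 0, "UseVMROverlay": 0,
    "UseRGB": 0, "UseYUV": 0, "DVDUseVMR": 0, "DVDUseVMROverlay": 0,
    "DVDUseVMRFSMS": 0, "DVDUseSWDecoder": 0,
    "UseFullScrMS": 1, "DontUseFrameInterpolation": 1,
}
KEY_PATH = r"Software\Microsoft\MediaPlayer\Preferences\VideoSettings"

def disable_video_acceleration(set_dword=None):
    """Apply the values above. `set_dword(name, value)` may be injected
    (e.g. for testing); by default it writes to the real registry."""
    if set_dword is None:
        import winreg  # stdlib, available only on Windows
        key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH)
        def set_dword(name, value):
            winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
    for name, value in DWORD_VALUES.items():
        set_dword(name, value)
```

Because the settings live under HKEY_CURRENT_USER, the write must happen in the session of the user that runs the player, which fits nicely with doing it at program startup.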
It seems you're not the only one with this problem. Here's a link to a blog where the author solves the problem by lowering the hardware acceleration level, tested on Media Player 9, 10 and 11, with a REG script to set the appropriate settings:
http://thebackroomtech.com/2009/04/15/global-fix-windows-media-player-audio-works-video-does-not/
As well as applying this fix, you might check that the affected machines have the latest drivers and codec versions. Finally, if possible, you might consider re-encoding the content to a format that doesn't produce the display problems (if the bug is codec-related).
Using hardware acceleration is certainly more energy-efficient - according to this Intel report, almost twice as much energy is used without acceleration, and as there are 800 machines, there's reason to seek out a green solution.