Show video in a Mono application

I created a simple desktop application in Mono and added a control from the LibVLCSharp project to show video. It works, but the video stutters on a Raspberry Pi.
Are there other ways in Mono to play video smoothly?
The code I have and am trying to run is here:
https://github.com/artbase/monoappwithgtkvideo
The logs I get are below:
/usr/bin/xrandr: Failed to get size of gamma for output default
[021b1658] pulse audio output error: PulseAudio server connection failure: Connection refused
Failed to open VDPAU backend libvdpau_nvidia.so: cannot open shared object file: No such file or directory
[6a200c80] xcb_xv vout display error: no available XVideo adaptor
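The last two log lines suggest VLC is probably falling back to a software video path (no VDPAU, no XVideo adaptor). Before changing anything on the Mono side, it may help to check outside the application which video output modules this Pi's VLC build actually offers and whether one of them plays a test file smoothly. A minimal sketch, assuming a Raspberry Pi build of VLC (the mmal module names are an assumption; use whatever vlc --list reports on your system, and /path/to/test.mp4 is a placeholder):
# list the available video output / codec modules
vlc --list | grep -i -E 'vout|mmal'
# try hardware-accelerated playback of a local test file
cvlc --vout mmal_vout --codec mmal /path/to/test.mp4
If a hardware-accelerated output plays smoothly from the command line, the same libvlc option strings can be passed as string arguments to the LibVLC constructor that LibVLCSharp uses.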

Related

Applications not seeing NoMachine pulseaudio microphone source

I'm remoted into a Linux VM running CentOS 7 via NoMachine. NoMachine presents the client's microphone as a pulseaudio source. I can use Audacity to record from the pulseaudio source.
However, other applications - Chrome, Firefox, Slack, WebEx - don't see or don't recognize the pulseaudio source as a microphone.
test.webrtc.org says [ FAILED ] Failed to get access to local media due to error: NotFoundError.
pacmd list-sources shows:
2 source(s) available.
    index: 1
        name: <nx_voice_out.monitor>
        driver: <module-null-sink.c>
  * index: 2
        name: <nx_audio_in.monitor>
        driver: <module-null-sink.c>
How do I get applications to recognize the pulseaudio source as a microphone?
Got it working by remapping the source:
pacmd load-module module-remap-source master=nx_voice_out.monitor source_name=Microphone
I don't know why this works, since all I've done is essentially rename the source; I've not remapped any properties of the original source. Perhaps applications do not like the .monitor in the name of the original source (the remapped source is a regular, non-monitor source, and WebRTC-based applications typically skip monitor sources when listing microphones).
I also needed to unload the suspend on idle module:
pacmd unload-module module-suspend-on-idle
Otherwise pulseaudio sometimes suspends the remapped source and I'm unable to unsuspend it.
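To avoid re-running both commands after every PulseAudio restart, here is a sketch of making the changes persistent; it assumes a per-user PulseAudio daemon that reads ~/.config/pulse/default.pa (adapt the paths if your setup uses /etc/pulse/default.pa directly):
mkdir -p ~/.config/pulse
cp /etc/pulse/default.pa ~/.config/pulse/default.pa
# never load the suspend-on-idle module
sed -i 's/^load-module module-suspend-on-idle/#&/' ~/.config/pulse/default.pa
# add the remapped source
echo 'load-module module-remap-source master=nx_voice_out.monitor source_name=Microphone' >> ~/.config/pulse/default.pa
pulseaudio -k   # kill the daemon so it restarts with the new config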

nvprof shows error with TensorFlow

I am trying to run nvprof with cifar10_multi_gpu_train.py.
I am using the following command:
/home/ibm/tensorflow/third_party/gpus/cuda/bin/nvprof python cifar10_multi_gpu_train.py
It starts the application, but after some time it shows the following errors and the application exits.
==140659== Warning: Some profiling data are not recorded. Make sure cudaProfilerStop() or cuProfilerStop() is called before application exit to flush profile data.
======== Error: Application received signal 11
Any idea what's going wrong? The file runs just fine without nvprof.
Note: I have TensorFlow 0.12.0 installed and I am on an IBM PPC64le machine.
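For what it's worth, a sketch of a switch that is sometimes suggested when nvprof itself crashes an otherwise working run; whether it helps with this particular segfault on PPC64le is an assumption, and older nvprof builds may not have every flag (check nvprof --help on your installation):
# disable unified-memory profiling, often suggested as a workaround for profiler segfaults
/home/ibm/tensorflow/third_party/gpus/cuda/bin/nvprof \
    --unified-memory-profiling off \
    python cifar10_multi_gpu_train.py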

Debian Camera isn't working

I have never worked in a Debian environment before. I have a problem with the camera; I was looking for answers but found nothing.
I am working in VirtualBox, and the camera is passed through VirtualBox. The camera I am using is my laptop's webcam, a Lenovo EasyCamera. When I launch a program, for example cheese, I get this message:
jakub@debian:~$ cheese
OpenGL Warning: crPixelCopy3D: simply crMemcpy'ing from srcPtr to dstPtr
(cheese:3368): Gtk-WARNING **: Attempting to add a widget with type GtkImage to a GtkToggleButton, but as a GtkBin subclass a GtkToggleButton can only contain one widget at a time; it already contains a widget of type GtkLabel
libv4l2: error turning on stream: Brak miejsca na urządzeniu ("No space left on device")
** (cheese:3368): WARNING **: Error starting streaming on device '/dev/video0'.
** (cheese:3368): WARNING **: Could not negotiate format
When cheese is running, the camera's LED is on, so the camera itself works, but Debian cannot show me the image.
I hope you will know what to do. I appreciate your help.
It appears to be the same as this issue:
libv4l2: error turning on stream: No space left on device
You probably need to enable USB 2.0 in your virtual machine and pass the device into your guest OS using a high-speed USB host controller (EHCI).
You may need the VirtualBox Extension Pack; I remember it being required for USB 2.0 in the past. I can't find any current information about that, though; perhaps EHCI support is now included already.
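If you prefer the command line to the GUI, a sketch of the host-side VBoxManage commands; the VM name "Debian" and the vendor/product IDs are placeholders to substitute with your own (the VM must be powered off for modifyvm):
VBoxManage modifyvm "Debian" --usbehci on   # enable the USB 2.0 (EHCI) controller
VBoxManage list usbhost                     # find the webcam's VendorId / ProductId
VBoxManage usbfilter add 0 --target "Debian" --name "EasyCamera" --vendorid XXXX --productid XXXX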

Linphone source code not working with TCP for local SIP calls

I downloaded the source code of the Linphone app from GitHub (https://github.com/onmyway133/linphone-iphone) and tried to run it on my iPhone. It works fine with the transport set to UDP, but when I set the transport to TCP, outgoing calls work fine while the app does not notify me about any incoming call.
I also tried to trace the network calls by installing Linphone for Mac on my MacBook, but with TCP it does not even start any network requests.
Has anyone faced such an issue, or is there any other way to achieve SIP calling on a local network? Any help is welcome.
The source code at the URL mentioned in the question (https://github.com/onmyway133/linphone-iphone) is not the latest one. I had to check out the latest version from the git URL mentioned at linphone.org, and after several attempts I finally got the complete code. I also had to make a few changes to compile the latest source successfully.
I faced this error while compiling the code in the terminal:
The shell script 'Makefile' at path 'linphone-iphone/submodules/build-i386-apple-darwin/mssilk/sdk' was downloading a corrupt SILK_SDK_SRC_v1.0.9.zip.
Fix: the terminal was downloading only about 600 KB (i.e. a corrupt zip file) from the URL http://developer.skype.com/silk/SILK_SDK_SRC_v1.0.9.zip, so the next command could not unzip it and reported a missing-file error. I changed the default URL to 'http://bkvoice.googlecode.com/files/SILK_SDK_SRC_v1.0.9.zip', and the process was then able to download the full file, which is actually 62.9 MB.
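Before re-running the build, a quick sketch for checking that the archive actually arrived in full; the path is the one from the error message above:
cd linphone-iphone/submodules/build-i386-apple-darwin/mssilk/sdk
ls -lh SILK_SDK_SRC_v1.0.9.zip    # should be roughly 62.9 MB, not ~600 KB
unzip -t SILK_SDK_SRC_v1.0.9.zip  # test the archive without extracting it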
Hope it'll help someone.

Raspberry Pi - Audio Fails After Adding RTC

I have a Raspberry Pi that I'm trying to hook up to walkie-talkies to automatically announce the current time every half hour, plus different status updates.
I had a CRON job running mpg123 that was announcing the time over the walkies perfectly, but then when I installed the drivers for this RasClock module as specified here (https://www.modmypi.com/blog/installing-the-rasclock-raspberry-pi-real-time-clock), all audio stopped working.
speaker-test says:
speaker-test 1.0.25
Playback device is default
Stream parameters are 48000Hz, S16_LE, 1 channels
Using 16 octaves of pink noise
Playback open error: -1,Operation not permitted
and mpg123 says:
[module.c:142] error: Failed to open module jack: file not found
[module.c:142] error: Failed to open module portaudio: file not found
[pulse.c:84] error: Failed to open pulse audio output: Connection refused
[nas.c:220] error: could not open default NAS server
[module.c:142] error: Failed to open module openal: file not found
[audio.c:180] error: Unable to find a working output module in this list: alsa,oss,jack,portaudio,pulse,nas,openal
[audio.c:532] error: Failed to open audio output module
[mpg123.c:897] error: Failed to initialize output, goodbye.
Now, the machine tends to freeze up a lot, too. When I tried suggestions I found online, such as adding "LD_LIBRARY_PATH=/usr/lib/mpg123" or "export LD_LIBRARY_PATH=/usr/lib:/usr/lib/mpg123" before the command, it made no difference.
What little hair I have left thanks you in advance for helping me through this.
I had the same error message with mpg123.
Before this message appeared, I had installed all these packages: mysql-server, build-essential, libmysqlclient-dev, libapache2-mod-wsgi.
I had also changed groups:
# usermod -G anothergroup pi
One of these two manipulations caused my problem (usermod -G without -a replaces the user's supplementary groups, so this likely removed pi from the audio group).
The solution in my case?
Go in the /etc/group file and modify the line beginning with "audio" from this...
audio:x:NN:
to that...
audio:x:NN:pi
N.B.: NN is the GID. pi is the Raspberry Pi's default username.
To achieve the same result, there is also this command:
# usermod -a -G audio pi
Log out from your session and log in again.
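After logging back in, a quick sketch to confirm the change took effect (device paths assume a standard Raspbian/ALSA layout):
groups pi                       # "audio" should now appear in the list
ls -l /dev/snd/                 # the sound devices are owned by group "audio"
speaker-test -t sine -c 1 -l 1  # short playback test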
P.S.: Could somebody add the mpg123 tag? I spent a lot of time without finding this topic, even though I have exactly the same problem with mpg123.
I had the same issue. Running this command should fix it:
modprobe snd_bcm2835
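If that works, a sketch of making the module load at every boot, assuming the classic /etc/modules mechanism (newer Raspbian images instead enable the on-board audio with dtparam=audio=on in /boot/config.txt):
sudo modprobe snd_bcm2835                      # load the on-board audio driver now
echo "snd_bcm2835" | sudo tee -a /etc/modules  # load it automatically at boot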