Playing multiple instances of VLC with audio in python-vlc

I'm new to vlc.py and I'm trying to play two instances of MediaPlayer(), which works, but the second instance does not play audio while the first one plays fine.
I've tried some basic code from Stack Overflow and the python-vlc documentation.

You can use Python's multiprocessing module. That way you can have three processes: one controlling process, and two others that each contain a VLC instance.

I had a similar problem in Java. You have to play the two songs in separate threads. Here is a tutorial on how to do it:
https://realpython.com/intro-to-python-threading/
You can't play two songs in one thread, because the program has to wait until the first song ends.
Also, if you want to perform other actions while playing, create a new thread for BOTH MediaPlayer instances.
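The threaded version can be sketched as below; the vlc calls are shown only in comments (an assumption, with hypothetical file names) and a sleep simulates playback, so the example stands alone:

```python
import threading
import time

results = []

def play_song(name, seconds):
    # In the real program each thread would get its OWN player, e.g.
    #   player = vlc.MediaPlayer(name); player.play()
    # (an assumption here; a sleep simulates the playback time).
    time.sleep(seconds)
    results.append(name)

threads = [
    threading.Thread(target=play_song, args=("first.mp3", 0.05)),
    threading.Thread(target=play_song, args=("second.mp3", 0.05)),
]
for t in threads:
    t.start()   # both songs start nearly simultaneously
for t in threads:
    t.join()    # main thread waits until both finish
print(sorted(results))
```

The key point is that neither thread waits for the other: both `play_song` calls run concurrently, so the songs overlap instead of playing back-to-back.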

Related

How to change audio playrate from the browser console?

I change the playback rate of videos all the time. Whenever I watch YouTube or BBC videos I'm able to speed them up:
document.getElementsByTagName('video')[0].playbackRate = 2.5
It makes learning and going through content a lot better.
I've been trying to do the same with audio and I can't.
I know there might be podcast apps that offer up to 3x speed, but (a) I've only found 2x, (b) not every audio file is a podcast, and most importantly (c) I want to know how to do it myself.
For example Making Sense Podcast
The same technique works with audio, but there's no guarantee that there is an audio tag on the page for you to reference.
The HTMLAudioElement might be instantiated with script:
const audio = new Audio('https://example.com/podcast.webm');
If that's the case, document.getElementsByTagName() has nothing to find.
You will need to modify the script doing the playing.

Background task for simple operations?

I need to play several sounds, and if one sound is already playing, the subsequent sounds will be queued.
I'm using MCI API calls.
Until now, I've been using this sound player as a COM control via Interop.
However, the COM control causes some COM Release errors, so I would like to solve it differently.
I would like to know whether I should solve this with a regular .NET class in my form, using a background task, so that I'm completely independent of how heavy my application's load is and a finished sound can immediately trigger the next one without any delay; or whether that would be overkill, and background tasks should only be used for long, blocking operations.
My sound player basically only uses a timer to check with an API call if the MCI has stopped playing and then starts the next sound.
Also, it provides events when a sound was started, stopped or errored-out.
Or should I encapsulate the player in a separate .NET project and reference it? Then I would also be independent of the main application's workload.
Seamless playing of the sound queue would be essential to me.
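The seamless-queue requirement described above can be sketched language-agnostically; here is a Python sketch of a background worker draining a sound queue, where `play_blocking` is a stand-in for the real (MCI) play call, so the pattern, not the API, is what's illustrated:

```python
import queue
import threading
import time

played = []

def play_blocking(sound):
    """Stand-in for a blocking play call. With an event-driven player
    you would wait on a 'finished' notification instead of polling a
    timer, so the next sound starts with no delay."""
    time.sleep(0.01)
    played.append(sound)

def worker(q):
    while True:
        sound = q.get()
        if sound is None:       # sentinel: shut the worker down
            break
        play_blocking(sound)    # next sound begins immediately after
        q.task_done()

q = queue.Queue()
t = threading.Thread(target=worker, args=(q,), daemon=True)
t.start()
for s in ["chime", "beep", "click"]:
    q.put(s)                    # queued sounds play back-to-back, in order
q.put(None)
t.join()
print(played)
```

The background worker is cheap (it sleeps inside `q.get()` while idle), which suggests a dedicated queue worker is not overkill even for short operations when gapless sequencing matters.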

What OS X events can I access programmatically from Swift?

I'd like to programmatically find the currently running programs (or at least the program in the foreground), and also key events, on OS X.
I found inter-application communication guidelines, but they don't seem to say I can find out what applications are running.
I've found key events, but the documentation seems to imply that the frontmost task is the one that receives key events, and only if it doesn't handle them do they travel up the event chain. I'd like to intercept them programmatically.
Seems dubious, I know. I'm trying to use key events along with screen captures to recognize the text on the screen as best I can; it's for research.
I'm using Swift, but I understand that an Obj-C example is still helpful, since they both use the same libraries.
To get a list of running applications use:
NSWorkspace.sharedWorkspace().runningApplications
You can build a key logger (so to speak) by creating an event monitor (same document you linked, just a different section).

Can I sync multiple live radio streams with video?

Is it possible to sync multiple live radio streams to a pre-recorded video simultaneously and vary the volume at defined time-indexes throughout? Ultimately for an embedded video player.
If so, what tools/programming languages would be best suited for doing this?
I've looked at Gstreamer, WebChimera and ffmpeg but am unsure which route to go down.
This can be done with WebChimera, as it is open source and extremely flexible.
The best possible implementation of this is in QML by modifying the .qml files from WebChimera Player directly with any text editor.
The second best implementation of this is in JavaScript with the Player JS API.
The main difference between the two methods is resource consumption.
The second, JavaScript-only method would require adding one <object> tag for the video, and one more for each audio file you need to play. So for every media source you add to the page, you will need to create a new instance of the plugin.
The first method, built in QML (mostly; knowing JavaScript is needed here too, as it handles the logic behind the QML), would load all your media sources in one plugin instance, with multiple VlcVideoSurface components, each of which has its own Plugin QML API.
The biggest problem I can foresee for what you want to do is the buffering state, as all media sources need to be paused as soon as one video/audio stream starts buffering. Synchronizing them by time should not be too difficult, though.
WebChimera Wiki is a great place to start, it has lots of demos and examples. And at WebChimera Questions we've helped developers modify WebChimera Player to suit even the craziest of needs. :)
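The pause-on-buffering rule described above can be sketched with stub players; the `StubPlayer` class and its flags are assumptions for illustration, not the WebChimera API:

```python
class StubPlayer:
    """Minimal stand-in for a media player exposing a buffering flag.
    (Hypothetical; a real player would report buffering via its API.)"""
    def __init__(self, name):
        self.name = name
        self.buffering = False
        self.paused = False

def sync_tick(players):
    """One control-loop step: if ANY source is buffering, pause all of
    them; once every source is ready again, resume all together."""
    if any(p.buffering for p in players):
        for p in players:
            p.paused = True
    else:
        for p in players:
            p.paused = False

video = StubPlayer("video")
radio_a = StubPlayer("radio_a")
radio_b = StubPlayer("radio_b")
players = [video, radio_a, radio_b]

radio_b.buffering = True
sync_tick(players)
print([p.paused for p in players])  # everything pauses while one stream buffers

radio_b.buffering = False
sync_tick(players)
print([p.paused for p in players])  # everything resumes together
```

Running this control step on a timer (or on the players' buffering events) keeps the pre-recorded video and the live radio streams from drifting apart when one source stalls; per-source volume changes at defined time indexes would hang off the same loop.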

Capture playing sound

I would like to build a program that can catch sound played on Mac OS X, either all sound or from individual programs. Is that possible? I've been reading a lot of documentation but have not found much that looks useful. It could be that I'm just looking in the wrong direction. Can it be done and are there a specific group of APIs that I should focus on?
I haven't tried it, but the idea would be to direct all sound output to an audio "device" (a kernel component) that allows it to be captured. According to this page, you can do that with Soundflower.
If you want to do it programmatically, I'd install the Soundflower driver and look into controlling it from your program.