In Puppeteer, how to share a virtual window/screen while ignoring the fake default media video - WebRTC

In Puppeteer I added the arguments below:
args: [
'--use-fake-device-for-media-stream',
'--use-fake-ui-for-media-stream',
'--use-file-for-fake-audio-capture='+audioFilePath,
'--use-file-for-fake-video-capture='+videoFilePath,
'--enable-usermedia-screen-capturing',
'--allow-http-screen-capture',
'--no-sandbox',
'--auto-select-desktop-capture-source="Entire screen"',
'--allow-file-access'
]
For video and audio this correctly uses the fake video and audio files, but for screen sharing I need it to share the virtual window's screen; instead it shares the fake default media video. How do I fix this? I've tried multiple scenarios with no luck.
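A minimal sketch of a launch configuration, assuming the usual gotcha here: when a flag is passed through Puppeteer's `args` array there is no shell to strip quotes, so `--auto-select-desktop-capture-source="Entire screen"` makes Chrome look for a capture source literally named `"Entire screen"` (quotes included) and fall back to something else. The file paths in the usage comment are placeholders.

```javascript
// Build Chrome flags so getUserMedia() still serves the fake audio/video
// files while getDisplayMedia() auto-selects the (virtual) screen.
// Note: the auto-select flag is written WITHOUT inner quotes, because
// Puppeteer passes args verbatim with no shell quote-stripping.
function buildLaunchArgs(audioFilePath, videoFilePath) {
  return [
    '--use-fake-device-for-media-stream',
    '--use-fake-ui-for-media-stream',
    `--use-file-for-fake-audio-capture=${audioFilePath}`,
    `--use-file-for-fake-video-capture=${videoFilePath}`,
    '--auto-select-desktop-capture-source=Entire screen',
    '--allow-file-access',
    '--no-sandbox',
  ];
}

// Usage (placeholder paths; screen capture needs a real or Xvfb display):
// const puppeteer = require('puppeteer');
// const browser = await puppeteer.launch({
//   headless: false,
//   args: buildLaunchArgs('/tmp/audio.wav', '/tmp/video.y4m'),
// });
```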

Related

How can this webm file detect my web browser?

This video spreading across Discord is very intriguing...
The same webm file behaves as follows in various browsers and players:
- Chrome: "Discord detected on Chromium/Electron"
- Firefox: "Discord detected on Gecko (Firefox)"
- VLC: "Discord detected on Gecko (Firefox)"
- Media Player Classic: "Discord detected on Chromium/Electron"
- Edge: "Discord detected on Chromium/Electron"
- Safari: play bar loads, but only the first frame of the video plays.
- Epiphany: play bar loads, but it won't play at all.
- Edge (Legacy): won't load/play.
- IE11: won't load/play.
- Windows 10 Movies and TV: won't load/play.
I'm well aware that the "Discord" part of the message is baked into the video, but what about the rest?
I had originally suspected that the .webm file was somehow switching to a streaming method and using server-side headers to influence the video content, but I tried in a VM in airplane mode and it still worked.
My second thought is that since .webm-capable browsers can be split into two groups, Chromium-based and Gecko-based, the video has multiple video sources embedded and is taking advantage of a very specific browser incompatibility, like a magic trick that forces you to select a card.

Is there any software or hardware that can mimic an external, non-virtual camera with a custom video input?

Websites can tell whether the selected camera is real or virtual, I think by checking the device instance path (see this comparison of a real vs. virtual camera path: https://i.stack.imgur.com/hKr5y.jpg).
A real webcam tends to have USB in the path, while a virtual one tends to have ROOT in the path.
Is it possible to show the website a custom output, such as a screen recording, a video stream, or an IP camera feed, and make it seem like a real physical webcam? For example, via software or an external device that is detected as a camera but whose output I can choose?

Hikvision MJPEG substream embed to HTML

We're using a couple hundred Hikvision cameras (DS-9664NI-I8) at a university. We want to embed them in a web page hosted locally. The cameras, NVR, and web page are on the same local network.
We don't want to use RTSP; we will use HTTP.
We're able to run the camera sub-stream on the browser using the following URL
http://<username>:<password>@<IP Address>/Streaming/channels/102/httpPreview
But when I embed this as an image src, it won't work, as browsers don't support credentials embedded in the URL.
How to play the camera streams on the web page?
You can disable security on the streams, but whether that is possible depends on your camera and its firmware version. Be aware of the consequences, of course, and secure your stream by other means (a firewall, etc.).

Can the Agora virtual background SDK be used with the WebRTC protocol?

I want to integrate the Virtual Background feature into my application. My website uses the WebRTC protocol, but the virtual background SDK uses the RTMP protocol. Is it possible for me to somehow integrate this virtual background SDK into my application? If yes, how do I do so?
SDK link :
https://www.agora.io/en/blog/agora-io-sdk-version-2-3-av-fallback-background-images-and-more-in-this-release
I think you are misunderstanding the feature you are referring to in the link.
This feature is not segmentation-based virtual backgrounds; it's meant for use when pushing streams from Agora.io to a 3rd-party RTMP URL.
In the setLiveTranscoding configuration you set a background image. When Agora composites multiple streams into one of its predefined templates, the videos display in the given layout, and the background image appears behind the streams so the output isn't black if one of the streams stops broadcasting.
AgoraImage io.agora.rtc.live.LiveTranscoding.backgroundImage:
The background image added to the CDN live publishing stream. Once a background image is added, the audience of the CDN live publishing stream can see it. See AgoraImage.
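Purely illustrative: the shape of the transcoding configuration described above, written as a plain object. The field names mirror the quoted LiveTranscoding/AgoraImage documentation, but the real SDK call (e.g. `setLiveTranscoding` on the RTC engine) takes SDK-specific types, so treat this as a sketch of the concept, not working SDK code; all values are placeholders.

```javascript
// Sketch of the composited CDN-push layout: two side-by-side streams on
// a 640x360 canvas, with a standby background image behind them.
const liveTranscoding = {
  width: 640,          // output canvas size for the composited CDN stream
  height: 360,
  backgroundImage: {   // AgoraImage: visible wherever no stream is drawn
    url: 'https://example.com/standby.png', // placeholder URL
    x: 0,
    y: 0,
    width: 640,
    height: 360,
  },
  // Per-user regions within one of the predefined templates.
  transcodingUsers: [
    { uid: 1, x: 0,   y: 0, width: 320, height: 360 },
    { uid: 2, x: 320, y: 0, width: 320, height: 360 },
  ],
};
```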

getUserMedia: access system output stream

I know how to access audio input devices via getUserMedia() and route them to the WebAudio API. This works fine for microphones and such. But in my use-case, I would rather like to hook into the audio stream of an output device. The use case is that I want to create a spectrum analyser for audio coming from a digital audio workstation (DAW) running on the same PC.
I tried to enumerate the devices and call getUserMedia() with the device id of an audio device, but the stream returned only showed silence data. The only solution I found so far is to install an audio loopback device (like Soundflower on Macs) to route the DAW's output to and then use this as an input device for getUserMedia(). But this will require the user to install 3rd party software.
Is there any way to hook directly into the audio stream of an output device instead, before it is actually sent to the physical device (speaker or external soundcard)?
This can be achieved using the desktop capture APIs (chrome.desktopCapture.chooseDesktopMedia). An example for Chrome is included here
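Note that `chrome.desktopCapture` is only available inside a Chrome extension. In a regular page, a related approach is `getDisplayMedia()` with `audio: true`: in Chrome, sharing a tab (or, on some platforms, the entire screen with "share audio" ticked) yields an audio track of that output, which can feed a WebAudio AnalyserNode. A browser-only sketch; system-audio capture support varies by OS and browser, and the helper below is just FFT bin bookkeeping.

```javascript
// Maps an FFT bin index to the frequency it represents.
function binFrequency(binIndex, sampleRate, fftSize) {
  return (binIndex * sampleRate) / fftSize;
}

// Browser-only: capture shared tab/system audio and expose its spectrum.
async function startSpectrumAnalyser() {
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true, // required; audio-only getDisplayMedia requests are rejected
    audio: true, // ask Chrome to include the shared surface's audio
  });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const bins = new Uint8Array(analyser.frequencyBinCount);
  setInterval(() => {
    analyser.getByteFrequencyData(bins);
    // bins[i] is the magnitude at binFrequency(i, ctx.sampleRate, 2048).
  }, 100);
}
```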