https://cryptsy.tv/#!/test
(To broadcast, click join channel then broadcast)
(To join a broadcast just click join channel)
Whenever I use the above example on Edge, it glitches and shows old frames during playback, which makes it look as if the person is teleporting around the screen. Is anyone else observing this behavior? I am using two RTC connections, sending audio and video separately, then joining them into a single MediaStream.
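For reference, this is roughly how I'm combining the two connections (simplified; the variable names are just illustrative):

var tracks = {};
var pcVideo = new RTCPeerConnection(); // carries only the video track
var pcAudio = new RTCPeerConnection(); // carries only the audio track

function attach() {
  // once both tracks have arrived, merge them into one stream for playback
  if (tracks.video && tracks.audio) {
    var merged = new MediaStream([tracks.video, tracks.audio]);
    document.querySelector('video').srcObject = merged;
  }
}

pcVideo.ontrack = function (e) { tracks.video = e.track; attach(); };
pcAudio.ontrack = function (e) { tracks.audio = e.track; attach(); };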
https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/19298399/
Related
I have an MP4 which concatenates video from two separate camera streams of different resolutions. It plays back correctly in VLC, but Chrome and Windows Media Player fail to handle the change in resolution: the second half of the video is totally distorted.
Tools which read an MP4 file and provide technical data all report a single AVC1 video track of resolution #1.
ffprobe shows:
Duration: 00:00:34.93, start: 0.000000, bitrate: 825 kb/s
Stream #0:0(und): Video: h264 High avc1, yuv420p, 2688x1520 (Resolution #1) , 823 kb/s, SAR 189:190 DAR 15876:9025, 13.42 fps, 15 tbr, 90k tbn, 30 tbc
Deconstructing the MP4 (using onlinemp4parser.com and gpac.github.io/mp4box.js/test), it shows one track atom (trak) which contains one media atom (mdia). Further down, inside the sample description atom (stsd), there are two AVC1 entries, and they describe the two resolutions correctly.
e.g.
trak -> mdia -> minf -> stbl -> stsd
AVC1 (showing resolution #1)
AVC1 (showing resolution #2)
These tools also show resolution #1 under the track header (tkhd).
Can someone comment on how the final playback resolution is determined? Or why VLC is able to correctly read the second AVC1 block out of the sample description? Is there a different way to express the change in resolution between the samples that browsers will interpret correctly?
The best way I've found to do this is to ensure the SPS/PPS (and VPS for H.265) NAL units precede every I frame; if anyone knows of a better way, please feel free to share. Even then it isn't always perfect, because I assume the players just don't expect to handle the video parameters changing mid-stream.
Here's a quick reference of how some players handle this as of 12/5/2022:
Chrome:
Regular playback is good; seeking initially shows corruption, then resolves.
Firefox:
Regular playback does not resize the player on the transition, and the rightmost and bottom pixels of the video fill the remainder. Seeking shows corruption that eventually resolves itself, but the player still doesn't resize.
Edge:
Regular playback shows only the top-left corner on the transition, but eventually corrects itself. Seeking shows corruption, the size corrects itself, then it just jumps to the end and remains corrupted.
Media Player:
Regular playback is good; seeking shows corruption (note you must click the spot you want to jump to rather than sliding the scrubber).
VLC is good in pretty much all cases. It's just much more resilient than most other players.
I'm using the Godot engine to develop a multiplayer LAN Wi-Fi game. At some point the game gives a player a task to solve; the task is a mini-game with some random aspects. One player should control and solve the task while the other players just watch and should not be able to control anything. I want to know how to display exactly what's happening on that player's screen to the rest of the players.
What I would recommend is to record that player's game screen and stream the recording to the other players live; this is going to take some bandwidth, though.
(https://github.com/henriquelalves/GodotRecorder)
Also, when sending a screen recording, it's just a matrix array or a pool byte array in Godot, I think.
Another way is to get the player's movement and location and set the camera to that exact location. You could also just use two cameras and a split screen to see both the current player and the other player.
A-Frame newbie here: I am trying to position a menu in the current view of the A-Frame camera (adding the related entities as children of the camera entity, as suggested elsewhere as a way to create a heads-up display).
I want to position it at the top left of what the user currently sees. Specifying fixed x and y coordinates therefore does not quite work, since the camera's field of view depends on, e.g., the screen resolution of the client the browser is running on (so on some devices the menu can be seen, while on others one would have to move the camera back or turn to see it).
Is there a way to find out at which x,y coordinates (relative to the camera) the top left of the view is? Or is there another solution for placing a menu in constant view (one that works in a classical browser window as well as in full screen and VR mode)?
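To make the question concrete, this is the kind of computation I imagine (a rough sketch using the underlying three.js camera; the component name and the distance value are just placeholders):

// attach this to the menu entity, which is a child of the camera entity
AFRAME.registerComponent('top-left-anchor', {
  schema: { distance: { default: 1 } }, // how far in front of the camera to place the menu
  tick: function () {
    var cam = this.el.sceneEl.camera; // the active THREE.PerspectiveCamera
    if (!cam) { return; }
    var d = this.data.distance;
    // half the visible height/width of the view frustum at distance d
    var halfH = d * Math.tan((cam.fov * Math.PI / 180) / 2);
    var halfW = halfH * cam.aspect;
    // top-left corner of the view, in the camera's local coordinates
    this.el.object3D.position.set(-halfW, halfH, -d);
  }
});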
I am testing HTML5's video API. The plan is to have a video play with an effect, like making it black and white. I have a video element and a canvas working together using a buffer: I take the current video frame and copy it to the scratch buffer where I can process it. The problem is the rate at which it runs.
The HTML5 video API has the 'timeupdate' event. I tried using this to have the handler process frames, once per frame, but it fires at a slower rate than the video plays.
Any ideas to speed up processing frames?
You can get much more frequent redraws by using requestAnimationFrame to determine when to update your canvas, rather than relying on timeupdate, which only updates every 200-250ms. It's definitely not enough for frame-accurate animation. requestAnimationFrame will update at most every 16ms (approx 60fps), but the browser will throttle it as necessary and sync with video buffer draw calls. It's pretty much exactly what you want for this sort of thing.
Even with higher frame rates, processing video frames with a 2D canvas is going to be pretty slow. For one thing, you're processing every pixel sequentially in the CPU, running Javascript. The other problem is that you're copying around a lot of memory. There's no way to directly access pixels in a video element. Instead, you have to copy the whole frame into a canvas first. Then, you have to call getImageData, which not only copies the whole frame a second time, but it also has to allocate the whole block of memory again, since it creates a new ImageData every time. Would be nice if you could copy into an existing buffer, but you can't.
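For example, the drawing/processing loop might look roughly like this (a minimal sketch, reusing the #myvideo and #mycanvas ids from the example further down):

var video = document.getElementById('myvideo');
var canvas = document.getElementById('mycanvas');
var ctx = canvas.getContext('2d');

function draw() {
  if (!video.paused && !video.ended) {
    // copy the current video frame into the canvas...
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    // ...then copy it again into an ImageData buffer we can read
    var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
    var data = frame.data;
    for (var i = 0; i < data.length; i += 4) {
      // simple luma approximation to desaturate each pixel
      var gray = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
      data[i] = data[i + 1] = data[i + 2] = gray;
    }
    ctx.putImageData(frame, 0, 0);
  }
  requestAnimationFrame(draw);
}
requestAnimationFrame(draw);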
It turns out you can do extremely fast image processing with WebGL. I've written a library called Seriously.js for exactly this purpose. Check out the wiki for a FAQ and tutorial. There's a Hue/Saturation plugin you can use - just drop the saturation to -1 to get your video to grayscale.
The code will look something like this:
var composition = new Seriously();                  // create a new composition
var effect = composition.effect('hue-saturation');  // add a hue/saturation effect node
var target = composition.target('#mycanvas');       // render output into this canvas
effect.source = '#myvideo';                         // feed the video element into the effect
effect.saturation = -1;                             // full desaturation = grayscale
target.source = effect;                             // connect the effect to the render target
composition.go();                                   // start rendering
The big down side of using WebGL is that not every browser or computer will support it - Internet Explorer is out, as is any machine with old or weird video drivers. Most mobile browsers don't support it. You can get good stats on it here and here. But you can get very high frame rates on pretty large videos, even with much more complex effects.
(There is also a small issue with a browser bug that, oddly enough, shows up in both Chrome and Firefox. Your canvas will often be one frame behind the video, which is only an issue if the video is paused, and is most egregious if you're skipping around. The only workaround seems to be to keep forcing updates, even if your video is paused, which is less efficient. Please feel free to vote those tickets up so they get some attention.)
I'm doing some stereoscopic work, which means I need to work with two instances of various filters (e.g. a camera source filter that receives an IP stream), and this is proving not to be trivial.
I even tried copying IPCamfilter.ax to a renamed copy and manually making new CLSID entries in the registry; the clone shows up, but won't work. Any ideas?
Should I edit the cloned filter's binary to change its CLSID and then register it? Or is there a simple way to do this with GraphEdit?
Do you work with two cameras, or with one camera from which you want two pictures?
In the first case, there are some filters which only work with one connected device (in the case of e.g. FireWire, the cameras have to be connected to two different controllers).
In the latter case, you can use the Infinite Pin Tee Filter to get two streams from the one device. You can test that in GraphEdit as well.
There's nothing in COM that prevents you from creating two instances of the same CLSID, so you're solving the wrong problem by trying to change the CLSID. There must be something in the filter's internals that prevents multiple use in the same process.
If you can't get access to the source to fix it, you could have two capture graphs in separate processes and then use a bridge of some sort to combine the two outputs in a third graph (or in your application).
SplitCam is a freeware virtual video clone and video driver for connecting several applications to a single video capture source. Usually, if you have a camera connected to your PC, you cannot use it in more than one application at the same time, and there is no standard Windows option that makes this possible. SplitCam allows you to easily multiply your video source in any conferencing software like ICQ, Yahoo, MSN Messenger, or whatever.
Video Processing Filter is a powerful transform filter that can rotate video by 90, 180, or 270 degrees (keeping the aspect ratio when rotating by 90 or 270 degrees), flip the video, convert an RGB video stream to grayscale, and invert colors, in any DirectShow-based application.