Streamlit-webrtc with custom video source

I'm working on a traffic detection project where I get a video stream from a YouTube livestream using CamGear and then process each frame with OpenCV. I want to display the final video with detections in a Streamlit app. I tried passing the processed frames to st.frame, but for some reason only the first few frames are shown and then the video freezes. Now I am looking at using the webrtc component, but I can't figure out how to change the video source from the webcam. Thanks.
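One common cause of a Streamlit video feed freezing is creating a new image element on every frame instead of updating a single one. A minimal sketch of the pipeline described above, using a single st.empty() placeholder, might look like this (the stream URL is a placeholder, and this assumes the streamlit, opencv-python, and vidgear packages are installed):

```python
# Sketch only: the stream URL is a placeholder; assumes `streamlit`,
# `opencv-python`, and `vidgear` are installed.

def every_nth(frames, n):
    """Yield every n-th item; throttling the display this way keeps the UI responsive."""
    for i, frame in enumerate(frames):
        if i % n == 0:
            yield frame

def frames_from(stream):
    """Adapt CamGear's read() API to an iterator (read() returns None at end of stream)."""
    while True:
        frame = stream.read()
        if frame is None:
            return
        yield frame

def run_app(source_url="https://www.youtube.com/watch?v=PLACEHOLDER"):
    import streamlit as st
    from vidgear.gears import CamGear

    stream = CamGear(source=source_url, stream_mode=True).start()
    placeholder = st.empty()  # one element, updated in place instead of re-created
    try:
        for frame in every_nth(frames_from(stream), 2):
            # ... run the OpenCV detection on `frame` here ...
            placeholder.image(frame, channels="BGR")  # OpenCV frames are BGR
    finally:
        stream.stop()
```

In a real app you would call run_app() at the bottom of the script and launch it with `streamlit run app.py`; this side-steps webrtc entirely, which may be enough if the goal is only to display processed frames rather than to stream them to a browser peer.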

Related

How to Read a Video stream from OTG input in react-native (expo) - reduce the frame-rate, convert to grayscale and display it?

I am very new to React. I'm trying to build an Android application with React Native (Expo) which requires reading a video input from an HDMI-to-USB converter plus an OTG cable.
Video Capture Card or HDMI to USB 2.0 Converter – Live Streaming
I need to:
a) read the input video stream
b) reduce the frame rate of the video to 1 fps
c) convert it to grayscale
d) display it.
Can anyone suggest an approach for accomplishing the above steps?
What is the standard method for this job?
Is there a tutorial I can follow?
The react-native-video component:
a) can read a video stream
b) can change the play rate
c) can convert it to grayscale via FilterType.MONOCHROME (only available on iOS devices)
d) can display it.
Here is a brief article about Video Live Streaming with React Native, and there are YouTube videos about react-native-video.
If possible, recording the video directly at 1 fps with a grayscale effect would be quite efficient. If that isn't possible, it may be better to perform the operations on the server side (reducing the frame rate and adding the grayscale effect) and send the final version of the video to the application, although server-side processing can introduce some delay.
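The server-side route mentioned above can be a single ffmpeg invocation, combining the `fps` and `format=gray` filters. A sketch of such a helper (file paths are placeholders; actually running it requires a local ffmpeg binary):

```python
# Sketch: build an ffmpeg command that drops a video to 1 fps and converts
# it to grayscale on the server. Paths are placeholders; requires ffmpeg.
import subprocess

def build_ffmpeg_cmd(src, dst, fps=1):
    """Compose the argument list: -vf chains the fps and grayscale filters."""
    return [
        "ffmpeg",
        "-i", src,
        "-vf", f"fps={fps},format=gray",
        dst,
    ]

def transcode(src, dst, fps=1):
    """Run the conversion; raises CalledProcessError if ffmpeg fails."""
    subprocess.run(build_ffmpeg_cmd(src, dst, fps), check=True)
```

Doing this once on the server keeps the mobile app simple, at the cost of the delay noted above.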

Preload first frame of video using react native video so user won't have to wait while playing

In my app I have a list of videos, like Instagram, and what I want is to preload the first frame of each video so that when users open the app they don't have to wait for it to load.
I am using react-native-video. I cannot cache the videos because caching uses too much memory and crashes the app.
Any help/suggestions appreciated.
Try using ffmpeg-kit to generate the video frame and then render it.
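ffmpeg-kit runs an ordinary ffmpeg command on the device, so the core of this suggestion is the ffmpeg arguments for grabbing the first frame as a poster image. A sketch of those arguments (written here in Python to match the other examples; file paths are placeholders, and the assumption that ffmpeg-kit's execute() takes a single command string should be checked against its docs):

```python
# Sketch: the ffmpeg arguments that ffmpeg-kit would run on-device to pull
# the first frame out as a poster image. Paths are placeholders.
import shlex

def first_frame_args(video_path, image_path):
    """-frames:v 1 stops after one decoded frame; the .jpg extension selects JPEG output."""
    return ["-i", video_path, "-frames:v", "1", image_path]

def first_frame_command(video_path, image_path):
    """ffmpeg-kit is typically given one command string, so join and quote the args."""
    return shlex.join(first_frame_args(video_path, image_path))
```

The resulting image can then be shown as the video's placeholder until playback starts.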

Snapchat style captions on recorded videos

I am using Expo, and I am trying to implement a feature similar to Snapchat's/Instagram's draw-on-video and add-text/caption-to-video before uploading. My problem is not the UI part but editing the original video and getting a URI for the new video.
I know that with images you can use libraries like expo-pixi and then take a snapshot of a view, but I am not sure how to go about this for recorded videos specifically.
Would anyone be kind enough to point me in the right direction?

How to get frames from video file in react native expo

I want to run a TensorFlow MobileNet model on a pre-recorded video. The only way I have found to extract frames is to use ffmpeg, but the thing is that I need to keep the app in Expo. Is there any way I could get frames from the video and run the model on them?
Maybe by getting the current frame from expo-av, or something else.

Agora WebRTC Change video call view

I am working on a react-native project which uses Agora.io for video calling.
In a video call it shows my camera feed fullscreen and the receiver's feed as a thumbnail, which is the opposite of the correct way.
I want to know: is this the way Agora works, or is it possible to fix this?
Even on their website they have put the images that way.
image on the home page
I appreciate any help with fixing this.
It seems like you are rendering the local video stream in the larger view. You need to switch this: render the remote video stream in the larger view and the local video stream in the thumbnail view.