I'm streaming both RTMP and HLS (the latter for iOS and Android). With RTMP, video.js displays the correct currentTime. As I understand it, currentTime should reflect when the stream started, not when the client started viewing it. But with the HLS stream, currentTime returns when the client started playing, not when the stream started (same result with any player on Android or iOS, or with VLC).
Using ffprobe on my HLS stream I get the correct values, i.e. when the stream started, which makes me believe I should start looking at the client for a solution, but I'm far from sure.
So please help me get in the right direction to solve this problem.
That is, is it in the nature of HLS not to give a correct currentTime? But then it's strange that ffprobe gives the correct answer.
I can't find anything in the video.js code about reading any other timecode.
Or is my server generating a wrong SMPTE timecode for HLS, while ffprobe uses some other means to get the correct currentTime?
Anyway, I'm just curious; I have a workaround: by counting the fragments used from the start, I can at least get within a 5-second ballpark, which is good enough for my case.
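The fragment-counting workaround could be sketched roughly like this, assuming the live playlist exposes the standard #EXT-X-MEDIA-SEQUENCE and #EXT-X-TARGETDURATION tags (the function name and the sample playlist are made up for illustration):

```python
def approximate_stream_offset(playlist_text):
    """Estimate seconds elapsed since the stream started by multiplying
    the number of fragments already produced by the target duration.
    Only accurate to within one fragment length (a few seconds)."""
    media_sequence = 0
    target_duration = 0
    for line in playlist_text.splitlines():
        if line.startswith("#EXT-X-MEDIA-SEQUENCE:"):
            media_sequence = int(line.split(":", 1)[1])
        elif line.startswith("#EXT-X-TARGETDURATION:"):
            target_duration = int(line.split(":", 1)[1])
    return media_sequence * target_duration

# Hypothetical live playlist that has been running for a while:
sample = """#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-MEDIA-SEQUENCE:120
#EXTINF:5.0,
segment120.ts
"""
print(approximate_stream_offset(sample))  # → 600
```

Since #EXT-X-MEDIA-SEQUENCE counts fragments that have already rotated out of the playlist, this gives the ballpark figure described above without any extra bookkeeping on the client.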
Thanks for any help or input.
BR David
RTMP and HLS work in different ways.
RTMP is a continuous stream: when you subscribe, you join the stream already in progress, so the start time is when the stream itself began.
HLS works differently. When you subscribe to an HLS stream, a stream is created for you, so the current time starts from when that HLS stream was created, i.e. when you subscribed.
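That said, if the server happens to write #EXT-X-PROGRAM-DATE-TIME tags into the playlist (not all packagers do), a client can recover the wall-clock time of the stream instead of the relative time. A minimal sketch, with a made-up playlist:

```python
from datetime import datetime

def first_program_date_time(playlist_text):
    """Return the wall-clock timestamp of the first segment in the
    playlist, taken from its #EXT-X-PROGRAM-DATE-TIME tag (or None)."""
    for line in playlist_text.splitlines():
        if line.startswith("#EXT-X-PROGRAM-DATE-TIME:"):
            # Tag value is an ISO 8601 date-time, e.g. 2014-03-05T11:15:00+00:00
            return datetime.fromisoformat(line.split(":", 1)[1])
    return None

sample = """#EXTM3U
#EXT-X-TARGETDURATION:5
#EXT-X-PROGRAM-DATE-TIME:2014-03-05T11:15:00+00:00
#EXTINF:5.0,
segment0.ts
"""
print(first_program_date_time(sample))
```

From there, "time since the stream started" is just the difference between now and the first segment's timestamp, plus the segment offsets.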
I'm trying to take two videos and transform them with ffmpeg into a single video. It works great if you take the two videos, run them through ffmpeg and then serve that file up via an API. Unfortunately the upper range for these videos is ~20 minutes, and this method takes too long to create the full video (~30 seconds w/ ultrafast).
I had an idea to stream the output of the ffmpeg command to the client which would eliminate the need to wait for ffmpeg to create the whole video. I've tried to proof this out myself and haven't had much success. It could be my inexperience with streams, or this could be impossible.
Does anyone know if my idea to stream the in-progress output of ffmpeg is possible / feasible?
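Streaming ffmpeg's in-progress output is feasible in principle, provided you write to a pipe with a streamable container (e.g. MPEG-TS or fragmented MP4; plain MP4 won't work, because its moov atom is written at the end). A rough Python sketch of relaying a child process's stdout chunk by chunk, where the commented ffmpeg arguments are illustrative only:

```python
import subprocess

def stream_process_output(cmd, chunk_size=64 * 1024):
    """Launch a command and yield its stdout in chunks as they arrive,
    so a web handler can forward bytes before the process finishes."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    try:
        while True:
            chunk = proc.stdout.read(chunk_size)
            if not chunk:
                break
            yield chunk
    finally:
        proc.stdout.close()
        proc.wait()

# With ffmpeg you would use something like (illustrative, not tested):
# stream_process_output(["ffmpeg", "-i", "a.mp4", "-i", "b.mp4",
#                        "-f", "mpegts", "pipe:1"])

# Demonstration with a harmless stand-in command:
data = b"".join(stream_process_output(["echo", "hello"]))
print(data)
```

The web framework's streaming-response mechanism would then consume this generator, so the client starts receiving bytes as soon as ffmpeg emits them.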
You should check out Hangfire. I used it to run the process in the background, and if you need a notification when it's done, SignalR will help you.
What do you mean by "streaming"? Serving the result of your command to an HTTP client on the fly? Or is your client some video player that plays the video (like a VLC instance receiving a TCP stream from 4 IP cameras)?
Dealing with video isn't a simple task, and you need to choose your protocols, tools and even hardware carefully.
Based on the command you sent as an example, you probably need some jobs that convert your videos.
Here's a complete article on how to use Azure Batch to process videos with ffmpeg. You can use any batching solution you like (another answer suggests Hangfire, and that's fine too).
As the title says, is there any method I can use to play multiple videos continuously with a simple RTMP client (my RTMP server is Wowza)? Here is what I'm thinking:
When the first video is about to finish, open a new thread to send a new createStream command and a new play command, collect the incoming RTMP video packets into a buffer list, and then, when the first video finishes, play the RTMP packets from the buffer list.
Is this approach workable, or are there other recommended ways to achieve this? Any suggestions would be appreciated, thanks!
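The double-buffering idea in the question can be sketched generically: while the current video plays, a background thread pulls the next stream's packets into a queue, and playback drains that queue on switchover. Everything here (the fetch function, the packet values) is a stand-in for the real RTMP createStream/play handshake:

```python
import threading
import queue

def prefetch(fetch_next, buffer):
    """Background worker: pull packets for the next video into a queue
    so playback can switch over without a gap."""
    for packet in fetch_next():
        buffer.put(packet)
    buffer.put(None)  # sentinel: next stream fully fetched

def play_two_streams(play_current, fetch_next, handle_packet):
    buffer = queue.Queue()
    t = threading.Thread(target=prefetch, args=(fetch_next, buffer))
    t.start()        # start buffering while the first video plays
    play_current()   # blocks until the first video finishes
    while (pkt := buffer.get()) is not None:
        handle_packet(pkt)   # drain the prebuffered second video
    t.join()

# Demo with dummy stand-ins for the player and the packet source:
out = []
play_two_streams(lambda: out.append("video1"),
                 lambda: iter(["p1", "p2"]),
                 out.append)
print(out)  # → ['video1', 'p1', 'p2']
```

The main caveat with the real protocol is timestamp handling: the buffered packets carry the second stream's timestamps, so the client has to rebase them before handing them to the decoder.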
While the functionality is not built-in, Wowza does have a module called StreamPublisher to allow you to implement a server-side type of playlist. The source code for the module is available on GitHub. A scheduled playlist of multiple VOD files is streamed as a live stream, similar to a TV broadcast.
Is it possible to stream video and audio to an rtmp:// server with GPUImage?
I'm using GPUImageVideoCamera and would love to stream (video + audio) directly to an RTMP server.
I tried VideoCore, which streams perfectly to e.g. YouTube, but whenever I try to overlay the video with different images I run into performance problems.
GPUImage seems to do a really great job there, but I don't know how to stream with it. I found issues on VideoCore discussing feeding VideoCore from GPUImage, but I don't have a starting point for how that's implemented...
I'm using the soundcloud API and so far it was working fine until I hit this track:
https://soundcloud.com/katyperryofficial/roar
I don't know what's wrong with this track, but it really won't play. I can get all of its info, just not the stream part. I checked the Chrome network tab, and it gives me this; the request just cancels without any error:
Name                     Method  Status      Type   Initiator  Size  Time
stream?consumer_key=###  GET     (canceled)  Other             13B   1.02s
Any ideas? Have I missed something?
SoundCloud devs made some changes in their code and, I don't know why, they are switching back to the RTMP protocol.
Even though the response says the track is streamable, it can't be streamed with the regular stream_url.
After some digging in the dev tools, I've noticed that some tracks use the RTMP protocol instead of HTTP/HTTPS.
Anyway, you can find the streams of the track on:
http://api.soundcloud.com/tracks/TrackID/streams?consumer_key=XXX
From here, you're on your own. From my research, only Flash (why?) can play RTMP streams.
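Since only some of the transports in the /streams response are RTMP, a client can at least try to pick an HTTP-playable entry out of the JSON. A sketch, where the key names are examples of what the endpoint has been observed to return, not a guaranteed API contract:

```python
def pick_http_stream(streams):
    """Given the dict returned by /tracks/<id>/streams, return the first
    URL whose transport is HTTP(S) rather than RTMP, or None."""
    for key, url in streams.items():
        if key.startswith(("http_", "https_", "hls_")) and url.startswith("http"):
            return url
    return None

# Example response shape (keys and URLs are illustrative):
sample = {
    "rtmp_mp3_128_url": "rtmp://example.com/mp3:123",
    "http_mp3_128_url": "https://example.com/123/stream.mp3",
}
print(pick_http_stream(sample))
```

For tracks where every entry is rtmp://, there is nothing an HTML5 player can do with the response, which matches the behaviour described above.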
Hi
I am using the NAudio library at http://naudio.codeplex.com/
I have this hardware made by some manufacturer which claims to send audio with the following characteristics:
aLaw 8khz, AUD:11,0,3336,0
Not sure what it all means at this stage.
I receive a bunch of bytes from this device when a user speaks into the equipment, so I am constantly receiving a stream of bytes at particular times.
So far I have been unable to decode the audio so that I can hear what was spoken into the device through my headphones.
I have tried writing the audio to a file with code like
FWaveFileWriter = new WaveFileWriter(@"C:\Test4.wav",
    WaveFormat.CreateALawFormat(8000, 1));
and have been unable to play back the sound using the sample demo apps.
I have tried similar code from
http://naudio.codeplex.com/Thread/View.aspx?ThreadId=231245 and
http://naudio.codeplex.com/Thread/View.aspx?ThreadId=83270
and still have not been able to achieve much.
Any information is appreciated.
Thanks
Allen
If you are definitely receiving raw a-law audio (mono, 8 kHz), then your code to create a WAV file should work correctly and result in a file that plays in Windows Media Player.
I suspect that maybe your incoming byte stream is wrapped in some other kind of protocol. I'm afraid I don't know what "AUD:11,0,3336,0" means, but that might be a place to start investigating. Do you hear anything intelligible at all when you play back the file?
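One way to sanity-check whether the payload really is raw a-law: decode a few bytes to 16-bit PCM yourself and see whether the samples look plausible (speech should hover around small values in silence). A pure-Python G.711 a-law decoder, independent of NAudio, using the standard algorithm:

```python
def alaw_to_pcm16(byte):
    """Decode one G.711 a-law byte to a signed 16-bit PCM sample."""
    byte ^= 0x55                    # a-law bytes are sent with even bits inverted
    sign = byte & 0x80              # sign bit set means positive in a-law
    exponent = (byte & 0x70) >> 4
    sample = (byte & 0x0F) << 4
    if exponent == 0:
        sample += 8
    else:
        sample = (sample + 0x108) << (exponent - 1)
    return sample if sign else -sample

raw = bytes([0xD5, 0x55])           # these bytes encode +8 and -8 (near-silence)
pcm = [alaw_to_pcm16(b) for b in raw]
print(pcm)  # → [8, -8]
```

If decoding the device's bytes this way yields garbage (e.g. full-scale noise), that supports the theory that the stream is wrapped in a framing protocol that needs to be stripped first.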