Publishing a stream using librtmp in C/C++ - rtmp

How do I publish a stream using the librtmp library?
I read the librtmp man page; for publishing, RTMP_Write() is used.
I am doing it like this:
//Code
//Init RTMP code
RTMP *r;
char uri[] = "rtmp://localhost:1935/live/desktop";
r = RTMP_Alloc();
RTMP_Init(r);
RTMP_SetupURL(r, uri);
RTMP_EnableWrite(r);
RTMP_Connect(r, NULL);
RTMP_ConnectStream(r, 0);
Then, to respond to pings and other messages from the server, I am using a thread like the following:
//Thread
RTMPPacket packet = { 0 };
while (ThreadIsRunning && RTMP_IsConnected(r) && RTMP_ReadPacket(r, &packet))
{
    if (RTMPPacket_IsReady(&packet))
    {
        if (!packet.m_nBodySize)
            continue;
        RTMP_ClientPacket(r, &packet); //This takes care of handling ping/other messages
        RTMPPacket_Free(&packet);
    }
}
After this, I am stuck on how to use RTMP_Write() to publish a file to a Wowza media server.

In my own experience, streaming video data to an RTMP server is actually pretty simple on the librtmp side. The tricky part is to correctly packetize video/audio data and read it at the correct rate.
Assuming you are using FLV video files, as long as you can correctly isolate each tag in the file and send each one using one RTMP_Write call, you don't even need to handle incoming packets.
The hard part is understanding how FLV files are structured.
The official specification is available here: http://www.adobe.com/devnet/f4v.html
First, there is a 9-byte header. This header must not be sent to the server; it is only read through in order to make sure the file really is FLV.
Then there is a stream of tags. Each tag has an 11-byte header that contains the tag type (video/audio/metadata), the body length, and the tag's timestamp, among other things.
The tag header can be described using this structure:
typedef struct __flv_tag {
uint8 type;
uint24_be body_length; /* in bytes, total tag size minus 11 */
uint24_be timestamp; /* milli-seconds */
uint8 timestamp_extended; /* timestamp extension */
uint24_be stream_id; /* reserved, must be "\0\0\0" */
/* body comes next */
} flv_tag;
The body length and timestamp are presented as 24-bit big-endian integers, with a supplementary byte to extend the timestamp to 32 bits if necessary (the 24-bit field overflows after 2^24 milliseconds, roughly 4 hours 40 minutes).
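For instance, the two helpers below sketch how those fields can be read from a raw 11-byte tag header in plain C (the helper names are mine, not from librtmp or flvmeta):
#include <stdint.h>

/* Read a 24-bit big-endian integer from p. */
static uint32_t read_uint24_be(const uint8_t *p) {
    return ((uint32_t)p[0] << 16) | ((uint32_t)p[1] << 8) | (uint32_t)p[2];
}

/* Reassemble the full 32-bit timestamp from a raw tag header:
   bytes 4-6 hold the lower 24 bits, byte 7 holds the upper 8 bits. */
static uint32_t flv_tag_timestamp(const uint8_t *h) {
    return ((uint32_t)h[7] << 24) | read_uint24_be(h + 4);
}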
Once you have read the tag header, you can read the body itself as you now know its length (body_length).
After the body there is a 32-bit big-endian integer value that contains the complete length of the tag (11 bytes + body_length).
You must write the tag header + body + previous tag size in one RTMP_Write call (otherwise it won't play).
Also, be careful to send packets at the nominal frame rate of the video, or playback will suffer greatly.
I have written a complete FLV file demuxer as part of my GPL project FLVmeta that you can use as reference.
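For a rough idea of the overall loop, here is a minimal sketch (my own, not FLVmeta code) that assumes an already-connected, write-enabled RTMP handle like the one set up in the question, and reuses read_uint24_be() from above; error handling and timestamp pacing are mostly omitted:
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <librtmp/rtmp.h>

/* Send every FLV tag of a file in one RTMP_Write() call each.
   A real implementation must also delay between tags according
   to their timestamps (see flv_tag_timestamp() above). */
int publish_flv(RTMP *r, const char *path) {
    FILE *f = fopen(path, "rb");
    uint8_t header[9];

    if (!f || fread(header, 1, 9, f) != 9 || memcmp(header, "FLV", 3) != 0)
        return -1;                    /* not an FLV file */
    fseek(f, 4, SEEK_CUR);            /* skip PreviousTagSize0 (always 0) */

    for (;;) {
        uint8_t head[11];
        if (fread(head, 1, 11, f) != 11)
            break;                    /* end of file */

        uint32_t body_length = read_uint24_be(head + 1);
        uint8_t *tag = malloc(11 + body_length + 4);
        if (!tag)
            break;

        memcpy(tag, head, 11);
        /* body plus the trailing PreviousTagSize, in one read */
        if (fread(tag + 11, 1, body_length + 4, f) != body_length + 4) {
            free(tag);
            break;
        }
        /* tag header + body + previous tag size in a single write */
        if (RTMP_Write(r, (const char *)tag, (int)(11 + body_length + 4)) <= 0) {
            free(tag);
            fclose(f);
            return -1;
        }
        free(tag);
    }
    fclose(f);
    return 0;
}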

In fact, RTMP_Write() seems to require that you already have the RTMP packet formed in buf.
RTMPPacket *pkt = &r->m_write;
...
pkt->m_packetType = *buf++;
So you cannot just push the FLV data there; you need to split it into packets first.
There is a nice function, RTMP_ReadPacket(), but it reads from the network socket.
I have the same problem as you and hope to have a solution soon.
Edit:
There are certain bugs in RTMP_Write(). I've made a patch and now it works. I'm going to publish that.

Related

Audio through CAN FD into headphones

I am trying to record audio using a 12-bit resolution ADC, take the sample buffer, and send it over CAN FD to another device, which takes these audio samples, creates a .wav, and plays it. I can see the microphone data being sent over CAN FD to the other device, but I am not able to turn this data into a proper .wav file and hear what I say into the microphone. I only hear beeps.
I'm creating a new .wav every 4 CAN FD messages in order to get some kind of real-time communication and reduce the delay, but I don't know whether this is possible or whether I'm thinking about it the right way.
In this thread I take the message sent over CAN FD and concatenate it into a buffer in order to put it into a .wav file. I have tried bigger buffers, but it doesn't change the outcome.
How can I take the data from the CAN FD and hear it?
Clarification: I know using CAN FD to transmit audio isn't the proper way, but it is for a master's project.
struct canfd_frame frame;
CAN_MSG msg;
int trama_can[72];
int nbytes;
while (status_libreria == 0)
    ;
unsigned char buffer[256];
// FILE * fPtr;
int i = 0, x = 0;
//fPtr = fopen("Test.txt", "w");
while (1) {
    do {
        nbytes = read(s, &frame, sizeof(struct canfd_frame));
    } while (nbytes == 0);
    msg.id.ext = frame.can_id;
    msg.dlc = frame.len;
    if (msg.dlc > 8)
        msg.dlc = 8; // Protection until AC3LIB is adapted to CAN FD
    Numas_memcpy(&(msg.data.bdata), &(frame.data), msg.dlc);
    can_frame_2_ac3lib(&msg, BUS_VERTICAL);
    for (x = 0; x < 64; x++)
        buffer[i * 64 + x] = frame.data[x];
    printf("%d \r\n", frame.data[x]);
    printf("i:%d \r\n", i);
    // Copy the data to the .wav file and play it at the same time
    if (i == 3) {
        printf("Datos IN\r\n");
        write_wav("prueba.wav", 256, (short int *)buffer, 16000);
        //fwrite(buffer,1,sizeof(buffer),fPtr);
        //fclose(fPtr);
        system("aplay prueba.wav -f cd");
        i = 0;
        system("rm prueba.wav");
    }
    i++;
}
(Image: the first 32 bytes of the audio file being recorded.)
As you can see in the picture, the data is being recorded. Moreover, this data is the same data as in the ADC, but when I play it, I only hear noise.
Simplify the problem first. Make sure you can transmit known data from one end to the other at low rates. I'm sure the suggestion below will sound far too trivial, but until you are absolutely confident you understand it all, I predict you will have many struggles.
Start slowly: one frame per second, or even slower.
Learn to send one 0x55 byte from one end to the other and verify it at the receiver.
Learn to send a few 0x55 bytes in one frame and verify them.
Learn to send 0x12345678 and verify that it ends up with the bytes in the right order at the other end.
Learn to send a counter. Check it at the receiver; make sure you do not drop any data.
Now do it all again, but 10x faster.
Continue until you can send a counter at 10x the rate you need for the audio without dropping any frames at all, for minutes and then hours.
Stress the rest of the system to make sure it still works under load.
Only now can you start to learn about sending audio.
Trust me, you will learn a lot!
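As a concrete starting point for the counter test, here is a sketch of the sending side over Linux SocketCAN; the interface name "can0" and the test ID 0x123 are assumptions to adjust for your setup:
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
    int enable = 1;
    /* allow CAN FD frames on a raw CAN socket */
    setsockopt(s, SOL_CAN_RAW, CAN_RAW_FD_FRAMES, &enable, sizeof(enable));

    struct ifreq ifr;
    strcpy(ifr.ifr_name, "can0");               /* assumption: adjust */
    ioctl(s, SIOCGIFINDEX, &ifr);

    struct sockaddr_can addr = { 0 };
    addr.can_family = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    struct canfd_frame frame = { 0 };
    frame.can_id = 0x123;                       /* arbitrary test ID */
    frame.len = 4;

    for (uint32_t counter = 0;; counter++) {
        memcpy(frame.data, &counter, sizeof(counter));
        write(s, &frame, sizeof(frame));        /* full canfd_frame size */
        sleep(1);                               /* start slow, then speed up */
    }
}
On the receiving side, read one struct canfd_frame per read() as in your code, reconstruct the counter with memcpy, and count any gaps before increasing the rate.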

How to extract frames from video using webcodecs from chrome 86

WebCodecs was released in Chrome 86, but there's no real code example of how to use it yet. Given a video URL, how can I extract video frames as ImageData using WebCodecs?
What you describe is the entire complex process of acquiring raw bitmap-like data (e.g. something you can dump onto a canvas) from a formatted file or a stream of data chunks.
In the case of files (including the case where your URL points to a complete file, such as an .mp4 file), this is generally made of 2 steps:
Parsing the container file into individual chunks of encoded video and/or audio
Decoding these chunks of encoded video/audio
WebCodecs only facilitates step 2 of this process, i.e. what is called decoding. The reasoning behind this decision was that parsing the container is computationally trivial, so you can efficiently do this with the File APIs already, but you still need to implement parsing/processing the container yourself.
Luckily, plenty of libraries exist already, many of which ironically existed long before the emergence of the WebCodecs API.
MP4Box is one example, helping you acquire encoded video and audio chunks, which you can then feed into a VideoDecoder or AudioDecoder.
With MP4Box, the key piece of your code will be centered around the onSamples callback you provide, and it'll look something like this:
mp4BoxFile.onSamples = (trackId, user, chunks) =>
{
    for (let i = 0; i < chunks.length; i++)
    {
        let chunk = chunks[i];
        let encodedChunk = new EncodedVideoChunk({
            // you'll need to deep-inspect chunk to figure these out
            type: "key", // or "delta"
            timestamp: ...,
            duration: ...,
            data: chunk.data
        });
        // pass encodedChunk to a VideoDecoder instance's decode method
    }
};
This is just a rough sketch of how your code will probably look; it probably won't work without more inspection, and it'll take a lot of trial and error, because this is very low-level stuff.
WebCodecs is not the silver bullet you probably expected, but it can help you build one.

How to read variable length data from an asynchronous tcp socket?

I'm using CocoaAsyncSocket for an iOS project. I'm trying to read VarInts through an asynchronous interface. The problem is that, unlike something like a String, where I can prefix a length, I don't know the length of a varint beforehand. It needs to be processed one byte at a time, but since each read operation is asynchronous, other read calls may have been queued in between.
I considered reading into a buffer and then processing it, say reading 5 bytes (the maximum length of a varint-32) and pushing the extra bytes back, but that may hang unnecessarily if the varint is only 4 bytes long and I'm waiting for a 5th byte to become available.
How can I do this? Also, I cannot change the protocol on the other end to use fixed-size ints.
Here's a snippet of code, as Josh requested:
- (void)readByte:(void (^)(int8_t))onComplete {
    NSUInteger size = 1;
    int32_t tag = OSAtomicAdd32(1, &_nextTag);
    dispatch_async(self.dispatchQueue, ^{
        [self.onCompleteHandlers setObject:(^void (NSData *data) {
            int8_t x = 0;
            [data getBytes:&x length:size];
            onComplete(x);
        }) forKey:[NSNumber numberWithInteger:((NSInteger)tag)]];
        [self.socket readDataToLength:size withTimeout:-1 tag:tag];
    });
}
A callback is saved in a dictionary, which is used in the delegate method socket:didReadData:withTag:.
Suppose I'm reading a VarInt byte by byte:
execute read first byte for varint
don't know if we need to read another byte for a varint or not; that depends on the result of the first read
(possible) read another byte for something else
read second byte for varint, but now it's actually the 3rd byte being read
I can imagine using a flag to indicate whether or not I'm in a multipart read, plus a queue to hold reads that should be executed after the multipart read; I've started writing it, but it's quite messy. I'm just wondering if there is a standard/recommended/better way to approach this problem.
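For what it's worth, the state for such a multipart read can be tiny. Here is a minimal sketch of an incremental base-128 varint-32 decoder in plain C (hypothetical helper names), fed one byte per completed read:
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t value;   /* bits accumulated so far */
    int      shift;   /* bit position of the next 7-bit group */
} varint_state;

/* Feed one byte; returns true when the varint is complete.
   A robust version should also reject more than 5 bytes of input. */
static bool varint_feed(varint_state *st, uint8_t byte, uint32_t *out) {
    st->value |= (uint32_t)(byte & 0x7F) << st->shift;
    if (byte & 0x80) {        /* continuation bit set: more bytes follow */
        st->shift += 7;
        return false;         /* queue another 1-byte read */
    }
    *out = st->value;
    st->value = 0;            /* reset for the next varint */
    st->shift = 0;
    return true;
}
While varint_feed returns false, the completion handler queues the next 1-byte read immediately, before any unrelated read, so no bytes get stolen mid-varint.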
In short, there are 4 ways to know how much to read from a socket:
Read some format that you can infer the length from, like the Content-Length header... this only works if the whole request can be put together before the body is sent.
Read until some pattern: like \r\n\r\n at the end of the headers.
Read until some timeout... after you get no bytes for n seconds, you flush the buffers and close the connection.
Read until the server closes the connection... this actually used to be pretty common.
These each have problems, and in your case I would probably lean toward using some existing protocol.
Of course there is overhead to doing it that way, and you may find that you don't want to use any of that application-level stuff; your requests may be like:
client>"doMath(2+5)\0"
server>"(7)\0"
But it is hard to answer your general question specifically.
Edit:
So I looked into the varint base-128 issue a little more, and I think only a timeout or the server closing the connection will really work if you are writing these right at the TCP level, which is horrible...

WCF Stream/Message size

I have a streamed WCF service. In one operation, I receive a file for upload purposes.
If I try to do something like this:
request.FileContent.Length
Then I receive an OperationNotSupported exception. That's OK.
But how could I get the file size without actually transferring it entirely?
I know I could send this information along with the call, as a header, but I don't want to go that way.
If WCF is able to limit the request size through maxReceiveMessageSize, how can I use the same information to check the message/stream size?
In general, you can't know the size of a byte stream without reading it all and counting the bytes, unless there is data at the start of the stream which tells you how many bytes there are in the entire stream, or some other out-of-band way to communicate the length of the stream, such as in the WCF message headers. You will have to go with the Header approach if you want to know the size without reading the stream.
The WCF maxReceiveMessageSize works by counting the bytes as they are received and throwing an exception if the limit is exceeded... it doesn't know the stream length either, and can't pre-emptively prevent the message being received without first reading the maximum allowed number of bytes.
But how could I get the file size without actually transfering it entirely? I know I could send this information along with the call, as a Header, but I don't want to go this way.
You're going to have to send the size of the byte stream down the pipe first; there is no other way. (If there were some inbuilt way, that's all it would be doing anyway.)
It doesn't add much complexity to prepend it to the stream:
var bytes = File.ReadAllBytes("somefile.txt");
stream.Write(BitConverter.GetBytes((Int32)bytes.Length), 0, 4);
stream.Write(bytes, 0, bytes.Length);
and then on the other side, when reading the stream:
byte[] fileLengthBytes = new byte[4];
int offset = 0;
while (offset < 4) // Stream.Read may return fewer bytes than requested
    offset += stream.Read(fileLengthBytes, offset, 4 - offset);
int length = BitConverter.ToInt32(fileLengthBytes, 0);
//you know the size of the file now, log it or show the user
var fileBytes = new byte[length];
offset = 0;
while (offset < length)
    offset += stream.Read(fileBytes, offset, length - offset);
This is only an example; you may not want to create a single byte[] buffer if your stream is large.

AsyncSocket: getting merged two packets instead of separate two packets

I'm executing 4 startup commands and expecting to receive 4 responses. The server is already implemented, and another dev, who is developing for Android, is able to receive those 4 separate responses. However, I'm getting 2 good (separate) responses, and then the 3rd and 4th responses arrive as one. I've placed an NSLog of the NSData result in completeCurrentRead, and it outputs the merged packet "0106000000000b0600000000" instead of the separate packets "010600000000" and "0b0600000000". I've also tested the 3rd and 4th commands separately (only one at a time), and everything is OK with the server; it sends them separately. However, the merge (of the 3rd and 4th) occurs if all four commands are executed in a row. Any ideas?
UPDATE: I think I've traced the problem to its roots. There's a call that reads packet data from a stream in the doBytesAvailable method:
CFIndex result = [self readIntoBuffer:subBuffer maxLength:bytesToRead];
And in readIntoBuffer:maxLength:, there's a call (length == 256):
return CFReadStreamRead(theReadStream, (UInt8 *)buffer, length);
So CFReadStreamRead returns an incorrect packet length: it returns a length of 12 (instead of 6) and also grabs the merged data. Hmm, what might be causing CFReadStreamRead to read two packets into one instead of reading them separately?
UPDATE 2: I'm using the onSocket:didReadData:withTag: delegate method and expecting to receive response data with the tag of the request I performed. I have realized recently that streams are streams, not packets, but how can I solve this? The server's responses do not have terminating chars at the start and end, just a response size, which comes as 2 - 5 bytes. I can cut the first part of the response (the first packet) and ignore the second part, but then how will AsyncSocket make another callback with the second part of the response (the second packet)? If I cut only the first parts and ignore the rest, then IMHO the second "packet" will be lost.
How can I cut the first part of the response and tell AsyncSocket to make another callback, with a tag and the second part of the response, as a separate callback?
UPDATE 3: In onSocket:didReadData:withTag:, I manually cut the merged response, handle the first part (the first packet), and then at the end make another call to onSocket:didReadData:withTag::
if (isMergedPacket) {
    ...
    [self onSocket:sock didReadData:restPartOfTheResponse withTag:myCommandTag];
}
However, it looks like AsyncSocket itself pairs every request packet with its response packet (via the AsyncReadPacket class) using tags. So my manual cutting works, but AsyncSocket does not know that I already handled both packets, and it still tries to read the second one. As a result, I'm getting the sock:shouldTimeoutReadWithTag:... callback, which is called when a read operation has reached its timeout without completing.
Found the solution. It's not necessary to change or dig into AsyncSocket. You just need to define the length of each response: how many bytes you are interested in reading before getting your callback. You can find more info in another post here.
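For clarity, here is the same framing idea sketched in plain C with a blocking socket, assuming a hypothetical 2-byte big-endian size prefix (your 2 - 5 byte prefix would be parsed analogously): read exactly the size field first, then exactly that many body bytes, so two responses can never blur together.
#include <stddef.h>
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Read exactly n bytes; returns 0 on success, -1 on error/close. */
static int read_exact(int fd, uint8_t *buf, size_t n) {
    while (n > 0) {
        ssize_t got = recv(fd, buf, n, 0);
        if (got <= 0)
            return -1;
        buf += got;
        n -= (size_t)got;
    }
    return 0;
}

/* One frame: 2-byte big-endian length prefix, then the body. */
static int read_frame(int fd, uint8_t *body, size_t max) {
    uint8_t prefix[2];
    if (read_exact(fd, prefix, 2) != 0)
        return -1;
    size_t len = ((size_t)prefix[0] << 8) | prefix[1];
    if (len > max)
        return -1;
    return read_exact(fd, body, len) == 0 ? (int)len : -1;
}
With CocoaAsyncSocket, the equivalent is two chained readDataToLength: calls: one for the size prefix, then one for the body.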