GStreamer UDPSink blocksize property not working?

I'm using GStreamer and sending audio using this pipeline:
gst-launch-1.0 -v filesrc location=soundfile.mp3 ! mad ! audioconvert ! audio/x-raw, layout=interleaved, format=F32LE, channels=2 ! udpsink blocksize=512 port=5005 host=127.0.0.1
However, blocksize doesn't appear to be working at all. This is the documentation for udpsink, as printed by gst-inspect udpsink:
Element Properties:
name : The name of the object
flags: readable, writable
String. Default: "udpsink0"
preroll-queue-len : Number of buffers to queue during preroll
flags: readable, writable
Unsigned Integer. Range: 0 - 4294967295 Default: 0
sync : Sync on the clock
flags: readable, writable
Boolean. Default: true
max-lateness : Maximum number of nanoseconds that a buffer can be late before it is dropped (-1 unlimited)
flags: readable, writable
Integer64. Range: -1 - 9223372036854775807 Default: -1
qos : Generate Quality-of-Service events upstream
flags: readable, writable
Boolean. Default: false
async : Go asynchronously to PAUSED
flags: readable, writable
Boolean. Default: true
ts-offset : Timestamp offset in nanoseconds
flags: readable, writable
Integer64. Range: -9223372036854775808 - 9223372036854775807 Default: 0
enable-last-buffer : Enable the last-buffer property
flags: readable, writable
Boolean. Default: true
last-buffer : The last buffer received in the sink
flags: readable
MiniObject of type "GstBuffer"
blocksize : Size in bytes to pull per buffer (0 = default)
flags: readable, writable
Unsigned Integer. Range: 0 - 4294967295 Default: 4096
render-delay : Additional render delay of the sink in nanoseconds
flags: readable, writable
Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0
throttle-time : The time to keep between rendered buffers (unused)
flags: readable, writable
Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0
bytes-to-serve : Number of bytes received to serve to clients
flags: readable
Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0
bytes-served : Total number of bytes sent to all clients
flags: readable
Unsigned Integer64. Range: 0 - 18446744073709551615 Default: 0
sockfd : Socket to use for UDP sending. (-1 == allocate)
flags: readable, writable
Integer. Range: -1 - 2147483647 Default: -1
closefd : Close sockfd if passed as property on state change
flags: readable, writable
Boolean. Default: true
sock : Socket currently in use for UDP sending. (-1 == no socket)
flags: readable
Integer. Range: -1 - 2147483647 Default: -1
clients : A comma separated list of host:port pairs with destinations
flags: readable, writable
String. Default: "localhost:4951"
auto-multicast : Automatically join/leave the multicast groups, FALSE means user has to do it himself
flags: readable, writable
Boolean. Default: true
ttl : Used for setting the unicast TTL parameter
flags: readable, writable
Integer. Range: 0 - 255 Default: 64
ttl-mc : Used for setting the multicast TTL parameter
flags: readable, writable
Integer. Range: 0 - 255 Default: 1
loop : Used for setting the multicast loop parameter. TRUE = enable, FALSE = disable
flags: readable, writable
Boolean. Default: true
qos-dscp : Quality of Service, differentiated services code point (-1 default)
flags: readable, writable
Integer. Range: -1 - 63 Default: -1
send-duplicates : When a destination/port pair is added multiple times, send packets multiple times as well
flags: readable, writable
Boolean. Default: true
buffer-size : Size of the kernel send buffer in bytes, 0=default
flags: readable, writable
Integer. Range: 0 - 2147483647 Default: 0
host : The host/IP/Multicast group to send the packets to
flags: readable, writable
String. Default: "localhost"
port : The port to send the packets to
flags: readable, writable
Integer. Range: 0 - 65535 Default: 4951
This is already somewhat confusing, as the default value for blocksize is listed both as 0 (in the description) and as 4096. It seems to be 4096 in practice, as that is my UDP packet size no matter what value I use for blocksize. What's more confusing is that I can scarcely find any mention of the blocksize property anywhere online, even in GStreamer's own documentation: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-good-plugins/html/gst-plugins-good-plugins-udpsink.html
The only properties mentioned are host and port. Has blocksize been deprecated or something? And if so, is there any way to control the amount of data sent in each UDP packet? I've tried using the mtu property in RTP with no luck (see here: gstreamer RTP packet size) and am kind of at my wits' end with this.

When you run gst-inspect on an element, it shows all of the element's properties, not only the ones it defines itself but also those inherited from its parents. This is expected behaviour.
udpsink itself indeed has only the host and port properties; its parent, GstMultiUDPSink, has more.
You can find the inheritance chain in the GStreamer documentation in the form of a tree (the Object Hierarchy section). Unfortunately, not all of the links there work correctly, so it is often easier to search for the parent element's documentation directly.
OK, back to the question: the blocksize you are mentioning is a property of GstBaseSink and is described as:
The amount of bytes to pull when operating in pull mode.
Flags: Read / Write
Default value: 4096
So it defines how much data the sink pulls from the upstream element when operating in pull mode; it does not affect UDP packets. What you are probably looking for is the buffer-size property of GstMultiUDPSink:
“buffer-size” gint
Size of the kernel send buffer in bytes, 0=default.
Flags: Read / Write
Allowed values: >= 0
Default value: 0
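If you want to set it from code rather than on the gst-launch command line, here is a minimal sketch; the element and property names come from the gst-inspect output above, while the numeric value is purely illustrative (0 keeps the OS default):
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* udpsink inherits buffer-size from GstMultiUDPSink; it sets the
       size of the kernel send buffer, not the payload of each packet. */
    GstElement *sink = gst_element_factory_make("udpsink", "sink");
    g_object_set(G_OBJECT(sink),
                 "host", "127.0.0.1",
                 "port", 5005,
                 "buffer-size", 212992, /* illustrative value, in bytes */
                 NULL);

    /* ... add to a pipeline and link as usual ... */
    gst_object_unref(sink);
    return 0;
}
The equivalent on the command line is simply udpsink buffer-size=212992 port=5005 host=127.0.0.1.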

Related

NFS V4 READ of file returns 0 bytes

I'm in the process of writing an NFS V4 client and am debugging the results with Wireshark. I'm unable to read a file.
Through OPEN followed by GETATTR, I've opened the file and confirmed it's the desired file by the matching length (1001 bytes).
I then try to READ a single byte of the file with offset 0 and length 1. The resulting reply returns 0 bytes of data, even though the EOF is false. Repeated calls to the read command yield the same result.
Are there parameters in OPEN or READ that I'm missing in order to actually read the file?
Wireshark Output
Open Call
Operations (count: 5): SEQUENCE, PUTROOTFH, OPEN, GETFH, GETATTR
Opcode: SEQUENCE (53)
Opcode: PUTROOTFH (24)
Opcode: OPEN (18)
seqid: 0x00000000
share_access: OPEN4_SHARE_ACCESS_READ (1)
share_deny: OPEN4_SHARE_DENY_NONE (0)
clientid: 0x13f5c375a578cd7c
owner: <DATA>
Open Type: OPEN4_NOCREATE (0)
Claim Type: CLAIM_NULL (0)
Opcode: GETFH (10)
Opcode: GETATTR (9)
Open Reply
Operations (count: 5)
Opcode: SEQUENCE (53)
Opcode: PUTROOTFH (24)
Opcode: OPEN (18)
Status: NFS4_OK (0)
StateID
[StateID Hash: 0x91a9]
StateID seqid: 1
StateID Other: 13f5c375a578cd7c00000000
[StateID Other hash: 0x5b73]
change_info
Atomic: Yes
changeid (before): 110
changeid (after): 110
result flags: 0x00000004, locktype posix
.... .... .... .... .... .... .... ..0. = confirm: False
.... .... .... .... .... .... .... .1.. = locktype posix: True
.... .... .... .... .... .... .... 0... = preserve unlinked: False
.... .... .... .... .... .... ..0. .... = may notify lock: False
Delegation Type: OPEN_DELEGATE_NONE (0)
Opcode: GETFH (10)
Status: NFS4_OK (0)
Filehandle
length: 128
[hash (CRC-32): 0xc5dcd623]
FileHandle: 2b3e5cee938ee2b6afff448601a384b8ffc30bab28b5e2a4...
Opcode: GETATTR (9)
Status: NFS4_OK (0)
Attr mask: 0x00000010 (Size)
reqd_attr: Size (4)
size: 1001
Read Call
Operations (count: 3): SEQUENCE, PUTFH, READ
Opcode: SEQUENCE (53)
Opcode: PUTFH (22)
FileHandle
length: 128
[hash (CRC-32): 0xc5dcd623]
FileHandle: 2b3e5cee938ee2b6afff448601a384b8ffc30bab28b5e2a4...
Opcode: READ (25)
StateID
[StateID Hash: 0x91a9]
StateID seqid: 1
StateID Other: 13f5c375a578cd7c00000000
[StateID Other hash: 0x5b73]
offset: 0
count: 1
Read Reply
Operations (count: 3)
Opcode: SEQUENCE (53)
Opcode: PUTFH (22)
Status: NFS4_OK (0)
Opcode: READ (25)
Status: NFS4_OK (0)
eof: 0
Read length: 0
Data: <EMPTY>
For anybody who runs into a similar situation, I was able to fix the problem by changing the cachethis flag in the SEQUENCE operation of the READ call from true to false.
struct SEQUENCE4args {
    sessionid4  sa_sessionid;
    sequenceid4 sa_sequenceid;
    slotid4     sa_slotid;
    slotid4     sa_highest_slotid;
    bool        sa_cachethis;   // ensure this is false
};
Some of the behavior of the cachethis flag is described in RFC 5661, but nothing there explains why this behavior occurred.
One possibility is that Amazon's implementation of the NFS spec has a bug in its handling of sa_cachethis.
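For illustration only, the fix amounts to something like the following when populating the SEQUENCE arguments for the compound READ request; the variable names here are placeholders, not from the original client:
/* Hypothetical sketch: fill in SEQUENCE4args with caching disabled.
   session_id, next_seqid and slot come from the client's session
   bookkeeping and are placeholders here. */
struct SEQUENCE4args seq;
memcpy(seq.sa_sessionid, session_id, sizeof(seq.sa_sessionid));
seq.sa_sequenceid     = next_seqid;   /* per-slot sequence number */
seq.sa_slotid         = slot;
seq.sa_highest_slotid = slot;
seq.sa_cachethis      = false;        /* the fix described above */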

Why does perror() change the orientation of a stream when it is redirected?

The standard says that:
The perror() function shall not change the orientation of the standard error stream.
This is the implementation of perror() in GNU libc.
Following are the tests when stderr is wide-oriented, multibyte-oriented and not oriented, prior to calling perror().
Tests 1) and 2) are OK. The issue is in test 3).
1) stderr is wide-oriented:
#include <stdio.h>
#include <wchar.h>
#include <errno.h>

int main(void)
{
    fwide(stderr, 1);
    errno = EINVAL;
    perror("");

    int x = fwide(stderr, 0);
    printf("fwide: %d\n", x);
    return 0;
}
$ ./a.out
Invalid argument
fwide: 1
$ ./a.out 2>/dev/null
fwide: 1
2) stderr is multibyte-oriented:
#include <stdio.h>
#include <wchar.h>
#include <errno.h>

int main(void)
{
    fwide(stderr, -1);
    errno = EINVAL;
    perror("");

    int x = fwide(stderr, 0);
    printf("fwide: %d\n", x);
    return 0;
}
$ ./a.out
Invalid argument
fwide: -1
$ ./a.out 2>/dev/null
fwide: -1
3) stderr is not oriented:
#include <stdio.h>
#include <wchar.h>
#include <errno.h>

int main(void)
{
    printf("initial fwide: %d\n", fwide(stderr, 0));
    errno = EINVAL;
    perror("");

    int x = fwide(stderr, 0);
    printf("fwide: %d\n", x);
    return 0;
}
$ ./a.out
initial fwide: 0
Invalid argument
fwide: 0
$ ./a.out 2>/dev/null
initial fwide: 0
fwide: -1
Why does perror() change the orientation of the stream when it is redirected? Is this proper behavior?
How does this code work? What is this __dup trick all about?
TL;DR: Yes, it's a bug in glibc. If you care about it, you should report it.
The quoted requirement that perror not change the stream orientation is in Posix, but does not seem to be required by the C standard itself. However, Posix seems quite insistent that the orientation of stderr not be changed by perror, even if stderr is not yet oriented. XSH 2.5 Standard I/O Streams:
The perror(), psiginfo(), and psignal() functions shall behave as described above for the byte output functions if the stream is already byte-oriented, and shall behave as described above for the wide-character output functions if the stream is already wide-oriented. If the stream has no orientation, they shall behave as described for the byte output functions except that they shall not change the orientation of the stream.
And glibc attempts to implement Posix semantics. Unfortunately, it doesn't quite get it right.
Of course, it is impossible to write to a stream without setting its orientation. So in an attempt to comply with this curious requirement, glibc attempts to make a new stream based on the same fd as stderr, using the code pointed to at the end of the OP:
if (__builtin_expect (_IO_fwide (stderr, 0) != 0, 1)
    || (fd = __fileno (stderr)) == -1
    || (fd = __dup (fd)) == -1
    || (fp = fdopen (fd, "w+")) == NULL)
  { ...
which, stripping out the internal symbols, is essentially equivalent to:
if (fwide (stderr, 0) != 0
    || (fd = fileno (stderr)) == -1
    || (fd = dup (fd)) == -1
    || (fp = fdopen (fd, "w+")) == NULL)
{
    /* Either stderr has an orientation or the duplication failed,
     * so just write to stderr
     */
    if (fd != -1) close(fd);
    perror_internal(stderr, s, errnum);
}
else
{
    /* Write the message to fp instead of stderr */
    perror_internal(fp, s, errnum);
    fclose(fp);
}
fileno extracts the fd from a standard C library stream. dup takes an fd, duplicates it, and returns the number of the copy. And fdopen creates a standard C library stream from an fd. In short, that doesn't reopen stderr; rather, it creates (or attempts to create) a copy of stderr which can be written to without affecting the orientation of stderr.
Unfortunately, it doesn't work reliably because of the mode:
fp = fdopen(fd, "w+");
That attempts to open a stream which allows both reading and writing. And it will work with the original stderr, which is just a copy of the console fd, originally opened for both reading and writing. But when you bind stderr to some other device with a redirect:
$ ./a.out 2>/dev/null
you are passing the executable an fd opened only for output. And fdopen won't let you get away with that:
The application shall ensure that the mode of the stream as expressed by the mode argument is allowed by the file access mode of the open file description to which fildes refers.
The glibc implementation of fdopen actually checks, and returns NULL with errno set to EINVAL if you specify a mode which requires access rights not available to the fd.
So you could get your test to pass if you redirect stderr for both reading and writing:
$ ./a.out 2<>/dev/null
But what you probably wanted in the first place was to redirect stderr in append mode:
$ ./a.out 2>>/dev/null
and as far as I know, bash does not provide a way to open a redirection for reading and appending.
I don't know why the glibc code uses "w+" as a mode argument, since it has no intention of reading from stderr. "w" should work fine, although it probably won't preserve append mode, which might have unfortunate consequences.
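You can see the fdopen mode check in isolation with a small test. This is a hedged sketch, not glibc's code; /dev/null opened write-only stands in for the redirect target:
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Simulate "2>/dev/null": an fd opened write-only. */
    int fd = open("/dev/null", O_WRONLY);

    /* fdopen with "w+" asks for read+write access, which the
       write-only fd cannot satisfy, so glibc fails with EINVAL. */
    FILE *fp = fdopen(fd, "w+");
    if (fp == NULL)
        printf("fdopen(fd, \"w+\"): %s\n", strerror(errno));

    /* "w" only asks for write access and succeeds. */
    fp = fdopen(fd, "w");
    printf("fdopen(fd, \"w\"): %s\n", fp ? "OK" : strerror(errno));
    return 0;
}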
I'm not sure if there's a good answer to "why" without asking the glibc developers - it may just be a bug - but the POSIX requirement seems to conflict with ISO C, which reads in 7.21.2, ¶4:
Each stream has an orientation. After a stream is associated with an external file, but before any operations are performed on it, the stream is without orientation. Once a wide character input/output function has been applied to a stream without orientation, the stream becomes a wide-oriented stream. Similarly, once a byte input/output function has been applied to a stream without orientation, the stream becomes a byte-oriented stream. Only a call to the freopen function or the fwide function can otherwise alter the orientation of a stream. (A successful call to freopen removes any orientation.)
Further, perror seems to qualify as a "byte I/O function" since it takes a char * and, per 7.21.10.4 ¶2, "writes a sequence of characters".
Since POSIX defers to ISO C in the event of a conflict, there is an argument to be made that the POSIX requirement here is void.
As for the actual examples in the question:
1) Undefined behavior. A byte I/O function is called on a wide-oriented stream.
2) Nothing at all controversial. The orientation was correct for calling perror and did not change as a result of the call.
3) Calling perror oriented the stream to byte orientation. This seems to be required by ISO C but disallowed by POSIX.

Checksum calculation issue for ICMPv6 using Asio Boost

I have used the ICMP example provided in the Asio documentation to create a simple ping utility. However, the example covers IPv4 only, and I am having a hard time making it work with IPv6.
Upgrading the ICMP header class to support IPv6 requires only a minor change: the sole difference between the ICMP and ICMPv6 headers is the enumeration of ICMP types. However, I have a problem computing the checksum that needs to be incorporated into the ICMPv6 header.
For IPv4 the checksum is based on the ICMP header and payload. For IPv6, however, the checksum must also cover the IPv6 pseudo-header, which precedes the ICMPv6 header and payload. The ICMPv6 checksum function therefore needs to know the source and destination addresses that will appear in the IPv6 header, yet we have no control over what goes into the IPv6 header. How can this be done in Boost.Asio?
For reference please find below the function for IPv4 checksum calculation.
template <typename Iterator>
void compute_checksum(icmp_header& header,
                      Iterator body_begin, Iterator body_end)
{
  unsigned int sum = (header.type() << 8) + header.code()
    + header.identifier() + header.sequence_number();

  Iterator body_iter = body_begin;
  while (body_iter != body_end)
  {
    sum += (static_cast<unsigned char>(*body_iter++) << 8);
    if (body_iter != body_end)
      sum += static_cast<unsigned char>(*body_iter++);
  }

  sum = (sum >> 16) + (sum & 0xFFFF);
  sum += (sum >> 16);
  header.checksum(static_cast<unsigned short>(~sum));
}
[EDIT]
What are the consequences if the checksum is not calculated correctly? Will the target host send echo reply if the echo request has invalid checksum?
If the checksum is incorrect, a typical IPv6 implementation will drop the packet. So, it is a serious issue.
If you insist on crafting the packet yourself, you'll have to do it completely. This includes finding the source IP address, to put it in the pseudo-header before computing the checksum. Here is how I do it in C, by calling connect() on my intended destination address (even when I use UDP, so it should work for ICMP):
/* Get the source IP address chosen by the system (for verbose display,
 * and for checksumming) */
if (connect(sd, destination->ai_addr, destination->ai_addrlen) < 0) {
    fprintf(stderr, "Cannot connect the socket: %s\n", strerror(errno));
    abort();
}
source = malloc(sizeof(struct addrinfo));
source->ai_addr = malloc(sizeof(struct sockaddr_storage));
source_len = sizeof(struct sockaddr_storage);
if (getsockname(sd, source->ai_addr, &source_len) < 0) {
    fprintf(stderr, "Cannot getsockname: %s\n", strerror(errno));
    abort();
}
then, later:
sockaddr6 = (struct sockaddr_in6 *) source->ai_addr;
op6.ip.ip6_src = sockaddr6->sin6_addr;
and:
op6.udp.check =
checksum6(op6.ip, op6.udp, (u_int8_t *) & message, messagesize);
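The checksum6 function referenced above is not shown in the answer. For reference, here is a minimal sketch of an ICMPv6 checksum over the pseudo-header plus message, assuming the ICMPv6 packet is already serialized with its checksum field zeroed; all names are illustrative, not the answer's actual implementation:
#include <stdint.h>
#include <stddef.h>
#include <netinet/in.h>

/* One's-complement checksum over the IPv6 pseudo-header (RFC 2460,
   section 8.1) followed by the ICMPv6 header and payload. `icmp` must
   point at the serialized message with its checksum field set to zero. */
uint16_t icmp6_checksum(const struct in6_addr *src,
                        const struct in6_addr *dst,
                        const uint8_t *icmp, size_t len)
{
    uint32_t sum = 0;
    const uint8_t *p;
    size_t i;

    /* Pseudo-header: source and destination addresses... */
    for (p = (const uint8_t *)src, i = 0; i < 16; i += 2)
        sum += (uint32_t)(p[i] << 8 | p[i + 1]);
    for (p = (const uint8_t *)dst, i = 0; i < 16; i += 2)
        sum += (uint32_t)(p[i] << 8 | p[i + 1]);

    /* ...upper-layer packet length and next header (58 = ICMPv6). */
    sum += (uint32_t)(len >> 16) + (uint32_t)(len & 0xFFFF);
    sum += IPPROTO_ICMPV6;

    /* The ICMPv6 message itself, as 16-bit big-endian words. */
    for (i = 0; i + 1 < len; i += 2)
        sum += (uint32_t)(icmp[i] << 8 | icmp[i + 1]);
    if (len & 1)
        sum += (uint32_t)(icmp[len - 1] << 8);  /* odd trailing byte */

    /* Fold the carries and return the one's complement. */
    sum = (sum >> 16) + (sum & 0xFFFF);
    sum += sum >> 16;
    return (uint16_t)~sum;
}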

AT91SAM7X512 reset type issue

I am using an AT91SAM7X512 for my application. I perform a software reset after a certain action, and the processor resets. But upon reading the RSTC_RSR status register I get an invalid value for the reset type: RSTC_RSR = 0x700, which translates to an RSTTYP field value of 111. This condition is not defined in the data sheet. I am reading the reset type using the statement unsigned int buffer = AT91C_RSTC_RSTTYP;.
AT91C_RSTC_RSTTYP is the constant 0x700; it is the bitmask for the RSTTYP bits in the RSTC_SR register (defined in AT91SAM7X512.h):
#define AT91C_RSTC_RSTTYP (0x7 << 8) // (RSTC) Reset Type
To read the register there is a pointer AT91C_RSTC_RSR:
#define AT91C_RSTC_RSR (AT91_CAST(AT91_REG *) 0xFFFFFD04) // (RSTC) Reset Status Register
So
unsigned int buffer = *AT91C_RSTC_RSR;
should work for reading the register (but I didn't test it).
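To isolate just the RSTTYP field from that value, mask and shift; a small, equally untested sketch along the same lines:
/* Read the status register once, then extract the 3-bit RSTTYP
   field (bits 10:8); the datasheet lists the meaning of each value. */
unsigned int status = *AT91C_RSTC_RSR;
unsigned int rsttyp = (status & AT91C_RSTC_RSTTYP) >> 8;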

Linux splice() returning EINVAL ("Invalid argument")

I'm trying to experiment with using splice (man 2 splice) to copy data from a UDP socket directly to a file. Unfortunately the first call to splice() returns EINVAL.
The man page states:
EINVAL Target file system doesn't support splicing; target file is opened in
append mode; neither of the descriptors refers to a pipe; or offset
given for nonseekable device.
However, I believe none of those conditions apply. I'm using Fedora 15 (kernel 2.6.40-4) so I believe splice() is supported on all filesystems. The target file should be irrelevant in the first call to splice, but for completeness I'm opening it via open(path, O_CREAT | O_WRONLY | O_TRUNC, S_IRUSR | S_IWUSR). Both calls use a pipe and neither call uses an offset besides NULL.
Here's my sample code:
int sz = splice(sock_fd, 0, mPipeFds[1], 0, 8192, SPLICE_F_MORE);
if (-1 == sz)
{
    int err = errno;
    LOG4CXX_ERROR(spLogger, "splice from: " << strerror(err));
    return 0;
}

sz = splice(mPipeFds[0], 0, file_fd, 0, sz, SPLICE_F_MORE);
if (-1 == sz)
{
    int err = errno;
    LOG4CXX_ERROR(spLogger, "splice to: " << strerror(err));
}
return 0;
sock_fd is initialized by the following pseudocode:
int sock_fd = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
setsockopt(sock_fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
fcntl(sock_fd, F_SETFL, flags | O_NONBLOCK);
bind(sock_fd, ...);
Possibly related is that this code snippet is running inside a libevent loop. libevent is using epoll() to determine if the UDP socket is hot.
Found my answer. tl;dr - UDP isn't supported on the inbound side.
After enough Googling I stumbled upon a forum discussion and some test code which prints out a table of in/out fd types and their support:
$ ./a.out
in\out pipe reg chr unix tcp udp
pipe yes yes yes yes yes yes
reg yes no no no no no
chr yes no no no no no
unix no no no no no no
tcp yes no no no no no
udp no no no no no no
Yeah, it is definitely not supported for reading from a UDP socket, even in the latest kernels. References to the kernel source follow.
splice invokes do_splice in the kernel, which calls do_splice_to, which calls the splice_read member in the file_operations structure for the file.
For sockets, that structure is defined as socket_file_ops in net/socket.c, which initializes the splice_read field to sock_splice_read.
That function, in turn, contains this line of code:
if (unlikely(!sock->ops->splice_read))
return -EINVAL;
The ops field of the socket is a struct proto_ops. For an IPv4 UDP socket, it is initialized to inet_dgram_ops in net/ipv4/af_inet.c. Finally, that structure does not explicitly initialize the splice_read field of struct proto_ops; i.e., it initializes it to zero.
So sock_splice_read returns -EINVAL, and that propagates up.
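Given that, the pragmatic fallback for the original goal (UDP socket to file) is an ordinary read()/write() loop. A minimal sketch, reusing the question's sock_fd and file_fd:
#include <unistd.h>

/* Drain available datagrams from the non-blocking UDP socket into the
   file. Each read() returns one datagram; the loop stops on EAGAIN. */
static void drain_socket(int sock_fd, int file_fd)
{
    char buf[8192];
    ssize_t n;

    while ((n = read(sock_fd, buf, sizeof buf)) > 0)
    {
        if (write(file_fd, buf, (size_t)n) != n)
            break; /* short write or write error; handle as needed */
    }
}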