x265: Set max NAL/slice size via libx265

I want to send my encoded NAL packets via UDP for a fast webcam streaming program.
Because of the MTU size, I want to set the NAL packets to a max size of around 1390 bytes.
I've found another answer on Stack Overflow explaining that one has to set i_slice_max_size, but that was for x264. I've been trying to find the equivalent for x265 but I cannot see it anywhere.
I am using the libx265 (using x265.h) library for encoding.
Can anyone guide me in the right direction please? All help is greatly appreciated!

AFAIK there is no such option in libx265 at the moment (it is at an earlier stage of development and not all use cases are covered yet). Also, as far as I can tell, it currently doesn't support more than one slice per frame at all, let alone size-limited slices.

Related

WebRTC Data Channel - max data size?

I'm using the WebRTC data channel to send JSON data. Seems to be working fine for small packets of data.
However, when I try to send a larger package (the HTML of a webpage, base64 encoded, so maybe a few hundred KB), it never makes it to the other end.
Is there a max size?
I think the spec doesn't say a word about a max data size. In practice, 16 KB is the maximum. Take a look at this blog post, especially the throughput / packet size diagram. That limit was established experimentally and is the one that allows the most compatibility between WebRTC implementations.
I've managed to send packets as large as 256 KB (and even larger, if memory serves me right) between two Firefox instances. This was about a year ago, and since then the implementation may have changed, and the max data size with it.
If you want to send packets larger than 16 KB you have to fragment them first. The fragmentation has to be implemented as part of your application protocol.
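For illustration, here is a rough sketch of what such application-level fragmentation could look like, in plain Python rather than any particular WebRTC library; the msg_id:index:total header layout and the chunk size are assumptions for the example, not part of any spec:

CHUNK_SIZE = 16000  # stay a little under the ~16 KB practical limit, leaving room for the header

def fragment(msg_id, payload):
    """Split one payload into self-describing fragments."""
    total = (len(payload) + CHUNK_SIZE - 1) // CHUNK_SIZE
    for index in range(total):
        chunk = payload[index * CHUNK_SIZE:(index + 1) * CHUNK_SIZE]
        yield ("%d:%d:%d:" % (msg_id, index, total)).encode() + chunk

class Reassembler:
    def __init__(self):
        self.partial = {}  # msg_id -> {index: chunk}

    def feed(self, packet):
        """Return the complete payload once every fragment has arrived, else None."""
        msg_id, index, total, chunk = packet.split(b":", 3)
        msg_id, index, total = int(msg_id), int(index), int(total)
        chunks = self.partial.setdefault(msg_id, {})
        chunks[index] = chunk
        if len(chunks) == total:
            del self.partial[msg_id]
            return b"".join(chunks[i] for i in range(total))
        return None

Each fragment goes out over the data channel as its own message; the receiver feeds every incoming message to the reassembler until it hands back the full payload.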

Retrieve data from USRP N210 device

The N210 is connected to the RF frontend, which gets configured using the GNU Radio Companion.
I can see the signal with the FFT plot; I need the received signal (usrp2 output) as digital numbers. The usrp_sense_spectrum.py script outputs the power and noise_floor as digital numbers as well.
I would appreciate any help from your side.
Answer from the USRP/GNU Radio mailing lists:
Dear Abs,
you've asked this question on discuss-gnuradio and already got two
answers. In case you've missed those, and to avoid people telling
you what you already know:
Sylvain wrote that, due to a large number of factors contributing to
what you see as digital amplitude, you will need to calibrate it
yourself, using exactly the system you want to use to measure power:
You mean you want the signal power as a dBm value ?
Just won't happen ... Too many things in the chain, you'd have to
calibrate it for a specific freq / board / gain / temp / phase of the
moon / ...
And I explained that if you have a mathematical representation of how
your estimator works, you might be able to write a custom estimator
block for both of your values of interest:
I assume you already have definite formulas that define the estimator for these two numbers.
Unless you can directly "click together" that estimator in GRC, you will most likely have to implement it.
In many cases, doing this in Python is rather easy (especially if you come from a Python or MATLAB background),
so I'd recommend reading at least the first 3 chapters of
https://gnuradio.org/redmine/projects/gnuradio/wiki/Guided_Tutorials
If these answers didn't help you out, I think it would be wise to
explain what these answers are lacking, instead of just re-posting the
same question.
Best regards, Marcus
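For reference, a minimal custom Python block of the kind Marcus describes might look like the sketch below. It assumes GNU Radio's gr.sync_block Python API; the block name, the alpha parameter and the running-average "estimator" are placeholders for whatever formulas you actually derived:

import numpy as np
from gnuradio import gr

class power_estimator(gr.sync_block):
    """Toy estimator: running average of |x|^2, in (uncalibrated) dB."""

    def __init__(self, alpha=0.01):
        gr.sync_block.__init__(self,
                               name="power_estimator",
                               in_sig=[np.complex64],   # IQ samples from the USRP source
                               out_sig=[np.float32])    # one estimate per input sample
        self.alpha = alpha
        self.avg = 0.0

    def work(self, input_items, output_items):
        x = input_items[0]
        out = output_items[0]
        for i in range(len(x)):
            inst = abs(x[i]) ** 2                       # instantaneous power
            self.avg = (1 - self.alpha) * self.avg + self.alpha * inst
            out[i] = 10 * np.log10(self.avg + 1e-12)    # relative dB, not dBm
        return len(out)

Such a block can be dropped into GRC (e.g. as an Embedded Python Block) between the UHD: USRP Source and a sink, keeping in mind Sylvain's point that the result is relative, not calibrated, power.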
I suggest that you write a Python application and stream raw UDP bytes from the USRP block. Simply add a UDP Sink block and connect it to the output of the UHD: USRP Source block. Select an appropriate port and stream to 127.0.0.1 (localhost).
Now, in your Python application, open a listening UDP socket on the same port and receive the data. Each sample from the UHD: USRP Source is a complex pair of single-precision floats. This means 8 bytes per sample. The I float comes first, followed by the Q float.
Note that you need to pay special attention to the Payload Size field in the UDP Sink. Since you are streaming to localhost, you can use a very large value here; I suggest something like 1024*8. This means that each packet will contain 1024 IQ pairs.
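As a rough sketch of the receiving side, assuming the UDP Sink streams to 127.0.0.1 with the payload size above (the port number below is just a placeholder you would match to the flowgraph):

import socket
import numpy as np

PORT = 12345          # must match the port set in the UDP Sink block
PAYLOAD = 1024 * 8    # 8192 bytes = 1024 complex64 IQ pairs

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", PORT))

while True:
    data, _ = sock.recvfrom(PAYLOAD)
    # Each sample is 8 bytes: float32 I followed by float32 Q, which is
    # exactly numpy's complex64 layout, so no manual unpacking is needed.
    samples = np.frombuffer(data, dtype=np.complex64)
    print(len(samples), "samples, first:", samples[0])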
I suggest you first connect a Signal Source and just pipe a sine wave over the UDP socket into your Python or C application. This will allow you to verify that you are reading the float bytes correctly. Make sure to check for glitches due to overflowing buffers (this will be your biggest problem).
Please comment or update your post if you have further questions.

Enabling 10 Hz sampling rate in Ublox modules

I'm using a u-blox NEO-M8N-0-01 GNSS module.
This module supports up to 5 Hz GPS+GLONASS and 10 Hz GPS only.
However, when I try to change the sampling rate (via UBX-CFG-RATE in the messages view), I can only increase it to 5 Hz (measurement period = 200 ms). Any value below 200 ms is rejected (the box turns pink).
This happens even if I only output the NMEA GxGGA message.
The way I restricted it to GPS only was via UBX-CFG-GNSS.
Has anyone encountered this issue?
Thanks in advance
Roi Yozevitch
You don't say how you are setting the rate; however, going by your description, I'm assuming you are using the u-blox u-center software.
There is a simple explanation for this issue and a simple solution: their software has a bug in it (or wasn't updated to match the final specification of the part).
The solution is to not use u-center; it's the PC software that's complaining, not the receiver. The receiver itself doesn't care what the spec sheet says; it will try its best to run at whatever rate you request.
Sending commands directly, I've managed to get a fairly reliable 10 Hz GPS+GLONASS. There is the occasional missing point, but most of the time it keeps up.
Running GPS only you can get faster than 10 Hz. If you play with the settings and restrict it to 8 channels, 18-19 Hz is fairly reliable. Unfortunately, 20 Hz is pushing it too far; you end up getting positions at 10 Hz.
Obviously, when running at these update rates, make sure that your baud rate is high enough to cope with the requested messages and rate.
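For what it's worth, here is a sketch of sending UBX-CFG-RATE directly from a script instead of through u-center. The framing (sync chars, class/ID 0x06 0x08, little-endian length, Fletcher-style checksum) follows the u-blox UBX protocol description; the serial port name and baud rate below are just placeholders for your setup:

import struct
import serial  # pyserial

def ubx_packet(msg_class, msg_id, payload):
    """Frame a UBX message: sync chars, class, ID, length, payload, checksum."""
    body = struct.pack("<BBH", msg_class, msg_id, len(payload)) + payload
    ck_a = ck_b = 0
    for b in body:                      # checksum runs over class..payload
        ck_a = (ck_a + b) & 0xFF
        ck_b = (ck_b + ck_a) & 0xFF
    return b"\xb5\x62" + body + bytes([ck_a, ck_b])

# UBX-CFG-RATE: measRate = 100 ms (10 Hz), navRate = 1 cycle, timeRef = 1 (GPS time)
cfg_rate = ubx_packet(0x06, 0x08, struct.pack("<HHH", 100, 1, 1))

port = serial.Serial("/dev/ttyACM0", 115200, timeout=1)  # adjust port and baud to your setup
port.write(cfg_rate)

Lower measRate further if you want to experiment with the 18-19 Hz GPS-only rates mentioned above, and keep the baud-rate caveat in mind.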

Integrating Kollmorgen AKD Basic motor drive using TCP/IP protocol in LabVIEW

My team and I are new to the Kollmorgen AKD Basic motor drive and are working with it for the first time using its TCP/IP interface with LabVIEW.
We can write/set various variables successfully but are facing an issue while reading settings and variables from the drive. The problem is that we don't know the exact number of bytes to read from the Kollmorgen AKD Basic drive for a particular command; the actual number of bytes written back by the drive differs from what is documented. For example, per the documentation, a read request for the value stored in the USER.INT6 variable should write back a DWORD, i.e. 4 octets. If USER.INT6 contains the value 1, I get '{CR}{LF}--' when I read 4 bytes; if I read 8 bytes, I get '{CR}{LF}-->1{CR}{LF}', where {CR} is the carriage return character and {LF} is the line feed character. If USER.INT1 contains the value 100, I get '{CR}{LF}-->100' on reading 8 bytes, and so if USER.INT6 contains the value 1000, I have to read 9 bytes.
This is happening with all the other variables as well. The real problem is that I don't know at run-time exactly what value a variable will have, and therefore how many bytes I need to read to get the complete value. I am sure I am not the first to face this issue, and there must be a way to overcome it, so I am seeking the help of seasoned experts. Please let me know.
Thanks and Regards,
Sandeep
I have no experience with that particular device, but in general, if it doesn't return a known number of bytes, then you're basically down to reading one byte at a time until you see the terminator.
In the specific case of CRLF, you can configure the TCP Read primitive to use a terminated mode using the mode input, so I believe that should work in your case, but I never tried it myself.
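The logic itself is just "accumulate bytes until CRLF appears". If you want to prototype the parsing outside LabVIEW first, a plain-socket Python sketch could look like this; the IP address, port, and request syntax are placeholders inferred from the question, so check them against the AKD documentation:

import socket

def read_line(sock, terminator=b"\r\n", max_len=4096):
    """Accumulate bytes until the CRLF terminator, skipping blank lines."""
    buf = b""
    while len(buf) < max_len:
        chunk = sock.recv(1)           # one byte at a time keeps the logic simple
        if not chunk:                  # connection closed
            break
        buf += chunk
        if buf.endswith(terminator):
            line = buf[:-len(terminator)]
            if line:                   # replies appear to start with a bare CRLF
                return line
            buf = b""                  # skip the leading blank line and keep reading
    return buf

s = socket.create_connection(("192.168.0.10", 23))  # placeholder address for the drive
s.sendall(b"USER.INT6\r\n")                         # request syntax assumed -- see the drive docs
print(read_line(s))                                 # e.g. b'-->1'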
I would suggest altering the TCP/IP read mode from standard to CRLF; I have a feeling that your device terminates its messages with a CRLF string. If you specify a large enough number of bytes to read (e.g. 20), it will try to read all of those bytes or stop once it receives the CRLF combo.
Could you alter the display to hex? I have a feeling that your '-->' is actually the number of bytes in the response.
It would help if you post your code!
From a quick glance at the Kollmorgen site, it looks like this drive uses Modbus TCP/IP. I suggest using the LabVIEW Modbus Library: http://sine.ni.com/nips/cds/view/p/lang/en/nid/201711
Check out the Modbus article on Wikipedia to learn the specs: http://en.wikipedia.org/wiki/Modbus
You can get support for this from Kollmorgen itself. They have Application Engineers based in Pune.

Simple robust error correction for transmission of ASCII over serial (RS485)

I have a very low speed data connection over serial (RS485):
9600 baud
actual data transmission rate is about 25% of that.
The serial line is going through an area of extremely high EMR. Peak fluctuations can reach 3000 kV.
I am not in a position (yet) to force a change in the physical medium, but could easily offer to put in a simple robust forward error correction scheme. The scheme needs to be easy to implement on a PIC18 series micro.
Ideas?
This site claims to implement Reed-Solomon on the PIC18. I've never used it myself, but perhaps it could be a helpful reference?
Search for the CRC algorithm used in the Modbus RTU protocol (the ASCII variant of Modbus uses a simpler LRC).
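For reference, both checks are small enough to carry over to a PIC18. Here is a Python sketch of the two algorithms, handy for generating test vectors (the example frame at the end is arbitrary):

def crc16_modbus(data):
    """CRC-16 as used by Modbus RTU (reflected polynomial 0xA001, initial value 0xFFFF)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

def lrc(data):
    """LRC as used by Modbus ASCII: two's complement of the byte sum, truncated to 8 bits."""
    return (-sum(data)) & 0xFF

# Example: append the CRC, low byte first (Modbus RTU convention), to an arbitrary frame.
frame = b"\x01\x03\x00\x00\x00\x02"
crc = crc16_modbus(frame)
frame += bytes([crc & 0xFF, crc >> 8])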
I develop with PIC18 devices and currently use the MCC18 and PICC18 compilers. I noticed a few weeks ago that the peripheral headers for PICC18 incorrectly map the Busy2USART() macro to the TRMT bit instead of the TRMT2 bit. This caused me major headaches for a short time before I discovered the problem. For example, a simple transmission:
putc2USART(*p_value++);
while (Busy2USART());   /* wait for the byte to leave the shift register */
putc2USART(*p_value);
When the Busy2USART() macro was incorrectly mapped to the TRMT bit, I was never waiting for bytes to leave the shift register because I was monitoring the wrong bit. Before I discovered the inaccurate header file, the only way I was able to successfully transmit a byte over RS485 was to wait 1 ms between bytes. My baud rate was 91912, and the delays between bytes killed my throughput.
I also suggest implementing checksums and a means of collision detection. Checksums are cheap, even on a PIC18. If you are able to listen to your own transmissions, do so; it will let you detect collisions that may result from duplicate addresses on the same loop or from incorrect timings.