Converting a .ubx file to RINEX; not obtaining .nav files, why?

I'm trying to convert a .ubx file to RINEX .obs and .nav files with RTKLIB.
But I found that sometimes only the .obs file can be generated, while the .nav file cannot.
What kind of data do I need in the .ubx file to generate a .nav file?
Does anyone have any idea why?

First of all, you need to enable both message types, RXM-RAW and RXM-SFRB.
To make sure the conversion from .ubx files works properly, you should disable all other messages (use u-center). RTKLIB sometimes does not cope well with mixed files: UBX files are binary and should not be padded with NMEA messages.
My LEA-4T runs on a Raspberry Pi with a Python script I developed. It is a state machine that passes through only the RXM messages that are needed. It works well for me.
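For illustration, a minimal sketch of such a filter, assuming pyserial and a receiver attached at /dev/ttyAMA0 (port, baud rate, and output file are placeholders; class/ID 0x02 0x10 is RXM-RAW and 0x02 0x11 is RXM-SFRB per the u-blox binary protocol specification):

import serial

WANTED = {(0x02, 0x10), (0x02, 0x11)}  # RXM-RAW, RXM-SFRB

def filter_rxm(port="/dev/ttyAMA0", baud=57600, out_path="filtered.ubx"):
    with serial.Serial(port, baud) as src, open(out_path, "wb") as dst:
        while True:
            # Byte-wise resync on the UBX sync bytes 0xB5 0x62; anything
            # else (stray NMEA text, for instance) is simply discarded.
            if src.read(1) != b"\xb5" or src.read(1) != b"\x62":
                continue
            header = src.read(4)            # class, ID, length (little-endian)
            length = header[2] | (header[3] << 8)
            body = src.read(length + 2)     # payload plus CK_A/CK_B checksum
            if (header[0], header[1]) in WANTED:
                dst.write(b"\xb5\x62" + header + body)

Since only valid UBX frames of the wanted types are copied, NMEA sentences never reach the output file.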

To create suitable RINEX files you need a u-blox chip with the "raw data option" enabled. A normal u-blox chip does not have that option, but for a fee u-blox can probably update the chip's firmware to output those RAW messages.
These RAW messages contain the "original" data of the satellite signal, so your .ubx file probably does not contain them.
See also the u-blox binary protocol specification on the u-blox web page, where the RAW messages are described.

Configure .eds file to map channels of a CANopen Client PLC

In order to use a PLC as a client (formerly "slave"), one has to configure the PDO channels, since the manufacturer's default values are often not suitable. In my case, I need the PDOs to send INT values instead of the default UNSIGNED8 (see picture).
Hence my question: what workflow would you recommend for mapping the CANopen client PDO channels?
I found the following workflow suitable; however, I would appreciate any improvements and recommendations!
1. Start by locating the .eds file from the manufacturer. The image shows this in the B&R Automation Studio programming environment.
2. Open the file in an EDS editor; I found the free Vector CANeds editor very useful. Delete all RxPDOs and RxPDO mappings that you don't need.
3. Assign the needed data type (e.g. INTEGER16) and channel name ("1 Byte In (1)").
4. Add the necessary PDOs and PDO mappings from the database. (This might actually be a bug, but if I just edit the PDOs without deleting and recreating them, I always receive error messages.)
5. Map the data to the channels. Don't forget to write the number of channels into the first entry (in this image: 1601sub0); a sketch of what those entries look like follows below.
6. Check the .eds file for errors (press F5) and copy the .eds file back to the original location from step 1.
7. Add the PLC client device in Automation Studio and you should see the correct mappings.
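For orientation, the mapping entries for 0x1601 in the .eds file look roughly like this. This is a hypothetical excerpt in the CiA 306 EDS syntax; the mapped value 0x62000110 (index 0x6200, sub-index 0x01, bit length 0x10 = 16 bits) and the data type codes (0x0005 = UNSIGNED8, 0x0007 = UNSIGNED32) are illustrative, not taken from a real device file:

[1601sub0]
ParameterName=Number of mapped objects
ObjectType=0x7
DataType=0x0005
AccessType=rw
DefaultValue=1

[1601sub1]
ParameterName=Mapped object 1
ObjectType=0x7
DataType=0x0007
AccessType=rw
DefaultValue=0x62000110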
(PS: I couldn't make the images smaller ... any recommendations about formatting this question are welcome!)

How to tell from a media file if it's a video or an audio? [duplicate]

This question already has answers here:
Using ffprobe to check if file is audio or video only
(5 answers)
Closed 1 year ago.
I have some files, audio and video, stored in an AWS S3 bucket; they can have different containers and so on, and I would like to send them to AWS MediaConvert (MC). My issue is that I need to know whether a file is audio or video, because I need to configure MC accordingly. The main app on the server is PHP based, but that may not matter here.
I have ffmpeg, and I already use ffprobe to get the file duration; however, I can't tell from the returned data whether it's a video or an audio-only file. In another question someone said that I might be able to get this info using ffmpeg, but as far as I know that requires me to download the file from S3 to the server.
There is also an S3 function that returns the headers of a specific object, which contain the file's MIME type; however, this sometimes returns audio/mpeg for video files, which then configures MC incorrectly.
I could also check the error message from MC's first job creation, which tells me about the issue, and resend the file with the correct configuration, but then the job is marked as failed with an error, which doesn't feel right.
So my question is: what would be the best approach to tell from a media file whether it's video or audio only? It's possible that ffmpeg or ffprobe can do it remotely; I just don't know the tools deeply enough to be sure it can only work the way I described.
You could use
ffprobe -v quiet -print_format json -show_format -show_streams sample.mp4
The above command outputs the input file's (here: sample.mp4) parameters along with details of its audio and video streams. You can see a sample output here.
What you're interested in is .streams[].codec_type, which tells you whether the input file contains only audio, only video, or both. You can use jq for that, for example:
ffprobe -v quiet -print_format json -show_format -show_streams sample.mp4 | jq .streams[].codec_type
"video"
"audio"
or parse the output JSON in any other way.
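Note also that ffprobe accepts HTTP(S) URLs as input, so a presigned S3 URL can be probed directly without downloading the object to the server first. Below is a minimal sketch of the same check driven from code, in Python for brevity (it assumes ffprobe is on the PATH; the source argument is a placeholder such as a local path or a presigned S3 link; in PHP the equivalent would be shell_exec plus json_decode):

import json
import subprocess

def media_kind(source):
    # Probe a local path or an http(s) URL and collect the codec_type
    # of every stream reported by ffprobe.
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_streams", source],
        capture_output=True, text=True, check=True)
    types = {s["codec_type"] for s in json.loads(result.stdout).get("streams", [])}
    if "video" in types:
        return "video"
    if "audio" in types:
        return "audio"
    return None

print(media_kind("sample.mp4"))  # e.g. "video"

One caveat: embedded cover art in an audio file (e.g. MP3 album art) is reported as a video stream, so you may want to skip streams whose disposition marks them as attached pictures before deciding.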

Watson speech to text in Node-RED crashes Node-RED application

I am using Node-RED and trying to store text-to-speech data in a Cloudant database. That works fine, and I can also get it out in msg.payload.speech, but when I feed it into Speech To Text, my whole app crashes with this error:
ERR Dropped log message: message too long (>64K without a newline)
So it seems that the Speech To Text node cannot handle large messages. It also seems that Text To Speech produces a very long string regardless of what you inject; one word or a whole paragraph does not make any difference.
Is there a way to get around this issue in Node-RED?
What happens if you split the audio that you feed to the STT service into smaller chunks? Does that work? How much audio are you trying to feed?
If you give us more details about what you are trying to accomplish then we should be able to help.
Can you also explain the problem that you are experiencing with TTS? What do you mean by "Text To Speech makes a very long string regardless of what you inject"?
thank you
Thanks for your response.
What I basically want to do is use the S2T node in Node-RED. I have put a .wav file in a Cloudant database, and when I feed this .wav file into the S2T node, the app crashes. I used several ways to get the speech into the database: 1. via the Text To Speech node, 2. adding the .wav file to the database manually.
When I look in Cloudant, it is one long line of characters, so I placed the wave file on different lines, which did not help. Then I split the wave file into smaller chunks, which did not work either, probably because the wave file loses its structure.
The next thing I tried was a FLAC file, which is also supported by T2S and S2T and is compressed (roughly a factor of 10), so it would be less than 64K. But I got the message that only .wav files are supported. I then looked in the code of the S2T node and found that only .wav is indeed supported (even though the Watson S2T service in Bluemix supports more audio formats).
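On the chunking point above: cutting the raw bytes of a .wav file does break its structure, because only the first chunk keeps the RIFF header. Splitting on audio frame boundaries and writing a fresh header per chunk keeps every piece a valid .wav file. A minimal sketch in Python using the standard-library wave module (chunk length and file names are arbitrary placeholders):

import wave

def split_wav(path, seconds=30):
    # Split a WAV file into standalone chunks of roughly `seconds` each.
    # wave writes a proper RIFF header for every chunk, unlike a naive
    # byte-level split.
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = src.getframerate() * seconds
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            with wave.open("chunk%03d.wav" % index, "wb") as dst:
                dst.setparams(params)
                dst.writeframes(frames)
            index += 1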

Decoding base64 for a large file in Objective-C

I am consuming a web service that streams a PDF file to my iOS device. I use a SOAP message to interact with the web service and NSXMLParser's foundCharacters callback, and after the stream is complete I want to get the content of the PDF file out of the streamed XML that was created in the first step. The data I get is base64 encoded, and I have methods to decode the content back. For a small file the easiest approach is to read/collect all content with foundCharacters from the streamed file and call the base64-decoding method once I have the whole data in didEndElement.
(The above approach works fine; I tested it for this case and produced the correct PDF file out of it.)
Now my question is: what is the best approach (optimizing memory/speed) to read, decode, and write the final PDF from a big streamed file?
Is there any code available, or any thoughts on how to accomplish this in Objective-C?
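The usual pattern, whatever the language, is to decode incrementally: buffer the incoming characters, decode only complete 4-character base64 quanta as they arrive, and append the resulting bytes to a file, so memory use stays proportional to the chunk size rather than to the whole document. A sketch of that pattern, shown in Python for brevity (file names are placeholders; in Objective-C the same buffering would live in the foundCharacters callback, with the decoded NSData appended via NSOutputStream or NSFileHandle):

import base64

def decode_stream(chunks, out_path):
    # Incrementally decode base64 text chunks to a file. Partial
    # 4-character quanta are carried over between chunks.
    buf = ""
    with open(out_path, "wb") as out:
        for chunk in chunks:
            buf += "".join(chunk.split())       # drop whitespace/newlines
            usable = len(buf) - (len(buf) % 4)  # whole quanta only
            out.write(base64.b64decode(buf[:usable]))
            buf = buf[usable:]
        if buf:                                 # leftover means truncation
            raise ValueError("truncated base64 stream")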

How is file upload handled in HTTP?

I am curious to know how web servers handle file uploads.
Is the entire file sent as a single chunk? Or is it streamed into the web server, which puts it together and saves it in a temp folder for PHP etc. to use?
It's just a matter of following the encoding rules so that one can easily decode (parse) it. Read up on the specification for multipart/form-data encoding (the one required for HTML-based file uploads using input type="file").
Generally the parsing is done by the server-side application itself. The web server only takes care of streaming the bytes from one side to the other.
To answer the question directly: it's streamed. RFC 1867 describes the mechanism in detail.
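For illustration, a multipart/form-data upload looks roughly like this on the wire (host, boundary, and field/file names are made up, and the Content-Length header is omitted; each part is delimited by two dashes plus the boundary, and the closing delimiter carries two extra trailing dashes):

POST /upload HTTP/1.1
Host: example.com
Content-Type: multipart/form-data; boundary=----boundary42

------boundary42
Content-Disposition: form-data; name="file"; filename="photo.jpg"
Content-Type: image/jpeg

...raw bytes of the file...
------boundary42--

The server streams this body through and the application splits it on the boundary; PHP, for example, writes each file part to a temporary file and exposes it via $_FILES.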