How to convert a FlatBuffers data file to JSON?

I used FlatBuffers to encode data into a file named 'person.txt'. How can I convert it into a JSON file?
I tried 'flatc --json person.txt person.json' but it failed.
I have 'person.fbs' and 'person.txt'. How do I do it?

You need to provide the path to the schema in the command you are executing as well:
flatc.exe --raw-binary -t <path to fbs schema file> -- <path to flatbuffer binary file>
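With the files from the question, the concrete invocation would look like this (assuming person.txt really is a binary FlatBuffers file despite its .txt extension):
# -t asks flatc to emit text (JSON); --raw-binary accepts a buffer without a file identifier
flatc --raw-binary -t person.fbs -- person.txt
This should write person.json to the current directory, since flatc names the output after the input binary.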

Related

How to get a binary schema from a text schema file?

I have a FlatBuffers text schema file, se.fbs.
How can I get the corresponding binary schema file?
This command does not work: flatc --cpp se.fbs --schema
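For what it's worth, a minimal sketch of what should work instead, based on flatc's documented flags (--schema has to be combined with the binary output flag -b):
# serialize the schema itself as a binary .bfbs file
flatc -b --schema se.fbs
This should produce se.bfbs next to the source schema.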

Okapi Java properties file and XLIFF file

Is it possible to use Okapi to convert Java properties files to XLIFF and to reconstruct the properties file from the XLIFF file?
Yes, this is possible using the Properties Filter.
An example of doing this using Okapi Tikal would look like this:
tikal.sh -fc okf_properties -x sample.properties -nocopy
# translate the resulting sample.properties.xlf file
tikal.sh -fc okf_properties -m sample.properties.xlf
You can also use this with Rainbow as part of an extraction pipeline.
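Note that, going by Tikal's usual naming defaults (worth verifying against your version), the merge step writes its result as sample.out.properties rather than overwriting the original file.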

How to read a mounted dbc file in Databricks?

I am trying to read a dbc file in Databricks (mounted from an S3 bucket).
The file path is:
file_location = "dbfs:/mnt/airbnb-dataset-ml/dataset/airbnb.dbc"
How can I read this file using Spark?
I tried the code below:
df = spark.read.parquet(file_location)
But it generates an error:
AnalysisException: Unable to infer schema for Parquet. It must be specified manually.
Thanks for the help!
You are using spark.read.parquet, but you want to read a dbc file; it won't work this way.
Don't use parquet; use load. Give the file path with the file name (without the .dbc extension) in the path parameter and dbc in the format parameter.
Try the code below:
df = spark.read.load(path='<file_path_with_filename>', format='dbc')
E.g.: df = spark.read.load(path='/mnt/airbnb-dataset-ml/dataset/airbnb', format='dbc')

Send MongoDB db.serverStatus() output to a text file

I am looking for the simplest way to redirect the output of db.serverStatus() from the mongo shell to a text file.
If I try the redirection symbol, db.serverStatus() >> myoutput.txt, I get: ReferenceError: myoutput is not defined.
You can use JavaScript to translate the result into printable JSON.
mongo dbname command.js > output.txt
where command.js contains this (or its equivalent):
printjson( db.serverStatus())
By the way, if you are running just a single JavaScript statement, you don't have to put it in a file; instead you can use:
mongo dbname --eval "printjson(db.serverStatus())" > output.txt
For reference: http://docs.mongodb.org/manual/tutorial/write-scripts-for-the-mongo-shell/
Explanation: the --eval option passes the mongo shell a JavaScript fragment that prints the result of db.serverStatus() as JSON, and the shell's output is then redirected to the output.txt file.
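One refinement worth knowing: the shell's connection banner lines also land in the file with the command above. The standard --quiet flag suppresses them, so output.txt contains only the JSON:
mongo dbname --quiet --eval "printjson(db.serverStatus())" > output.txt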

How do I check whether an input file is compressed (ZIP) or not?

How do I check whether an input file is compressed (ZIP) or not?
Is the solution to read the file info using the "Get File Names" step and check the extension field?
Use the "file" command if you're on Unix.
If not install cygwin and goto 1.
If this is related to your other question about conditionally reading different files then I would consider getting your files into a consistent format first. i.e. all compressed.
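If you'd rather not rely on the file utility, a minimal sketch is to inspect the magic number yourself: a non-empty ZIP archive typically starts with the bytes PK\x03\x04 (hex 50 4b 03 04). The filename input.dat below is just a placeholder:
# print the first four bytes as hex; "50 4b 03 04" indicates a ZIP local file header
head -c 4 input.dat | od -An -tx1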