Which RDF formats does GraphDB support for preserving target graphs?

The GraphDB import settings have an option 'Target graphs' with three possible values:
from data
the default graph
a named graph (to be specified)
There is a tooltip documentation that says:
Data is imported into one or more graphs. Some RDF formats may specify graphs, while others do not support that. The latter are treated as if they specify the default graph.
Which RDF formats does GraphDB recognize for specifying a graph?
In particular, is OWL serialized as RDF/XML one such format?
When I import a zip file of multiple *.owl files in RDF/XML format, all the triples get loaded into the default graph, even though I chose the 'from data' import option.
Can someone explain how to import ontologies into GraphDB with graphs taken from the data?

GraphDB is built on top of RDF4J and supports the same formats that RDF4J does.
.owl files (which are RDF/XML) do not support named graphs, unfortunately. That is a limitation of the RDF/XML serialization itself, rather than GraphDB-specific behaviour.
As for the formats that do support named graphs, and can be imported into RDF4J/GraphDB, here's a list:
TriX - XML-based RDF serialization. File extensions .xml or .trix.
TriG - extension on Turtle. File extension .trig.
TriG* - TriG with RDF-star support. File extension .trigs.
Binary RDF - for binary RDF documents. File extension .brf.
N-Quads - a line-based syntax for triples, with context support. File extension .nq.
JSON-LD - JSON serialization for linked data. File extension .jsonld.
RDF/JSON - another JSON serialization. File extension .rj.
Given that you have .owl files, which are serialized as XML, I'd suggest that the easiest option is to convert them to TriX and go from there.
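By way of illustration (all IRIs below are made up for the example), a minimal TriX document that places its triples into a named graph looks like this:

```xml
<TriX xmlns="http://www.w3.org/2004/03/trix/trix-1/">
  <graph>
    <!-- The first <uri> names the graph; triples in this element belong to it. -->
    <uri>http://example.org/graphs/ontology1</uri>
    <triple>
      <uri>http://example.org/Person</uri>
      <uri>http://www.w3.org/1999/02/22-rdf-syntax-ns#type</uri>
      <uri>http://www.w3.org/2002/07/owl#Class</uri>
    </triple>
  </graph>
</TriX>
```

With the 'from data' import option, the graph IRI in the `<uri>` element is what determines the target graph.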

Adaptive AUTOSAR manifest files: what do Manifest.json and Manifest.arxml contain? Is the JSON file created from the ARXML?

I am quite new to Adaptive AUTOSAR; could someone explain what the manifest does exactly? I noticed that each folder (platform) contains a manifest.json.
But my understanding from the AUTOSAR documents was that the manifest is supposed to be an ARXML file.
So does the Execution Manager in the platform need this .json file to parse?
How are these .json files created, and how do they fit into the Adaptive AUTOSAR platform?
And what exact information is inside these .json and .arxml files?
The standardized manifest content is formalized in the AUTOSAR XML Schema. Therefore, it is possible to create an ARXML model that covers the standardized manifest content.
However, stack vendors are free to convert the standardized ARXML content plus vendor-specific extensions into any format for the configuration on device.
JSON just turns out to be quite popular, but (as mentioned before) there is no actual limitation to JSON in place.
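For illustration only: a vendor-converted, on-device manifest fragment might look something like the following. The field names here are invented for the example; real on-device schemas are vendor-specific and not standardized.

```json
{
  "process": "sample_app",
  "executable": "/opt/apps/sample_app/bin/sample_app",
  "startup_configs": [
    {
      "machine_mode": "Running",
      "arguments": ["--verbose"],
      "scheduling_policy": "SCHED_OTHER"
    }
  ]
}
```

The same information (process, executable, startup configuration) would be modelled in ARXML against the standardized schema; only the serialization on the target differs.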
The term 'manifest' is used for the formal specification of configuration.
Here is the [link][1] to official specification for Adaptive AUTOSAR.
The .arxml format is standardized by the AUTOSAR consortium.
However, that does not mean the .arxml file is uploaded to the actual machine and parsed by the software. Every vendor has the freedom to define and use a custom format for the uploadable file. It could be JSON, as in your case, but that really depends on the vendor stack (Vector, Elektrobit, ETAS, etc.).
The modelling work is captured and maintained (under software configuration management, e.g. Git) in the form of ARXML files. A vendor-specific tool may convert a set of ARXML files (not a single file, but a set that makes sense together) into an uploadable format like JSON; the resulting files are then placed on the target machine or ECU and used by the software.
Bottom line:
ARXML is used to define or specify the configuration.
Formats like JSON are derived from a set of ARXML files and are what is actually used on the machine.
[1]: https://www.autosar.org/fileadmin/user_upload/standards/adaptive/17-03/AUTOSAR_TPS_ManifestSpecification.pdf

How to read/edit a Parasoft SOAtest .tst file by code or manually?

I need to read the .tst file as raw text or by code to extract the data keyed into the test suite (resources/assertors, ...). Is there any way to do that, either by code or with an editor?
If you have the binary format of the .tst file, there could be a problem: there is no official API to read it.
It's a very old format; I don't think it is still in use.
There are also two newer formats of .tst:
compressed XML
XML
In the case of compressed XML, you have to unzip it first; after that you have access to the XML, which you can read as a text file.
In the case of XML, it's just XML; you can read it as a plain text file.
There is no official API that lets you read it from code (e.g. in Java) in the same way as SOAtest's GUI does.
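For the two XML-based variants, a detection step like the following Python sketch is enough, since a compressed .tst is just a zip archive. The helper name `read_tst` is my own, not a Parasoft API; binary-format files are not handled.

```python
import zipfile


def read_tst(path):
    """Return the XML content of a .tst file saved as plain or compressed XML."""
    if zipfile.is_zipfile(path):
        # Compressed XML: unzip and read the first archive entry as text.
        with zipfile.ZipFile(path) as zf:
            first_entry = zf.namelist()[0]
            return zf.read(first_entry).decode("utf-8")
    # Plain XML: just read it as a text file.
    with open(path, encoding="utf-8") as f:
        return f.read()
```

From there you can feed the returned string to any XML parser to pull out the test-suite data.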

Is there a way to create an intermediate output from Sphinx extensions?

When Sphinx processes an rst-to-html conversion, is there a way to see an intermediate format after extensions have been processed?
I am looking for an intermediate rst file that is generated after the Sphinx extensions have run.
Any ideas?
Take a look at the "ReST Builder" extension: https://pythonhosted.org/sphinxcontrib-restbuilder/.
There's not much to say; the extension takes reST as input and outputs ...drumroll... reST!
Quote:
This extension is in particular useful to use in combination with the autodoc extension. In this combination, autodoc generates the documentation based on docstrings, and restbuilder outputs the result as reStructuredText (.rst) files. The resulting files can be fed to any reST parser, for example, they can be automatically uploaded to the GitHub wiki of a project.
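Assuming the sphinxcontrib-restbuilder package is installed alongside Sphinx, enabling it is a one-line change in conf.py:

```python
# conf.py fragment: register the ReST builder extension
# (assumes sphinxcontrib-restbuilder is installed).
extensions = ['sphinxcontrib.restbuilder']
```

Then run `sphinx-build -b rst <sourcedir> <outdir>`; the post-extension reST output lands in `<outdir>`.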

What file extension should be applied to NSCoder binary file format?

When using NSCoder and NSKeyedArchiver, I understand the data is stored in a binary format. What, then, is the most appropriate file extension for a storage file? Many tutorials use .plist, but I believe that should be reserved for text property lists (key/value pairs).
You would typically use a custom extension.

generate RDF document From OWL file

Is there any tool that can generate an RDF document from an OWL file?
Jena (http://jena.sourceforge.net/) will do this.
Also look at the OWLAPI (http://owlapi.sourceforge.net/), though personally I find it very awkward.