Adaptive AUTOSAR manifest files: what do Manifest.json and Manifest.arxml contain? Is the JSON file created from the ARXML?

I am quite new to Adaptive AUTOSAR; could someone explain what the Manifest does exactly? I noticed that in each (Platform) folder there is a manifest.json.
But my understanding from the AUTOSAR documents was that the Manifest is supposed to be an ARXML file.
So does the Execution Manager in the platform parse this .json file?
How are these .json files created, and how do they fit into the Adaptive AUTOSAR platform?
And what exact information is inside these .json and .arxml files?

The standardized manifest content is formalized in the AUTOSAR XML Schema, so it is possible to create an ARXML model that covers the standardized manifest content.
However, stack vendors are free to convert the standardized ARXML content, plus vendor-specific extensions, into any format for configuration on the device.
JSON just turns out to be quite popular, but (as mentioned before) there is no actual restriction to JSON in place.

The term Manifest is used for the formal specification of configuration.
Here is the [link][1] to the official specification for Adaptive AUTOSAR.
The .arxml format is standardized by the AUTOSAR consortium.
However, that does not mean the .arxml file is what gets uploaded to the actual machine and parsed by the software. Every vendor is free to define and use a custom format for the uploadable file. It could be JSON, as in your case, but it really depends on the vendor stack (Vector, Elektrobit, ETAS, etc.).
The modelling is captured and maintained (under software configuration management such as git) in the form of ARXML files. A vendor-specific tool may convert a set of ARXML files (not a single file, but a set that makes sense together) into an uploadable format like JSON; those files are then placed on the target machine or ECU and used by the software.
Bottom line:

- ARXML is used to define or specify the configuration.
- Formats like JSON are derived from a set of ARXML files and are what is actually used on the machine.
[1]: https://www.autosar.org/fileadmin/user_upload/standards/adaptive/17-03/AUTOSAR_TPS_ManifestSpecification.pdf
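
To make the ARXML-to-JSON step concrete, here is a minimal Python sketch of the kind of conversion a vendor tool might perform. The element names (PROCESS, SHORT-NAME, EXECUTABLE) and the JSON layout are illustrative assumptions, not any vendor's actual schema:

```python
import json
import xml.etree.ElementTree as ET

def arxml_to_manifest(arxml_path: str, json_path: str) -> None:
    """Flatten a (hypothetical) ARXML process model into a manifest.json."""
    root = ET.parse(arxml_path).getroot()
    processes = []
    for proc in root.iter("PROCESS"):  # assumed element name
        processes.append({
            "name": proc.findtext("SHORT-NAME"),        # assumed child element
            "executable": proc.findtext("EXECUTABLE"),  # assumed child element
        })
    with open(json_path, "w") as f:
        json.dump({"processes": processes}, f, indent=2)

arxml_to_manifest("execution_manifest.arxml", "manifest.json")
```

A real tool chain would of course validate against the AUTOSAR schema and merge several ARXML files, but the principle is the same: the ARXML is the source of truth, and the JSON is a derived, machine-friendly artifact.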

How to inject data in a .bin file in a post compilation script?

Purpose
I want my build system to produce one binary file that includes:

- The bootloader
- The application binary
- The application header (for the bootloader)
Here's a small overview of the memory layout (nothing out of the ordinary here): [memory-layout diagram omitted]
The build system already concatenates the bootloader and the application in a post-compilation script.
In other words, only the header is missing.
Problem
What's the best way to generate the application header and inject it into the final binary?
Possible solutions

- Create a .bin file just for the header and use cat to inject it into my final binary
- Use the linker file to hardcode the header (is this possible?)
- Use a script that reads the final binary and patches the header in
- Other?

What is the best way to inject data into the image in a post-compilation script?
SRecord is a great tool for doing all kinds of manipulation on binary files and the other file types used for embedded code images.
In this case, given a binary bootheader.bin to insert at offset 0x8000 in image.bin:

    srec_cat bootheader.bin -binary -offset 0x8000 -o image.bin -binary

(Note the trailing -binary: without it, srec_cat writes the output as Motorola S-records rather than raw binary.) The tool is somewhat arcane, but the documentation includes numerous examples covering various common tasks.
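
If you prefer the plain-script route (the third option above), here is a minimal Python sketch. The header layout (magic number, application length, CRC32) and all offsets are made-up assumptions for illustration; adjust them to whatever your bootloader actually expects:

```python
import struct
import zlib

HEADER_OFFSET = 0x8000  # assumed location of the application header
APP_OFFSET = 0x8100     # assumed start of the application binary
MAGIC = 0xDEADBEEF      # made-up magic number the bootloader would check

def inject_header(image_path: str) -> None:
    with open(image_path, "rb") as f:
        image = bytearray(f.read())
    app = image[APP_OFFSET:]  # everything after the header is the application
    # Little-endian: magic, application length, CRC32 of the application.
    header = struct.pack("<III", MAGIC, len(app), zlib.crc32(app))
    image[HEADER_OFFSET:HEADER_OFFSET + len(header)] = header
    with open(image_path, "wb") as f:
        f.write(image)

inject_header("image.bin")
```

The advantage of computing the header in a post-build script like this is that length and CRC always match the binary that was actually produced, which is harder to guarantee with a hardcoded linker-file approach.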

Digital Asset Management tool for large files that are not photos or videos

Most DAMs that I have found are geared towards media like photos and videos. I need to manage large binary files like ISOs and IMG files.
Does anybody know of a DAM that can manage non-media files? Specifically, something that is on-premise? Going to a cloud DAM would be too expensive because of the amount of storage we would need and the bandwidth it would consume.
DAMs have specific functionality tailored towards visual content. For example, DAM systems will create previews for the files stored and, possibly, extract metadata from the file itself. In addition, they provide options to transform and download content in various formats. Since all of these options are part of the DAM package, I would not expect too much from them with respect to previews, metadata extraction, and transformations when it comes to large binary files such as ISO and IMG files.
You can, however, use most DAMs to upload any file you want. The system will simply take it and allow you to tag metadata against it. An example would be Elvis DAM, where you can simply upload content (I would use hot-folder uploads for large files) and tag it with metadata. You can create custom fields such as OS version, applications, etc., and store them against the ISO files. These become searchable, and the system will scale to hold all of this information and allow you to find your content quickly.
There might be other simpler and less expensive solutions out there that simply keep a file and assign metadata to it.
Try NeoFinder.
Its original incarnation was as a catalog program for CDs, but it supports extensive metadata for tagging, as well as pulling metadata from images.
https://www.cdfinder.de
We solved our need by using Git Large File Storage (LFS) to manage our large binary files. We tried out git-annex as well, which worked well, but in the end we went with Git LFS.

Are file extensions required to correctly serve web content?

We're using Amazon S3 to store and serve images, videos, etc. When uploading this content we also always set the correct content-type (image/jpeg, etc.).
My question is this: Is a file extension required (or recommended) with this sort of setup? In other words, will I potentially run into any problems by naming an image "example" versus "example.jpg"?
I haven't seen any issues with doing this in my tests, but wanted to make sure there aren't any exceptions that I may be missing.
Extensions are just a hint the OS uses to decide which program should open a file. As far as your scenario is concerned, as long as the Content-Type header specifies the type, the extension doesn't matter. But why in the world would you name a jpg file .txt, right?
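
For illustration, here is how the Content-Type can be set explicitly on upload with boto3; the bucket name and key below are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Upload an image under an extensionless key; clients will rely on the
# Content-Type response header rather than on the key name.
with open("example.jpg", "rb") as f:
    s3.put_object(
        Bucket="my-bucket",   # placeholder bucket name
        Key="example",        # note: no file extension
        Body=f,
        ContentType="image/jpeg",
    )
```

S3 will then serve GET /example with Content-Type: image/jpeg, which is what browsers use to decide how to render the response.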

What does the .dist extension used on some source code files mean?

Examples in the Zend tutorial:
phpunit.xml.dist
local.php.dist
TestConfig.php.dist
.dist files are often configuration files that do not contain the real-world, deploy-specific parameters (e.g. database passwords) and are there to help you get started with the application/framework faster. So, to get started with such frameworks, you make a copy without the .dist extension and customize that configuration file with your personal parameters, as shown in the sketch below.
One purpose I have seen for the .dist extension is to avoid publishing personal data on VCSs (say, git). So you, as the developer of a reusable app, would use your own configuration file, but put the de facto get-started config data in a separate .dist-suffixed file. (See part 4 of Symfony2's documentation.)
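
As a small illustration of that workflow, a Python sketch that copies every .dist file to its real name without overwriting existing local copies (a plain cp would do the same per file):

```python
import shutil
from pathlib import Path

# Copy every *.dist template to its real name, e.g.
# phpunit.xml.dist -> phpunit.xml, skipping files that already
# exist so local customizations are never overwritten.
for dist in Path(".").rglob("*.dist"):
    target = dist.with_suffix("")  # drop the trailing .dist
    if not target.exists():
        shutil.copy(dist, target)
```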

Reading a ptd/zgy file

Is there a way to read a ptd or zgy file outside of Petrel? I have an application that needs to read the 3D seismic data that Petrel holds in these formats, without opening Petrel to export the data to ASCII or some other format. Obviously it's a better user experience to read it directly from my own application.
You can use the ZGY access C++ library deployed with Petrel. It's named Slb.Salmon.ZgyPublic.zip and located in the Petrel root folder. The archive contains binaries (native DLLs), C++ header files, and documentation.
As for ptd, it is the extension of a folder name that contains files in many formats (binary, XML, etc.) belonging to one project. The project's main file has the pet extension and is stored in a binary format. There is no documentation on the format, and it may change without notice, so you are not supposed to read those files directly.