How to generate Swagger with format application/json only? - shopware6

By default, the Swagger file includes both response formats, JSON:API and application/json.
This causes problems when generating Java classes and prevents the client from compiling.
I tested this with two different generators:
Swagger Code Generator 3.0.35 https://mvnrepository.com/artifact/io.swagger.codegen.v3/swagger-codegen/3.0.35 as well as
OpenAPI Generator in several versions https://mvnrepository.com/artifact/org.openapitools/openapi-generator-cli/3.3.4.
As a workaround, the Swagger file currently has to be adjusted manually and the JSON:API format deleted, which causes a lot of effort with every update.
Is it possible to generate the Swagger file containing only the application/json format?

No, that is not possible, and IMHO it also should not be possible, as that would undermine the whole point of having an OpenAPI schema.
The OpenAPI schema file is a specification of the Shopware API, and as Shopware offers both response formats it does not make sense to generate OpenAPI files that are incomplete; a specification that is not complete is by definition no longer a specification.
IMHO the issue lies with the code generators you want to use: they should skip media types they don't support instead of erroring.

Related

How to do multiple (simultaneous) versions of APIs with OpenAPI and express-openapi?

Re: https://www.npmjs.com/package/express-openapi#getting-started
On many projects I've worked on (in other technologies) we would always support multiple versions of an API simultaneously, e.g.
GET http://example.com/api/v1/pets
GET http://example.com/api/v2/pets
I'm baffled how to do this with express-openapi (or OpenAPI in general, for that matter).
The way the example files are named (./api-v1/api-doc.js, ./api-v1/paths/worlds.js, ./api-v1/services/worldsService.js) implies that to add a v2 version of the API, you'd have similar files (./api-v2/api-doc.js, etc.).
But the initialization code seems to provide no way to add a v2 version of the API to the same instance:
initialize({
  app,
  apiDoc: './api-v1/api-doc.yml',
  dependencies: {
    worldsService: v1WorldsService
  },
  paths: './api-v1/paths'
});
I'm trying to figure out how to do this (so far I can find no examples of this anywhere, even with other OpenAPI implementations). Am I missing something fundamental in how OpenAPI handles this? It seems very basic.
Or should we not be trying to have separate specification files for the separate versions, and put all the versions into one specification?
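For what it's worth, one pattern that seems consistent with the library's design is to keep a separate spec tree per version and mount each one on its own router. This is only an untested sketch: it assumes initialize() accepts an Express Router as its app argument (routers expose the same interface as an app), and v1WorldsService/v2WorldsService are the names carried over from the question:

```javascript
// Untested sketch: one spec tree per version, each mounted on its own router.
const express = require('express');
const { initialize } = require('express-openapi');

const app = express();

const v1 = express.Router();
initialize({
  app: v1,                          // assumption: a Router works here
  apiDoc: './api-v1/api-doc.yml',
  dependencies: { worldsService: v1WorldsService },
  paths: './api-v1/paths',
});
app.use('/api/v1', v1);

const v2 = express.Router();
initialize({
  app: v2,
  apiDoc: './api-v2/api-doc.yml',
  dependencies: { worldsService: v2WorldsService },
  paths: './api-v2/paths',
});
app.use('/api/v2', v2);
```

Whether the basePath inside each api-doc should then be / or /api/vN depends on how the library resolves paths, so that part needs experimenting. The alternative the question raises, one combined specification containing both /v1 and /v2 paths with a single initialize() call, also works, at the cost of one large spec file.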

Adaptive AUTOSAR manifest files: what do Manifest.json and Manifest.arxml contain? Is the JSON file created from the ARXML?

I am quite new to Adaptive AUTOSAR; could someone explain what the Manifest does exactly? I also noticed that each folder (platform) contains a manifest.json.
But my understanding from the AUTOSAR documents was that the Manifest is supposed to be an ARXML file.
So does the Execution Manager on the platform need to parse this .json file?
How are these .json files created, and how do they fit into the Adaptive AUTOSAR platform?
And what exact information is inside these .json and .arxml files?
The standardized manifest content is formalized in the AUTOSAR XML Schema. Therefore, it is possible to create an ARXML model that covers the standardized manifest content.
However, stack vendors are free to convert the standardized ARXML content plus vendor-specific extensions into any format for the configuration on device.
JSON just turns out to be quite popular, but (as mentioned before) there is no actual restriction to JSON in place.
The term Manifest is used for the formal specification of configuration.
Here is the [link][1] to the official specification for Adaptive AUTOSAR.
The .arxml format is standardized by the AUTOSAR consortium.
However, that does not mean the .arxml file is uploaded to the actual machine and parsed by the software there. Every vendor is free to define and use a custom format for the uploadable file. It could be JSON, as in your case, but that really depends on the vendor stack (Vector/Elektrobit/ETAS etc.).
The modelling work is captured and maintained (under software configuration management such as git) in the form of ARXML files. A vendor-specific tool may then convert a set of ARXML files (not a single file, but a set that makes sense together) into an uploadable format like JSON, which is placed on the target machine or ECU and used by the software.
Bottom line:
ARXML is used to define or specify configuration;
formats like JSON are derived from a set of ARXML files and are what is actually used on the machine.
[1]: https://www.autosar.org/fileadmin/user_upload/standards/adaptive/17-03/AUTOSAR_TPS_ManifestSpecification.pdf

Want to configure tika parsing to do OCR on PDFs only

I am trying to manipulate the Tika configuration file (using Tika server) to exclude all documents except PDFs from OCR processing. I have tried a number of combinations, such as excluding OCR from the default parser while configuring the PDF parser to do inline processing. I tried configuring the auto strategy. I excluded both PDF and Tesseract from the default parser. No luck.
I ended up running two Tika instances, one with OCR configured and one without, and directing files based on extension to one or the other in my code. I am using the Python Tika client. Is there a better way?
More generally, is there a comprehensive guide to configuring parser parameters in Tika? Most of what I have seen has been fragmentary. Thank you.
Do you know about ocrStrategy?
pdfParserConfig.setOcrStrategy(ocrStrategy)
where ocrStrategy is a value of the OCRStrategy enum. You can set OCR_ONLY for PDFs
and NO_OCR for other documents.
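If you are driving the server through tika-config.xml rather than Java code, the same idea can be expressed declaratively. This is only a sketch: the class names are the standard Tika parser classes, but whether excluding TesseractOCRParser from the default parser still lets the PDF parser invoke OCR internally varies between Tika versions, which may be exactly the behaviour you ran into, so test it against your version:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<properties>
  <parsers>
    <!-- Everything except PDFs: no OCR (standalone images won't be OCRed) -->
    <parser class="org.apache.tika.parser.DefaultParser">
      <parser-exclude class="org.apache.tika.parser.pdf.PDFParser"/>
      <parser-exclude class="org.apache.tika.parser.ocr.TesseractOCRParser"/>
    </parser>
    <!-- PDFs: handled by the PDF parser with OCR on the rendered pages -->
    <parser class="org.apache.tika.parser.pdf.PDFParser">
      <params>
        <param name="ocrStrategy" type="string">ocr_only</param>
      </params>
    </parser>
  </parsers>
</properties>
```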

Is there a way to create an intermediate output from Sphinx extensions?

When Sphinx does a reST-to-HTML conversion, is there a way to see an intermediate format after the extensions have been processed?
I am looking for an intermediate .rst file generated after the Sphinx extensions have run.
Any ideas?
Take a look at the "ReST Builder" extension: https://pythonhosted.org/sphinxcontrib-restbuilder/.
There's not much to say; the extension takes reST as input and outputs ...drumroll... reST!
Quote:
This extension is in particular useful to use in combination with the autodoc extension. In this combination, autodoc generates the documentation based on docstrings, and restbuilder outputs the result as reStructuredText (.rst) files. The resulting files can be fed to any reST parser, for example, they can be automatically uploaded to the GitHub wiki of a project.
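Enabling it should amount to a conf.py entry plus a build with the rst builder (a sketch, assuming the package was installed with pip install sphinxcontrib-restbuilder):

```python
# conf.py -- register the restbuilder extension alongside your others
extensions = [
    "sphinxcontrib.restbuilder",
    # ... e.g. "sphinx.ext.autodoc"
]
```

Then run sphinx-build -b rst <sourcedir> <outdir> and inspect the generated .rst files; they reflect the document tree after your extensions have run.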

dojo js library + jsdoc -> how to document the code?

I'd love to know how the guys developing Dojo create their documentation.
From the nightly builds you can get the uncompressed .js files with all the comments, and I'm sure there is some kind of documenting script that will generate HTML or XML out of them.
I guess they use jsdoc, as it can be found in their utils folder, but I have no idea how to use it. jsdoc-toolkit uses a different /**commenting**/ notation than the original Dojo files.
Thanks for all your help
It's all done with a custom PHP parser and Drupal. If you look in util/docscripts/README and util/jsdoc/INSTALL you can get all the gory details about how to generate the docs.
It's different than jsdoc-toolkit or JSDoc (as you've discovered).
FWIW, I'm using jsdoc-toolkit as it's much easier to generate static HTML and there's lots of documentation about the tags on the google code page.
Also, just to be clear, I don't develop dojo itself. I just use it a lot at work.
There are two parts to the "dojo jsdoc" process. There is a parser, written in PHP, which generates XML and/or JSON of the entirety of the listed namespaces (defined in util/docscripts/modules, so you can add your own namespaces; there are basic usage instructions atop the file generate.php), and a Drupal part called "jsdoc" which installs as a Drupal module/plugin/whatever.
The Drupal aspect of it is just Dojo's basic view of this data. A well-crafted XSLT or something that iterates over the JSON and produces HTML would work just the same, though neither of these is provided by default (would love a contribution!). I shy away from the Drupal bit myself, though it has been running on api.dojotoolkit.org for some time now.
The doc parser is exposed so that you may use its inspection capabilities to write your own custom output as well. I use it to generate the Komodo .cix code completion in a [rather sloppy] PHP file, util/docscripts/makeCix.php, which dumps the information it finds into an XML doc crafted to match that spec. This could be modified to generate any kind of output you choose with a little finagling.
The doc syntax is all defined on the style guideline page:
http://dojotoolkit.org/reference-guide/developer/styleguide.html