We are trying to run a full SHACL validation on a GraphDB 9.1 (free version) repository (as part of a comparative analysis). For this we are trying to use RDF4J (3.0.3), as mentioned at the beginning of the documentation:
http://graphdb.ontotext.com/free/shacl-validation.html linking to https://rdf4j.org/documentation/programming/shacl/
Our problem is that so far we have not found out how to create a ShaclSail for an existing GraphDB repository. The documentation for GraphDB in RDF4J (http://graphdb.ontotext.com/documentation/enterprise/using-graphdb-with-the-rdf4j-api.html) focuses on accessing, creating and modifying repositories, while ShaclSail needs a MemoryStore. Working with a local NativeStore, as is possible with RDF4J's own repositories, was unsuccessful as well.
Is there currently a way to (somewhat natively) perform a full SHACL validation on a GraphDB repository using RDF4J? I didn't include code samples because most of what we tried closely follows the docs or consists of trivial extensions. Thanks for the help.
As it is written in the GraphDB documentation: "A repository with SHACL validation must be created from scratch, i.e., Create new."
One option is to export the data from the old repository, create a new repository with SHACL validation enabled, and import the data into it, following the guide in the GraphDB documentation.
Here is a link to the SHACL validation guide:
http://graphdb.ontotext.com/documentation/enterprise/shacl-validation.html
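For what it's worth, a standalone validation of an exported dataset is also possible with plain RDF4J by wrapping a MemoryStore in a ShaclSail. A minimal sketch, assuming RDF4J 3.0.x and local Turtle files shapes.ttl and export.ttl (both file names are placeholders for your own shapes and export):

import java.io.File;

import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.sail.SailRepository;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.sail.memory.MemoryStore;
import org.eclipse.rdf4j.sail.shacl.ShaclSail;

public class ShaclValidationSketch {
    public static void main(String[] args) throws Exception {
        // Wrap a MemoryStore in a ShaclSail; validation runs on commit.
        SailRepository repo = new SailRepository(new ShaclSail(new MemoryStore()));
        repo.init();
        try (RepositoryConnection conn = repo.getConnection()) {
            // Shapes must go into the dedicated shapes graph.
            conn.begin();
            conn.add(new File("shapes.ttl"), "", RDFFormat.TURTLE, RDF4J.SHACL_SHAPE_GRAPH);
            conn.commit();
            // Load the data exported from the GraphDB repository.
            conn.begin();
            conn.add(new File("export.ttl"), "", RDFFormat.TURTLE);
            conn.commit(); // fails with a RepositoryException caused by a ShaclSailValidationException if the data is invalid
        } finally {
            repo.shutDown();
        }
    }
}

Note that this validates a snapshot of the data rather than the live GraphDB repository, which may be acceptable for a one-off comparative analysis; for continuous validation the data has to live in a SHACL-enabled repository itself.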
When comparing different RPM files, I've noticed that not all of them expose the same header tags. So there must be some logic that activates/deactivates creation of some of them.
One example is the build time and host. I've stumbled upon two RPM specs. Neither mentions anything that looks at all like a specification or switch to provide the information. Still, one of them is generated with Build Time and Build Host fields, the other isn't (I am not permitted to post either one).
I am aware of the new _buildhost macro. The RPM version used to generate both packages is too old to use it. Both packages are created from a list of Sources, as far as I can see. The one that doesn't display the build information is built using CMake/CPack, the other uses rpmbuild directly; that is the only serious difference I know of.
Both are defined as Group: AddOn. So far, I haven't found any remotely definitive resources about which groups are valid, or what they mean. The only thing I found was the list of deprecated groups in Fedora. I'd be more interested in a list of supported ones, but haven't been successful so far.
Resources I've found until now (omitting the pointless ones):
Max RPM Package Building Page, RedHat blog-ish tutorial, The RPM build guide, The actual RPM tags documentation, The RPM packaging guide
Unfortunately, none of the above provide the information I'm looking for.
"Give a man a fish" question: How can I suppress creation of Build Time or Build Host in rpm 4.11, be it in spec syntax or in usage of rpmbuild?
"Teach a man how to fish" question: Is there any documentation about what header tags get created with which settings?
You can use Mock for building RPMs (recommended anyway) and set config_opts['hostname'] = 'my.own.hostname'.
Mock will call sethostname() in the chroot.
AFAIK this is the only way to do it.
rpmbuild should honor SOURCE_DATE_EPOCH, though I have never used it.
You can set the environment variable using:
config_opts['environment']['SOURCE_DATE_EPOCH'] = 'foo'
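For example, both settings together in your mock configuration (the file location, e.g. /etc/mock/site-defaults.cfg or a per-chroot config, and the values are just illustrations):

config_opts['hostname'] = 'my.own.hostname'
config_opts['environment']['SOURCE_DATE_EPOCH'] = '1577836800'  # any fixed epoch timestamp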
After generating the .NET C# server stub, the documentation is not very verbose about how to use it:
You need to implement the logic yourself to handle whatever work the API needs to do. Once the implementation is ready, you can deploy the API locally or on your server. See the README.md file in the downloaded archive to get started.
Is there any tutorial on how to use the code? I would like to use inheritance to avoid changing the generated code. But the documentation talks about just ignoring some generated files, and the Swagger support told me to just "migrate" the changes on every regeneration. That is possible, but I hoped to be able to leave generated files untouched. Am I wrong here; is there no practical need for this? I would like to use the server stub in a continuous integration environment.
One option you have is to customize the templates.
Clone the swagger-codegen repository.
Assuming you are using the latest stable v2 version of the code generation tool, the master branch is fine. Otherwise, check out the tag for the tool version you are using.
In Windows Explorer, open swagger-codegen\modules\swagger-codegen\src\main\resources\ and copy the aspnetcore directory. Paste it into your own source code repository.
The next time you run the codegen tool, provide the -t argument:
java -jar swagger-codegen-cli.jar generate
    -i <your Open API spec URL/file>
    -l aspnetcore
    -o <outputdir>
    -t <relative path to your>\aspnetcore
    ... other args as needed
Now you can modify those templates with custom code. For example, you could have an external library with a new base controller class that provides some generic business logic. Then you could modify the controller.mustache file to reference your base class. Just one of many examples. Add your custom templates to your source control for continuous integration.
Caveats: There is a controller.mustache file directly in aspnetcore and another in aspnetcore\2.1. In studying the source code, I see that the 2.1 folder is used for any version of ASP.NET Core other than 2.0. I'm new to this tool myself and have not fully figured out how to exploit it; the utility generates source code that will not build for me out of the box. It does not generate the security classes, but it does generate code that tries to use those security classes. Thus I'm having to comment out the security code in the templates.
I found in the RDF4J documentation here http://docs.rdf4j.org/custom-sparql-functions/ that it supports creating custom functions with Java.
I attempted to implement the palindrome example and run the example query against GraphDB. I am using the RDF4J Java libraries to execute my queries against GraphDB. When executing the query, I get no obvious errors, but no results are returned.
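For reference, my attempt follows the documented pattern; roughly (the package name and function namespace are placeholders of my own):

package com.example.functions;

import org.eclipse.rdf4j.model.Literal;
import org.eclipse.rdf4j.model.Value;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.query.algebra.evaluation.ValueExprEvaluationException;
import org.eclipse.rdf4j.query.algebra.evaluation.function.Function;

public class PalindromeFunction implements Function {

    // Placeholder namespace; the SPARQL query calls the function by this IRI.
    public static final String NAMESPACE = "http://example.org/custom-function/";

    @Override
    public String getURI() {
        return NAMESPACE + "palindrome";
    }

    @Override
    public Value evaluate(ValueFactory vf, Value... args) throws ValueExprEvaluationException {
        if (args.length != 1 || !(args[0] instanceof Literal)) {
            throw new ValueExprEvaluationException("palindrome() requires a single literal argument");
        }
        String label = ((Literal) args[0]).getLabel();
        String reversed = new StringBuilder(label).reverse().toString();
        return vf.createLiteral(label.equalsIgnoreCase(reversed));
    }
}

This goes together with an SPI registration file META-INF/services/org.eclipse.rdf4j.query.algebra.evaluation.function.Function containing the fully qualified class name, as the RDF4J documentation describes.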
Does GraphDB support running custom RDF4J functions? The documentation for RDF4J custom functions states you must place the JAR on your classpath and it will work. What classpath? Can I build it into my project that is executing queries via RDF4J libraries or do I place the custom function JAR on the classpath of GraphDB before I start the server?
In case GraphDB comes as a platform-dependent embedded launcher (e.g. .exe, .deb, etc.), you'll need an additional step.
Under the installation folder (say GraphDBFree), there is a folder named app. First, place the jar with the custom function in the app/lib folder, then edit the app/GraphDBFree.cfg file by adding the jar to the app.classpath= entry declared there.
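For illustration only (the jar name is made up, and the separator and path format should match whatever the existing entry in your cfg file already uses), the entry ends up looking something like:

app.classpath=<existing entries>:lib/my-custom-functions.jar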
I have a Tomcat instance running an openrdf-sesame environment. By default, the location of my openrdf-sesame database configuration and data is %APPDATA%\aduna. I am trying to change where this data is saved to something custom like C:\aduna. I have looked at documentation online, but it does not specify whether this is defined in a configuration file somewhere or is a hard-coded location. I also saw that RDF4J is the new replacement for openrdf-sesame? I wouldn't mind upgrading if I could thereby specify where to save my data. Any ideas?
OpenRDF Sesame is no longer maintained, it has been succeeded by the Eclipse RDF4J project. There is a migration guide available to help you figure out what to do when updating your project.
Although the Sesame project is no longer maintained, a documentation archive is available, and of course a lot of the RDF4J documentation also applies to Sesame, albeit with slightly different package names.
As for your specific question: the directory Sesame Server uses is determined by the system property info.aduna.platform.appdata.basedir. Set this property (at JVM startup, using the -D flag) to the desired location. See the archived documentation about application directory configuration for more details. As an aside: note that in RDF4J this property has been renamed (to org.eclipse.rdf4j.appdata.basedir), so if you upgrade to RDF4J, be sure to change this.
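For example, for a Tomcat instance on Windows you could add the property via bin\setenv.bat (a standard Tomcat convention; the C:\aduna target comes from your question):

set "CATALINA_OPTS=%CATALINA_OPTS% -Dinfo.aduna.platform.appdata.basedir=C:\aduna"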
I am planning to integrate ClearCase UCM (under a dynamic view) with Maven.
1) I found that Maven SCM is only partially implemented for ClearCase. Are there still issues with this? What does "partially implemented SCM" mean?
2) How compatible is ClearCase with Maven?
3) Are there any issues or limitations with integrating these two tools?
4) The Maven docs say that it is not possible to use SCM plugin features like creating tags (applying labels), creating changelogs, and so on.
5) Where can I find good documentation on integrating Maven with ClearCase? The Apache site provides some, but it is not very clear for beginners.
There is very little documentation on Maven with UCM ClearCase, and there are limitations like the ones described in SCM Implementation: ClearCase:
The ClearCase SCM provider uses snapshot views.
(so no dynamic view, for instance; but you mention tags, which should be implemented as UCM baselines)
As no SCM metadata can be accessed, it is not possible to use SCM plugin features like creating tags (applying labels), creating changelogs, and so on.
Another limitation, in this thread:
Hi. I have been able to integrate Hudson and ClearCase without too much trouble using a Windows machine. Downloading source code from a given baseline or stream is fine.
The problem comes if you try to use some Ant tasks to check out a pom file, make some changes (like updating some version numbers) and then check in the modified pom file before starting the build.
No matter if I use an Ant script with ClearCase tasks, or internal Java classes, or even a maven-release-plugin for Hudson that tries to do this kind of job, I always end up with the following error:
cleartool: Error: Type manager "_xml2" failed create_version operation
when trying to check in an XML file.
Which kind of integration are you looking for?
If it's about identifying and documenting the changes between UCM baselines, streams, activities and components, you can use CompBL, a complementary add-on for ClearCase.
It's an easy-to-install yet very powerful add-on.
Cheers
This is an error thrown by ClearCase while checking in XML files, when the XML file exceeds 1000 characters.
Try changing the XML file's element type; this resolves the issue: cleartool chtype file file.xml