How to install the Jena Semantic Web Framework in the Play Framework

I put the Jena jar files in the lib folder and now see this message:
A JPA error occurred (Cannot start a JPA manager without a
properly configured database): No datasource configured
What am I doing wrong?
I found the answer; this was a problem in Play. For some reason a directive from the javax module had been inserted in front of the class declaration. I don't know why that happened, but I simply removed it and everything worked.
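For context, the message itself means Play's JPA manager started without any datasource configured. If you do want JPA alongside Jena, a minimal in-memory datasource in conf/application.conf (Play 1.x syntax; swap in your real database settings) is a sketch like:

```properties
# Minimal Play 1.x datasource: the built-in H2 in-memory database.
db=mem
```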

This error is not related to Jena: if you don't choose a model (dataset) when you execute a query, you get a different message, "No dataset description for query", together with a com.hp.hpl.jena.query.QueryExecException. But if you chose Jena as a datasource in Play, you may get your message (sorry, I don't know much about Play).
What operations are you doing with Jena?

I don't know Jena well, but it seems that persistent ontologies can be stored in a database, which would mean Jena needs a database connection.
Is this error coming from Jena or from Play?
What are you trying to do in your code before you get this error?
If Jena requires some configuration and resource creation before use, consider writing a small Jena Play plugin to initialize your Jena context...
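To sketch that plugin idea in Play 1.x style (the class name, package, and in-memory model are illustrative, not from the original question; the Jena jars must be in lib/):

```java
// Hypothetical Play 1.x plugin; register it in conf/play.plugins,
// e.g. "100:plugins.JenaPlugin".
package plugins;

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import play.PlayPlugin;

public class JenaPlugin extends PlayPlugin {

    // Shared in-memory model; no database connection is needed for this.
    public static Model model;

    @Override
    public void onApplicationStart() {
        model = ModelFactory.createDefaultModel();
    }
}
```

An in-memory model like this avoids the JPA/datasource machinery entirely; a database only comes into play if you use Jena's persistent storage backends.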

Related

How can I get more results from AnzoGraph?

I am using AnzoGraph with SPARQL over HTTP via RDFlib. I do not specify any limit in my query, yet I only receive 1000 solutions. The same seems to happen in the web interface.
If I fire the same query on other triple stores with the same data, I do get all results.
Moreover, if I run the same query using their command-line tool on the same machine as the database, I do get all results (millions); maybe it uses a different protocol with the local database. If I specify the hostname and port explicitly on the command line, I get 1030 results...
Is there a way to specify that I want all results from AnzoGraph over HTTP?
I have found the service_graph_rowset_limit setting and changed its value to 100000000 in both config/settings_standalone.conf and config/settings.conf (and restarted the database), but to no avail.
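As a generic client-side workaround while a server-side cap is in place, you can page through the results with LIMIT/OFFSET. A minimal sketch of the query rewriting (names are illustrative, not AnzoGraph or RDFlib API; paging is only deterministic if the base query has an ORDER BY):

```java
// Hypothetical helper that appends LIMIT/OFFSET paging clauses to a
// SPARQL SELECT query, one page per request.
public class SparqlPaging {

    static String pagedQuery(String baseQuery, int pageSize, int pageIndex) {
        // OFFSET advances by one page per request; add an ORDER BY to the
        // base query yourself if you need a stable ordering across pages.
        return baseQuery + "\nLIMIT " + pageSize + "\nOFFSET " + ((long) pageSize * pageIndex);
    }

    public static void main(String[] args) {
        System.out.println(pagedQuery("SELECT ?s WHERE { ?s ?p ?o }", 1000, 2));
    }
}
```

Issue one query per page until a page comes back with fewer than pageSize solutions.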
Let me start by thanking you for pointing this issue out.
You have identified a regression of a fix that was intended to protect the web UI from freezing on unbounded result sets, but it affected regular SPARQL endpoint users as well.
Our Anzo customers do not see this issue, as they use the internal gRPC API directly.
We have produced a fix that will be in our upcoming AnzoGraph 2.4.0 and in our upcoming 2.3.2 patch-release set of images.
Older releases will receive this fix as well (once we have a shipment vehicle).
If it is urgent for you, I can provide you with a point fix (a root.war file).
What exact image are you using?
Best - Frank

How do I direct the Quarkus Hibernate SQL log to a separate file handler?

I have this in my application.properties. The SQL file is created, but nothing goes into it, and everything still shows up in the console:
quarkus.log.handler.console."SqlConsoleHandler".enable=true
quarkus.log.handler.file."SqlFileHandler".enable=true
quarkus.log.handler.file."SqlFileHandler".path=hibernate.sql
quarkus.log.handler.file."SqlFileHandler".rotation.max-file-size=500M
quarkus.log.handler.file."SqlFileHandler".rotation.max-backup-index=200
quarkus.log.handler.file."SqlFileHandler".rotation.file-suffix=.yyyy-MM-dd-hh-mm
quarkus.log.handler.file."SqlFileHandler".rotation.rotate-on-boot=true
quarkus.log.category."hibernate".use-parent-handlers=false
quarkus.log.category."hibernate".level=DEBUG
quarkus.log.category."hibernate".handlers=SqlConsoleHandler,SqlFileHandler
quarkus.hibernate-orm.log.sql=true
quarkus.hibernate-orm.log.bind-param=true
My bet would be that hibernate is not the correct category; you need to use the full category name of the logger.
Have you tried org.hibernate? It will redirect all the Hibernate logs, though.
Apparently, org.hibernate.SQL is what you are looking for if you only want to push the SQL statements to a specific file.
This article might be useful: https://thorben-janssen.com/hibernate-logging-guide/ .
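For example, assuming org.hibernate.SQL is indeed the category you want and keeping your handler names, the category lines from the question would become:

```properties
quarkus.log.category."org.hibernate.SQL".use-parent-handlers=false
quarkus.log.category."org.hibernate.SQL".level=DEBUG
quarkus.log.category."org.hibernate.SQL".handlers=SqlConsoleHandler,SqlFileHandler
```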

Conflicting class versions in Apache Flink

I have 2 applications. The first is a Play! Framework (v2.5.1) application. This application's job is to read the aggregated data. The second is an Apache Flink (v1.1.2) application. This application's job is to write the aggregated data.
The error
java.lang.NoSuchMethodError: com.typesafe.config.ConfigFactory.defaultApplication(Lcom/typesafe/config/ConfigParseOptions;)Lcom/typesafe/config/Config;
This is caused by Play & Flink using different versions of com.typesafe.config (1.3.0 vs 1.2.1).
I've tried
I've tried using shading, but there are further complications when I get to using Akka. Akka also has conflicting versions, so I shade both config and Akka, which leads to a configuration error in Akka. If I duplicate the configuration to the correct path, then the ActorSystem fails to initialize because of an incorrect class version.
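For readers unfamiliar with the mechanics, a minimal relocation of the conflicting package with the maven-shade-plugin looks roughly like this (assuming a Maven build; the shaded package name is illustrative, and sbt-assembly has an equivalent shade-rules mechanism):

```xml
<!-- Hypothetical maven-shade-plugin relocation: renames the bundled
     com.typesafe.config classes so they no longer clash with the other
     version on the classpath. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <pattern>com.typesafe.config</pattern>
            <shadedPattern>shaded.com.typesafe.config</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```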
Research
I don't know this area well, but it seems that a number of JVM servers handle this with parent-last (child-first) class loading. Is that possible in Flink?
There may be other, simple solutions that I've not tried as well. If there are some of those, let me know, and I'll gladly try them.
Thanks for your help!

Access LinkedMDB offline [duplicate]

I want to query the Linked Movie Database at linkedmdb.org locally.
Is there an RDF or OWL version of it that I can download and query locally instead of remotely?
I tried to query it and got the following error:
org.openjena.riot.RiotException: <E:\Applications\linkedmdb-latest-dump\linkedmdb-latest-dump.nt> Code: 11/LOWERCASE_PREFERRED in SCHEME: lowercase is preferred in this component
org.openjena.riot.system.IRIResolver.exceptions(IRIResolver.java:256)
org.openjena.riot.system.IRIResolver.access$100(IRIResolver.java:24)
org.openjena.riot.system.IRIResolver$IRIResolverNormal.resolveToString(IRIResolver.java:380)
org.openjena.riot.system.IRIResolver.resolveGlobalToString(IRIResolver.java:78)
org.openjena.riot.system.JenaReaderRIOT.readImpl(JenaReaderRIOT.java:121)
org.openjena.riot.system.JenaReaderRIOT.read(JenaReaderRIOT.java:79)
com.hp.hpl.jena.rdf.model.impl.ModelCom.read(ModelCom.java:226)
com.hp.hpl.jena.util.FileManager.readModelWorker(FileManager.java:395)
com.hp.hpl.jena.util.FileManager.loadModelWorker(FileManager.java:299)
com.hp.hpl.jena.util.FileManager.loadModel(FileManager.java:250)
ServletExample.runQuery(ServletExample.java:92)
ServletExample.doGet(ServletExample.java:62)
javax.servlet.http.HttpServlet.service(HttpServlet.java:627)
javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
There's a claim that there is a download available from this page. I haven't tried it myself, so I don't know whether it's fresh or not.
There is a dump in ntriples format at this address:
http://queens.db.toronto.edu/~oktie/linkedmdb/
If you want to query it, you can load the dump files into a local triple store such as 4store or Jena (using its relational database support). Other libraries and tools are available, depending on the language you're most familiar with.
If you need more information let me know.
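Building on the suggestion above, loading the dump into Jena's TDB store from the command line might look like this (assuming Apache Jena's CLI tools are installed; directory names are illustrative). Incidentally, the RiotException in the question appears to come from Jena treating the Windows path E:\... as an IRI, whose uppercase drive letter is parsed as an uppercase scheme; loading via the bulk loader sidesteps that.

```shell
# Bulk-load the N-Triples dump into a TDB directory
tdbloader --loc ./linkedmdb-tdb linkedmdb-latest-dump.nt

# Sanity-check query against the local store
tdbquery --loc ./linkedmdb-tdb 'SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }'
```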

Mapping for component xxx not found in ORM app

I'm doing some TDD with a ColdFusion ORM application, so in the Application.cfc in my tests directory I'm setting dbcreate="update" so that the tests will create the database tables. Every time I change a model's method and re-run my tests, I get the following error:
Mapping for component models.user.User not found.
If I restart the server the error goes away, but that is a terrible workflow, so I'm looking for a better way to fix this problem.
Have you tried dbcreate="dropcreate"?
In my experience, update or dropcreate might fail the first time, but if you run ormReload() again, it might just work.
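A sketch of that setting in the test directory's Application.cfc (standard ColdFusion ORM property names; the rest of the component is illustrative):

```cfc
// tests/Application.cfc (illustrative)
component {
    this.ormEnabled = true;
    this.ormSettings.dbCreate = "dropcreate";
    // After changing a model, call ormReload() (e.g. from a test-suite
    // setup step) instead of restarting the server.
}
```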