I am using the SOH commands (s-get, s-post, s-query) provided by the Apache Jena Fuseki platform, but I always get the output "s-get: command not found". What is wrong here, and how can I make this work?
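A "command not found" error usually means the shell cannot find the scripts, not that anything is wrong with Fuseki itself. The SOH (SPARQL over HTTP) scripts ship in the bin/ directory of the Fuseki distribution and are written in Ruby, so ruby must be installed as well. A minimal sketch, assuming a typical install location and the default dataset /ds on port 3030:

cd /path/to/apache-jena-fuseki   # assumed install directory
chmod +x bin/s-*                 # make sure the scripts are executable
export PATH="$PWD/bin:$PATH"     # or invoke them as bin/s-get, bin/s-query, ...
s-get http://localhost:3030/ds/data default

If s-get then runs but fails to connect, the problem is the server URL rather than the PATH.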
Given the SPARQL service available at http://agrovoc.uniroma2.it/sparql, is it possible to use it as a reconciliation service with the OpenRefine RDF extension?
So far I've tried, but I got the following error:
12:06:51.461 [..ctReconciliationService] error reconciling 'Lenteja' (3153ms)
org.shaded.apache.jena.query.QueryException: Endpoint returned Content-Type: text/html which is not recognized for SELECT queries
I think maybe I have the wrong endpoint.
My OpenRefine version is 3.3.
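The error says the endpoint answered a SELECT query with text/html rather than a SPARQL results format, which usually means the URL points at a human-facing query form instead of the raw endpoint. A quick way to check from the command line (a sketch, assuming curl is available):

# Send a trivial SELECT and print only the Content-Type of the response.
# Anything other than application/sparql-results+json (or +xml) suggests
# this URL is not the actual SPARQL endpoint.
curl -s -o /dev/null -w '%{content_type}\n' \
  -H 'Accept: application/sparql-results+json' \
  --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 1' \
  http://agrovoc.uniroma2.it/sparql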
Recently I configured a reverse proxy (Apache 2 on Ubuntu 16) in front of Fuseki 2, which runs as a web application under Tomcat 7. However, one problem still confuses me: I only get results for simple queries that don't need a long processing time (less than 10 minutes in Chrome, exactly). Otherwise I get the message "unable to get a response from endpoint". The cutoff is strictly 10 minutes. I have increased the timeout for Fuseki 2 as well as for the reverse proxy, but this doesn't work for me.
I have traced error.log in Apache 2 and catalina.out in Tomcat and found no errors. Besides, Tomcat keeps running when the "no response" error appears in Chrome.
Does anyone have any idea about this? Any help or hint would be appreciated.
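For reference, this is a minimal sketch of where such timeouts usually live on the Apache side; the file name and the 30-minute value are assumptions, and the Tomcat connector timeout and Fuseki's own query timeout may need raising to match:

# Raise Apache's general and proxy timeouts (values in seconds).
sudo tee /etc/apache2/conf-available/long-queries.conf >/dev/null <<'EOF'
Timeout 1800
ProxyTimeout 1800
EOF
sudo a2enconf long-queries
sudo systemctl reload apache2

# Alternatively, a per-route timeout on the proxy rule itself
# (hypothetical path and backend):
#   ProxyPass /fuseki http://localhost:8080/fuseki timeout=1800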
I am trying to run a federated query using my local GraphDB database and SPARQL endpoint. For some reason I always get errors or empty columns in my result table.
I tried several SPARQL endpoints that work perfectly fine on FactForge. My code and the error message are below; can you help me get it working?
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX bif: <bif:>

SELECT ?duration1 ?duration2 {
  VALUES (?start ?end)
    { ("2011-02-02T14:45:14"^^xsd:dateTime "2011-02-04T14:45:13"^^xsd:dateTime) }
  SERVICE <http://dbpedia.org/sparql>   #-- Virtuoso
    { BIND ((?end - ?start)/86400.0 AS ?duration1) }
}
Error 500: error
Query evaluation error: org.eclipse.rdf4j.query.QueryEvaluationException: Virtuoso 22023 Error Can not load own service description
I am using the free version of GraphDB. My OS is macOS Mojave 10.14.5. I installed GraphDB directly from the website, and I have never used RDF4J before. After this error I also installed a local Virtuoso SPARQL endpoint, and I can run a federated query from that endpoint using my local GraphDB as a service. But I cannot run a federated query from my local GraphDB endpoint using my local Virtuoso endpoint as a service, since it gives the same error as above.
The root cause of the above error is "Virtuoso 22023 Error Can not load own service description", which is handled as an unknown runtime exception, resulting in HTTP code 500.
I suspect this was a temporary glitch in the DBpedia service, since the same query currently works from GraphDB, tested with the latest GraphDB 9.0 version and an older GraphDB 8.x version via the FactForge service.
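One way to support the "temporary glitch" diagnosis is to run the remote half of the query directly against DBpedia, bypassing GraphDB entirely. This is only a sanity check, not part of the original setup:

# If this returns a JSON result, the endpoint itself is healthy and the
# earlier failure was on the federation path at the time.
curl -s -H 'Accept: application/sparql-results+json' \
  --data-urlencode 'query=PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
SELECT (("2011-02-04T14:45:13"^^xsd:dateTime - "2011-02-02T14:45:14"^^xsd:dateTime)/86400.0 AS ?duration1)
WHERE {}' \
  http://dbpedia.org/sparql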
I am running a query from the live demo (http://biohackathon.org/d3sparql/). The SPARQL endpoint is changed to a localhost endpoint built with the Apache Jena Fuseki server. The RDF file is uploaded and stored on the server, and the SPARQL endpoint is set. The setup for making the RDF file accessible via HTTP basically succeeded, and changes could be tracked through the terminal.
So I am trying to use D3SPARQL to visualise the data. All I have done so far is change the endpoint and paste my SPARQL query into the D3SPARQL live demo page. However, when I run the query, I get this error:
[2017-02-25 00:05:30] Fuseki INFO [108] GET /ds :: 'data' :: <none> ? query=PREFIX%20up%3A%20%3Chttp%3A%2F%2Fpurl.uniprot.org%2Fcore%2F%3E%0APREFIX%20tax%3A%20%3Chttp%3A%2F%2Fpurl.uniprot.org%2Ftaxonomy%2F%3E%0ASELECT%20%3Froot_name%20%3Fparent_name%20%3Fchild_name%0AFROM%20%3Chttp%3A%2F%2Ftogogenome.org%2Fgraph%2Funiprot%3E%0AWHERE%0A%7B%0A%20%20VALUES%20%3Froot_name%20%7B%20%22Tardigrada%22%20%7D%0A%20%20%3Froot%20up%3AscientificName%20%3Froot_name%20.%0A%20%20%3Fchild%20rdfs%3AsubClassOf%2B%20%3Froot%20.%0A%20%20%3Fchild%20rdfs%3AsubClassOf%20%3Fparent%20.%0A%20%20%3Fchild%20up%3AscientificName%20%3Fchild_name%20.%0A%20%20%3Fparent%20up%3AscientificName%20%3Fparent_name%20.%0A%7D%0A%20%20%20%20%20%20
[2017-02-25 00:05:30] Fuseki INFO [108] 400 Neither ?default nor ?graph in the query string of the request (0 ms)
Has anybody had the same experience, or any advice?
Thank you in advance for your help.
The request is not directed to the query endpoint; usually that's /ds/sparql or /ds/query.
It is going to /ds/data, which is the default for SPARQL Graph Store Protocol (GSP) operations.
The error comes from the GSP handler.
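In other words, pointing the demo at the dataset's query service instead of /ds/data should clear the 400. A minimal check from the command line, assuming Fuseki's default standalone port 3030 and the dataset name /ds from the log:

# POST a trivial SELECT to the query endpoint; a JSON result confirms the
# service path, and the same URL can then be used as the d3sparql endpoint.
curl -s --data-urlencode 'query=SELECT * WHERE { ?s ?p ?o } LIMIT 1' \
  http://localhost:3030/ds/sparql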
I'm trying to index a site with Apache Nutch 1.4, and when I run the command below, it fails with "java.io.IOException: Job failed!":
bin/nutch solrindex http://localhost:8983/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/*
I installed "Tomca6" and "Apache Solr 3.5.0" to work with Nutch but unfortunately is not working
simulation
root@debian:/usr/share/nutch/runtime/local$ bin/nutch solrindex http://localhost:8983/solr/ crawl/crawldb -linkdb crawl/linkdb crawl/segments/*
SolrIndexer: starting at 2012-03-28 18:45:25
Adding 48 documents
java.io.IOException: Job failed!
root@debian:/usr/share/nutch/runtime/local$
Can someone help me please?
This error often occurs if the mapping of Nutch result fields onto Solr fields is incorrect or incomplete. This results in the "update" action being rejected by the Solr server. Unfortunately, at some point in the call chain this error is converted into an "IO error", which is a little misleading. My recommendation is to access the web console of the Solr server (which is accessible at the same URL as for the submission of links, e.g. in this case http://some.solr.server:8983/solr/) and go to the logging tab. Errors concerning the mapping will show up there!
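As a concrete check of that mapping: in a stock Nutch 1.4 install the field mapping lives in conf/solrindex-mapping.xml, and every destination field it names must exist in Solr's schema.xml or the update is rejected. A hedged comparison, with paths assumed from the question's install layout:

# Fields Nutch will send to Solr...
grep '<field' /usr/share/nutch/runtime/local/conf/solrindex-mapping.xml
# ...versus fields Solr's schema actually defines (Solr home is assumed).
grep '<field name=' /path/to/solr/conf/schema.xml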
It looks like Solr is not configured right. (Please also ensure that the input linkdb, crawldb, and segments are present at the locations you pass on the command line; see the quick check after the links below.)
Read:
Setting up Solr 1.4 with Apache Tomcat 6.X
Nutch 1.3 and Solr Integration.
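For the parenthetical point above, a trivial pre-flight check that the inputs actually exist before re-running the indexer (paths taken from the question):

# Each of these must exist and be readable by the user running bin/nutch.
ls -d crawl/crawldb crawl/linkdb crawl/segments/*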