How can I reconcile the AGROVOC SPARQL endpoint with OpenRefine?

Given that this SPARQL service is available at http://agrovoc.uniroma2.it/sparql, is it possible to use it as a reconciliation service with the OpenRefine RDF extension?
So far I've tried, but I get the following error:
12:06:51.461 [..ctReconciliationService] error reconciling 'Lenteja' (3153ms)
org.shaded.apache.jena.query.QueryException: Endpoint returned Content-Type: text/html which is not recognized for SELECT queries
I think maybe I got the wrong endpoint.
The OpenRefine version is 3.3.
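One way to narrow this down is to query the endpoint directly and check the Content-Type it returns for a SELECT query. The following is a minimal sketch, assuming the endpoint follows the standard SPARQL 1.1 Protocol and using the Python requests library; it is not part of OpenRefine or the RDF extension:

# Minimal sketch: send a plain SPARQL SELECT query to the endpoint from the
# question and report the Content-Type of the response. Assumes the endpoint
# accepts the standard "query" parameter of the SPARQL 1.1 Protocol.
import requests

ENDPOINT = "http://agrovoc.uniroma2.it/sparql"  # endpoint from the question
QUERY = "SELECT * WHERE { ?s ?p ?o } LIMIT 1"

response = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+xml"},
    timeout=30,
)

print("HTTP status:", response.status_code)
print("Content-Type:", response.headers.get("Content-Type"))

If this prints text/html, the URL is serving an HTML page (or an error page) rather than SPARQL results, which matches the error OpenRefine reports; in that case the reconciliation service needs to be pointed at a different or more specific endpoint URL.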

Related

Which error codes to use for SFTP errors in Mulesoft 3

I have developed an application in Mule 3 to transform data and then upload it as a file to an SFTP location. I have handled all the common errors, such as the HTTP 400 series and 500, but what is the proper status code for when SFTP fails, for example on file upload, connection, or permission errors?
I have searched a lot on the internet, and the more I search the more I get lost.
Does anyone have experience with this?
Thanks
If you are asking for a table that maps error codes between SFTP and HTTP, there is no standard for it: they are completely different protocols. You have to define your own mapping. Most of the errors will probably map to 5xx in HTTP, with authentication errors probably mapping to 403.
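As a purely illustrative sketch of such a self-defined mapping (the SFTP error-type names and the chosen HTTP codes below are assumptions, not an official table), it could look like this:

# Hypothetical mapping from SFTP connector error types to HTTP status codes.
# The error-type names and the chosen codes are illustrative only; adjust them
# to the errors your connector version actually raises.
SFTP_TO_HTTP = {
    "SFTP:ACCESS_DENIED": 403,        # permission problems -> Forbidden
    "SFTP:CONNECTIVITY": 503,         # cannot reach the SFTP server -> Service Unavailable
    "SFTP:FILE_ALREADY_EXISTS": 409,  # write conflicts -> Conflict
    "SFTP:ILLEGAL_PATH": 500,         # unexpected path problems -> Internal Server Error
}

def http_status_for(sftp_error: str) -> int:
    """Return the HTTP status chosen for an SFTP error, defaulting to 500."""
    return SFTP_TO_HTTP.get(sftp_error, 500)

print(http_status_for("SFTP:ACCESS_DENIED"))  # 403
print(http_status_for("SFTP:UNKNOWN"))        # 500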
I'm not sure which connector version you use, but if you open the documentation of the SFTP connector, for example https://docs.mulesoft.com/sftp-connector/1.4/sftp-documentation, you can see that it lists the errors each operation can throw; the copy operation, for instance, has its own set of possible errors.
You should base your error-handling logic on those errors. The HTTP connector also throws such errors, but in the HTTP namespace. If needed, you can remap errors to a different, new namespace and implement your logic based on the remapped errors.

SPARQL over HTTP not executing

I am using the SOH (SPARQL over HTTP) commands provided by the Apache Jena Fuseki platform, such as s-get, s-post, and s-query, but I always get the output s-get: command not found. What is wrong in this case, and how can I make this work?

GraphDB SPARQL cannot make a federated query

I am trying to make a federated query using my local GraphDB database and SPARQL endpoint. For some reason I always get errors or empty columns in my result table.
I tried several SPARQL endpoints which work perfectly fine on FactForge. My code and error message are below; can you help me get it working?
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX bif: <bif:>

SELECT ?duration1 ?duration2 {
  VALUES (?start ?end)
    { ("2011-02-02T14:45:14"^^xsd:dateTime "2011-02-04T14:45:13"^^xsd:dateTime) }
  SERVICE <http://dbpedia.org/sparql>   #-- Virtuoso
    { BIND ((?end - ?start)/86400.0 AS ?duration1) }
}
Error 500: error
Query evaluation error: org.eclipse.rdf4j.query.QueryEvaluationException: Virtuoso 22023 Error Can not load own service description
I am using the free version of GraphDB. My OS is macOS Mojave 10.14.5. I installed GraphDB directly from the website, and I have never used RDF4J before. After this error, I installed a local Virtuoso SPARQL endpoint, and from that endpoint I can run a federated query using my local GraphDB as a service. But I cannot run a federated query from my local GraphDB endpoint using my local Virtuoso endpoint as a service, since it gives the same error as above.
The root cause of the above error is "Virtuoso 22023 Error Can not load own service description", which is handled as an unknown runtime exception resulting in HTTP code 500.
I suspect this was a temporary glitch in the DBpedia service, since the same query currently works from GraphDB; it was tested with the latest GraphDB 9.0 version and an older GraphDB 8.x version via the FactForge service.

Query and graph are not found: Fuseki INFO [108] 400 error

I am running a query from the live demo (http://biohackathon.org/d3sparql/). The SPARQL endpoint is changed to a localhost endpoint built using the Apache Jena Fuseki server. The RDF file is uploaded and stored on the server, and the SPARQL endpoint is set. The setup for making the RDF file accessible via HTTP basically works, and the changes can be tracked through the terminal.
So, I am trying to use D3Sparkle to visualise the data. All I have done so far is change the endpoint and put my SPARQL query into the D3Sparkle live demo page. However, when I try running the query, I get this error:
[2017-02-25 00:05:30] Fuseki INFO [108] GET /ds :: 'data' :: <none> ? query=PREFIX%20up%3A%20%3Chttp%3A%2F%2Fpurl.uniprot.org%2Fcore%2F%3E%0APREFIX%20tax%3A%20%3Chttp%3A%2F%2Fpurl.uniprot.org%2Ftaxonomy%2F%3E%0ASELECT%20%3Froot_name%20%3Fparent_name%20%3Fchild_name%0AFROM%20%3Chttp%3A%2F%2Ftogogenome.org%2Fgraph%2Funiprot%3E%0AWHERE%0A%7B%0A%20%20VALUES%20%3Froot_name%20%7B%20%22Tardigrada%22%20%7D%0A%20%20%3Froot%20up%3AscientificName%20%3Froot_name%20.%0A%20%20%3Fchild%20rdfs%3AsubClassOf%2B%20%3Froot%20.%0A%20%20%3Fchild%20rdfs%3AsubClassOf%20%3Fparent%20.%0A%20%20%3Fchild%20up%3AscientificName%20%3Fchild_name%20.%0A%20%20%3Fparent%20up%3AscientificName%20%3Fparent_name%20.%0A%7D%0A%20%20%20%20%20%20
[2017-02-25 00:05:30] Fuseki INFO [108] 400 Neither ?default nor ?graph in the query string of the request (0 ms)
Has anybody had the same experience, or does anyone have any advice?
Thank you in advance for your help.
The request is not directed to the query endpoint, which is usually /ds/sparql or /ds/query.
It is going to /ds/data, which is the default endpoint for SPARQL Graph Store Protocol (GSP) operations.
The error comes from the GSP handler.
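To illustrate the difference, here is a minimal sketch that sends a query to the query endpoint instead of the GSP endpoint. It assumes a local Fuseki on port 3030 with a dataset named /ds (as in the log) and uses the Python requests library; the query itself is a simplified placeholder:

# Sketch: send the SPARQL query to the query endpoint, not the GSP endpoint.
# Assumes a local Fuseki on port 3030 with a dataset named /ds (as in the log).
import requests

QUERY = """
PREFIX up: <http://purl.uniprot.org/core/>
SELECT ?root_name WHERE { ?root up:scientificName ?root_name } LIMIT 5
"""

# Wrong: /ds/data is the Graph Store Protocol endpoint; a bare ?query=...
# request there produces the "Neither ?default nor ?graph" 400 error.
# Right: /ds/sparql (or /ds/query) is the SPARQL query endpoint.
response = requests.get(
    "http://localhost:3030/ds/sparql",
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
print(response.status_code)
print(response.json())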

Multiple file upload to BigQuery

I am trying to upload multiple files simultaneously to Google BigQuery using the command line tool. I got the following error:
BigQuery error in load operation: Could not connect with BigQuery server.
Http response status: 503
Http response content:
Service Unavailable
Is there any way to work around this problem?
How do I upload multiple files simultaneously to Google BigQuery using the command line tool?
Multiple file upload should work (and we use it every day). If you're getting a 503, that indicates something is wrong with the service. One thing you might want to make sure of is that if you're using a * in your command line, you have it quoted so that the shell doesn't expand it before it gets passed to bq.
If you're getting a 503 error, can you retry the command with the flag --apilog=- (this needs to be one of the first parameters), which will dump the interaction with the server to stdout? The problem may be obvious from that log, but if it isn't, can you update your question with the relevant portions of the log? If you're not comfortable posting that information on a public forum, can you e-mail it to me at tigani at google dot com?
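As an illustration of keeping the wildcard out of the shell's hands, here is a sketch that invokes bq from Python without a shell, so the * is passed through to bq literally. The dataset, table, and bucket names are placeholders, and --apilog=- is placed before the subcommand as suggested above:

# Sketch: invoke the bq CLI without a shell so the * wildcard reaches bq
# unexpanded. Destination table and file pattern are placeholders;
# --apilog=- dumps the server interaction to stdout, as suggested above.
import subprocess

cmd = [
    "bq",
    "--apilog=-",                   # must come before the subcommand
    "load",
    "my_dataset.my_table",          # placeholder destination table
    "gs://my-bucket/exports/*.csv"  # wildcard passed through unexpanded
]

result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
print(result.stderr)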