RDF4J and GraphDB repository connection

I have a problem with RDF4J: I want to delete from my GraphDB repository "Feed" all the triples with feed:hashCode as predicate.
The first query checks whether there is a triple with the url parameter as subject, feed:hashCode as predicate, and the hash parameter as object, and it works. If this statement doesn't exist in my repository, the second query runs; it should delete all the triples with feed:hashCode as predicate and url as subject, but it doesn't work. What is the problem?
Here is the code:
public static boolean updateFeedQuery(String url, String hash) throws RDFParseException, RepositoryException, IOException {
    Boolean result = false;
    Repository repository = new SPARQLRepository("http://localhost:7200/repositories/Feed");
    repository.initialize();
    try {
        try (RepositoryConnection conn = repository.getConnection()) {
            BooleanQuery feedUrlQuery = conn.prepareBooleanQuery(
                // @formatter:off
                "PREFIX : <http://purl.org/rss/1.0/>\n" +
                "PREFIX feed: <http://feed.org/>\n" +
                "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\n" +
                "PREFIX dc: <http://purl.org/dc/elements/1.1/>\n" +
                "ASK{\n" +
                "<" + url + "> feed:hashCode \"" + hash + "\";\n" +
                "rdf:type :channel.\n" +
                "}"
                // @formatter:on
            );
            result = feedUrlQuery.evaluate();
            // the feed is new or updated
            if (!result) {
                Update removeOldHash = conn.prepareUpdate(
                    // @formatter:off
                    "PREFIX feed: <http://feed.org/>\n" +
                    "DELETE WHERE{\n" +
                    "<" + url + "> feed:hashCode ?s.\n" +
                    "}"
                    // @formatter:on
                );
                removeOldHash.execute();
            }
        }
    } finally {
        repository.shutDown();
        return result;
    }
}
The error message is "Missing parameter: query", and the server responds with "400 Bad Request".

The problem is in this line:
Repository repository = new SPARQLRepository("http://localhost:7200/repositories/Feed");
You are using SPARQLRepository to access your RDF4J/GraphDB triplestore, and you're providing it only with a SPARQL query endpoint. According to the documentation, that means it will use that endpoint for both queries and updates. However, RDF4J Server (and therefore GraphDB) has a separate endpoint for SPARQL updates (see the REST API documentation). Your update fails because SPARQLRepository tries to send it to the query endpoint, instead of the update endpoint.
One way to fix this is to explicitly set an update endpoint as well:
Repository repository = new SPARQLRepository("http://localhost:7200/repositories/Feed", "http://localhost:7200/repositories/Feed/statements");
However, SPARQLRepository is really intended as a proxy class for accessing a (non-RDF4J) SPARQL endpoint (e.g. DBPedia, or some endpoint outside your own control or running a different triplestore implementation). Since GraphDB is fully RDF4J-compatible, you should really be using HTTPRepository to access it. HTTPRepository implements the full RDF4J REST API, which extends the basic SPARQL protocol and allows your client-server communication to be much more efficient. See the Programming with RDF4J chapter on the Repository API for more details on how to effectively access a remote RDF4J/GraphDB store.
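For comparison, a minimal sketch of the HTTPRepository approach (the repository URL is taken from the question; the package names assume the Eclipse RDF4J namespace):
import org.eclipse.rdf4j.repository.Repository;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

// HTTPRepository speaks the full RDF4J REST API, so queries and updates
// are automatically routed to the correct endpoints.
Repository repository = new HTTPRepository("http://localhost:7200/repositories/Feed");
repository.initialize();
// ... obtain a connection and run queries/updates exactly as before ...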

Related

R2DBC - Are queries results reactive/streamed?

I am trying the reactive stack with MySQL at the backend.
I was expecting that query results would be a stream, i.e. a heavy query would not wait and return the results once all the records are found, but would stream them one by one. It does not look that way. The process waits until the query is done and then returns all the results.
This is the Spring Data repository that I created:
public interface ScopusSpringDataRepo extends ReactiveCrudRepository<Scopus, Long> {
    @Query("select SC1.* from Scopus as SC1 join Scopus as SC2"
            + " on SC1.norma like concat(SC2.norma, '%') where ( SC2.norma is not null"
            + " and SC2.word like concat('%', :word, '%'))")
    public Flux<Scopus> byNorma(String word);
}
This query is heavy on purpose.
Can someone please explain what behavior is expected from R2DBC in this case?
Thank you.
Update after the comments:
The repository method byNorma is called by a business service that is called by a REST Controller.
Service:
public class ScopusService {
    protected ScopusSpringDataRepo scRepo;

    public Flux<Scopus> getByNorma(String norma) {
        return scRepo.byNorma(norma);
    }
}
REST Controller:
public class ScopusController {
    protected ScopusService scopusService;

    @GetMapping(path = "/stream/scopus/{word}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public ResponseEntity<Flux<Scopus>> getByNorma(@PathVariable String word) {
        log.info("getByNorma received. word: {}", word);
        return ResponseEntity.ok(this.scopusService.getByNorma(word));
    }
}
Which is called from a Chromium browser.
Update:
I am trying to understand whether the R2DBC driver sends the results one by one, or first gets all the results and then sends them into the stream at once.
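One way to observe this empirically (a sketch; the logger and the test word are illustrative, reusing the repository from the question) is to log the arrival time of each row as it is emitted:
scRepo.byNorma("test")
    .doOnSubscribe(s -> log.info("query subscribed"))
    .doOnNext(row -> log.info("row received: {}", row))   // logged per element as it arrives
    .doOnComplete(() -> log.info("query complete"))
    .subscribe();
If rows are streamed by the driver, the "row received" log lines should appear incrementally before "query complete"; if everything is buffered, they will all appear in a burst at the end.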

Can't upload files in spring boot

I've been struggling with this for the past 3 days now. I keep getting the following exception when I try to upload a file in my Spring Boot project.
org.springframework.web.multipart.support.MissingServletRequestPartException: Required request part 'file' is not present
I'm not sure if it makes a difference, but I am deploying my application as a WAR onto WebLogic.
Here is my controller:
@PostMapping
public AttachmentDto createAttachment(@RequestParam(value = "file") MultipartFile file) {
    logger.info("createAttachment - {}", file.getOriginalFilename());
    AttachmentDto attachmentDto = null;
    try {
        attachmentDto = attachmentService.createAttachment(new AttachmentDto(file, 1088708753L));
    } catch (IOException e) {
        e.printStackTrace();
    }
    return attachmentDto;
}
(Screenshot: the multipart beans visible in Spring Boot Actuator.)
(Screenshot: the request payload seen in Chrome.)
A name attribute is required for @RequestParam 'file':
<input type="file" class="file" name="file"/>
You can try using @RequestPart, because it uses an HttpMessageConverter, which takes the 'Content-Type' header of the request part into consideration.
Note that the @RequestParam annotation can also be used to associate the part of a "multipart/form-data" request with a method argument supporting the same method argument types. The main difference is that when the method argument is not a String, @RequestParam relies on type conversion via a registered Converter or PropertyEditor while @RequestPart relies on HttpMessageConverters taking into consideration the 'Content-Type' header of the request part. @RequestParam is likely to be used with name-value form fields while @RequestPart is likely to be used with parts containing more complex content (e.g. JSON, XML).
Spring Documentation
Code:
@PostMapping(consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public AttachmentDto createAttachment(@RequestPart("file") MultipartFile file) {
    logger.info("Attachment - {}", file.getOriginalFilename());
    try {
        return attachmentService.createAttachment(new AttachmentDto(file, 1088708753L));
    } catch (final IOException e) {
        logger.error("Error creating attachment", e);
    }
    return null;
}
You are using multipart to send files, so there is not much configuration needed to get the desired result.
I had the same requirement and my code runs fine:
@RestController
@RequestMapping("/api/v2")
public class DocumentController {

    private static String bucketName = "pharmerz-chat";
    // private static String keyName = "Pharmerz" + UUID.randomUUID();

    @RequestMapping(value = "/upload", method = RequestMethod.POST, consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
    public String uploadFileHandler(@RequestParam("name") String name,
            @RequestParam("file") MultipartFile file) throws IOException {

        /******* Printing all the possible parameters from @RequestParam *************/
        System.out.println("*****************************");
        System.out.println("file.getOriginalFilename() " + file.getOriginalFilename());
        System.out.println("file.getContentType() " + file.getContentType());
        System.out.println("file.getInputStream() " + file.getInputStream());
        System.out.println("file.toString() " + file.toString());
        System.out.println("file.getSize() " + file.getSize());
        System.out.println("name " + name);
        System.out.println("file.getBytes() " + file.getBytes());
        System.out.println("file.hashCode() " + file.hashCode());
        System.out.println("file.getClass() " + file.getClass());
        System.out.println("file.isEmpty() " + file.isEmpty());

        /**
         * BUSINESS LOGIC
         * Write code to upload the file where you want
         */
        return "File uploaded";
    }
}
None of the above solutions worked for me, but when I dug deeper I found that Spring Security was the main culprit. Even though I was sending the CSRF token, I repeatedly faced the issue "POST not supported". I discovered I was receiving 403 Forbidden when I inspected the request with the developer tools in Google Chrome and looked at the status code in the network tab. I added the mapping to the ignored CSRF mappings in my Spring Security configuration, and then it worked without any other flaw. I don't know why security would not let me post multipart data. Some mandatory settings that need to be stated in the application.properties file are as follows:
spring.servlet.multipart.max-file-size=10MB
spring.servlet.multipart.max-request-size=10MB
spring.http.multipart.max-file-size=10MB
spring.http.multipart.max-request-size=10MB
spring.http.multipart.enabled=true
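For reference, a minimal sketch of the CSRF exclusion mentioned above, assuming a classic WebSecurityConfigurerAdapter-style configuration and an upload endpoint at /upload (both are assumptions; adjust to your application):
@Override
protected void configure(HttpSecurity http) throws Exception {
    // Assumption: /upload is the multipart endpoint that should bypass CSRF checks.
    http.csrf().ignoringAntMatchers("/upload");
}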

How to modify variables in an atomic way using REST API

Consider a process instance variable which currently has some value. I would like to update its value, for instance increment it by one, using the REST API of Activiti/Camunda. How would you do this?
The problem is that the REST API has services for setting variable values and for getting them, but combining those calls can easily lead to race conditions.
Also consider that my example uses integers, while a variable could be a complex JSON object or array!
This answer is for Camunda 7.3.0:
There is no out-of-the-box solution. You can do the following:
Extend the REST API with a custom resource that implements an endpoint for variable modification. Since the Camunda REST API uses JAX-RS, it is possible to add the Camunda REST resources to a custom JAX-RS application. See [1] for details.
In the custom resource endpoint, implement the read-modify-write cycle in one transaction using a custom command:
protected void readModifyWriteVariable(CommandExecutor commandExecutor, final String processInstanceId,
        final String variableName, final int valueToAdd) {
    try {
        commandExecutor.execute(new Command<Void>() {
            public Void execute(CommandContext commandContext) {
                Integer myCounter = (Integer) runtimeService().getVariable(processInstanceId, variableName);
                // do something with the variable
                myCounter += valueToAdd;
                // the update provokes an OptimisticLockingException when the command
                // ends, if the variable was updated in the meantime
                runtimeService().setVariable(processInstanceId, variableName, myCounter);
                return null;
            }
        });
    } catch (OptimisticLockingException e) {
        // try again
        readModifyWriteVariable(commandExecutor, processInstanceId, variableName, valueToAdd);
    }
}
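A hypothetical JAX-RS resource wiring this command into a custom endpoint could look like the following sketch (the path, parameter names, and the way the CommandExecutor is obtained are assumptions, not part of the Camunda REST API):
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;
import org.camunda.bpm.engine.ProcessEngine;
import org.camunda.bpm.engine.ProcessEngines;
import org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl;
import org.camunda.bpm.engine.impl.interceptor.CommandExecutor;

@Path("/process-instance/{id}/variables/{name}/increment")
public class VariableIncrementResource {

    @POST
    public void increment(@PathParam("id") String processInstanceId,
            @PathParam("name") String variableName,
            @QueryParam("by") int valueToAdd) {
        // Assumption: the default process engine is used; the command executor
        // is taken from the engine configuration.
        ProcessEngine engine = ProcessEngines.getDefaultProcessEngine();
        CommandExecutor commandExecutor = ((ProcessEngineConfigurationImpl) engine
                .getProcessEngineConfiguration()).getCommandExecutorTxRequired();
        readModifyWriteVariable(commandExecutor, processInstanceId, variableName, valueToAdd);
    }
}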
See [2] for a detailed discussion.
[1] http://docs.camunda.org/manual/7.3/api-references/rest/#overview-embedding-the-api
[2] https://groups.google.com/d/msg/camunda-bpm-users/3STL8s9O2aI/Dcx6KtKNBgAJ

How do I query a SPARQL endpoint such as DBPedia with Sesame?

I use the Sesame triplestore to store my data. When I try to use Sesame's query interface with external resources such as DBPedia, I get no results. This query returns results with snorql but not in Sesame, even after adding all the necessary prefixes:
select ?routes where {
dbpedia:Polio_vaccine dbpprop:routesOfAdministration ?routes
}
What do I need to change?
You can query any SPARQL endpoint, including DBPedia, in various ways using Sesame, either programmatically or manually via the Sesame Workbench.
Using the Workbench
Using the Sesame Workbench tool, you can query DBPedia (or any public SPARQL endpoint) by creating a repository proxy for that endpoint, as follows:
1. Select 'New repository' and in the repository type menu select 'SPARQL endpoint proxy'. Give the proxy an identifier and optionally a title, and click 'next'.
2. Fill in the SPARQL endpoint URL for the query endpoint. For the public DBPedia server, this should be http://dbpedia.org/sparql.
3. Finalize by clicking 'create'.
Once you've set this up, you can query it from the 'Query' menu.
Programmatic access
You can simply create a SPARQLRepository object that connects to the DBPedia endpoint:
Repository repo = new SPARQLRepository("http://dbpedia.org/sparql");
repo.initialize();
Once you have that, you can use it to execute a SPARQL query just like you would on any other Sesame repository:
RepositoryConnection conn = repo.getConnection();
try {
    StringBuilder qb = new StringBuilder();
    qb.append("PREFIX dbpedia: <http://dbpedia.org/resource/> \n");
    qb.append("PREFIX dbpprop: <http://dbpedia.org/property/> \n");
    qb.append("SELECT ?routes \n");
    qb.append("WHERE { dbpedia:Polio_vaccine dbpprop:routesOfAdministration ?routes } \n");

    TupleQueryResult result =
            conn.prepareTupleQuery(QueryLanguage.SPARQL, qb.toString()).evaluate();
    while (result.hasNext()) {
        BindingSet bs = result.next();
        Value route = bs.getValue("routes");
        System.out.println("route = " + route.stringValue());
    }
} finally {
    conn.close();
}

Apache camel nested routes

I am new to Apache Camel. I have a very common use case that I am struggling to configure a Camel route for. The use case is to take an execution context and:
1. Update the database using the execution context.
2. Then, using an event on the execution context, create a byte message and send it over MQ.
3. Then, in the next step, again use the execution context and perform event processing.
4. Update the database using the execution context.
So basically it is a kind of nested route. In the configuration below, I need access to the executionContext that executionController has created in updateSchedulerState, sendNotification, processEvent and updateSchedulerState, i.e. the steps annotated as (1), (2), (3) and (4) respectively.
from("direct:processMessage")
.routeId("MessageExecutionRoute")
.beanRef("executionController", "getEvent", true)
.beanRef("executionController", "updateSchedulerState", true) (1)
.beanRef("executionController", "sendNotification", true) (2)
.beanRef("messageTransformer", "transform", true)
.to("wmq:NOTIFICATION")
.beanRef("executionController", "processEvent", true) (3)
.beanRef("eventProcessor", "process", true)
.beanRef("messageTransformer", "transform", true)
.to("wmq:EVENT")
.beanRef("executionController", "updateSchedulerState", true); (4)
Kindly let me know how I should configure the route for the above use case.
Thanks,
Vaibhav
So you need to access this executionContext in your beans at various points in the route?
If I understand correctly, you can put this executionContext in an exchange Property, and it will persist throughout the route.
Setting the exchange property can be done via the Exchange.setProperty() method or via various Camel DSL functions, like this:
from("direct:xyz)
.setProperty("awesome", constant("YES"))
//...
You can access exchange properties from a bean by adding a method argument of type Exchange, like this:
public class MyBean {
    public void foo(Something something, Exchange exchange) {
        if ("YES".equals(exchange.getProperty("awesome"))) {
            // ...
        }
    }
}
Or via @Property like this:
public class MyBean {
    public void foo(Something something, @Property("awesome") String awesome) {
        if ("YES".equals(awesome)) {
            // ...
        }
    }
}
This presumes you are using a reasonably recent version of Camel.
Does this help?
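Applied to the route from the question, this could look something like the following sketch (the property name "executionContext" and the use of the message body as the context are assumptions; adjust to where your controller actually exposes it):
from("direct:processMessage")
    .routeId("MessageExecutionRoute")
    .beanRef("executionController", "getEvent", true)
    // Assumption: getEvent leaves the execution context in the body; stash it
    // in an exchange property so the later beans can read it via @Property
    // or an Exchange argument.
    .setProperty("executionContext", simple("${body}"))
    .beanRef("executionController", "updateSchedulerState", true)
    // ... remaining steps as in the question ...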