DMSDK QueryBatcher: how to check whether a list of URIs exists in MarkLogic - marklogic-9

I'm using MarkLogic 9.0-8 on Windows 10. From Java code, I want to check whether a list of URIs exists in MarkLogic; the documents may be XML or binary.
I'm trying to use the Java Client API with DMSDK and QueryBatcher, but I'm not sure what to do in the onUrisReady method.
I was thinking of registering an ExportListener, but I don't want to return the whole document; I only want to know whether the document exists in MarkLogic or not.
Can anyone give me a suggestion for how to check whether a URI exists in MarkLogic without pulling all the documents out to the client?
Thanks, Helen

Take a look at the examples in the QueryBatcher documentation; they might be useful.
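A minimal sketch of one approach (the host, port, and credentials are assumptions; adjust to your environment). The idea is to drive the QueryBatcher with a document query over the candidate URIs, so onUrisReady only ever receives URIs that actually exist and no document content is pulled to the client:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import com.marklogic.client.DatabaseClient;
import com.marklogic.client.DatabaseClientFactory;
import com.marklogic.client.datamovement.DataMovementManager;
import com.marklogic.client.datamovement.QueryBatcher;
import com.marklogic.client.query.StructuredQueryBuilder;

public class UriExistenceCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details -- adjust to your environment.
        DatabaseClient client = DatabaseClientFactory.newClient(
            "localhost", 8000,
            new DatabaseClientFactory.DigestAuthContext("admin", "admin"));

        List<String> candidates = Arrays.asList("/docs/a.xml", "/docs/b.bin", "/docs/missing.xml");
        List<String> existing = new ArrayList<>();

        DataMovementManager dmm = client.newDataMovementManager();
        StructuredQueryBuilder sqb = new StructuredQueryBuilder();

        // A document query matches only documents that actually exist,
        // so every URI handed to onUrisReady is a confirmed hit.
        QueryBatcher batcher = dmm.newQueryBatcher(
                sqb.document(candidates.toArray(new String[0])))
            .withBatchSize(1000)
            .onUrisReady(batch -> {
                synchronized (existing) {   // listeners run on multiple threads
                    existing.addAll(Arrays.asList(batch.getItems()));
                }
            })
            .onQueryFailure(failure -> failure.printStackTrace());

        dmm.startJob(batcher);
        batcher.awaitCompletion();
        dmm.stopJob(batcher);

        // Anything not reported back does not exist.
        candidates.forEach(uri ->
            System.out.println(uri + (existing.contains(uri) ? " exists" : " is missing")));
        client.release();
    }
}
```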

Related

Python / rdflib HTTP server for sparql endpoint

Is there a capability for, or an example of, creating a SPARQL HTTP endpoint with rdflib? We would want it to follow the spec and be able to return JSON and/or CSV formats. This would mostly be for POC usage. Using JavaScript/Node would also be possible.
Thanks!
You might try https://github.com/rdflib/pyLDAPI. It's been touched much more recently than https://github.com/RDFLib/rdflib-web and there are some public examples of it to follow, e.g. https://geofabricld.net. Also, the SKOS-specific tool VocPrez uses it under the hood.
As of earlier this year, pyLDAPI implements the W3C's Content Negotiation by Profile specification, which is, I suppose, the latest and greatest Linked Data API-relevant specification, although it's not just for Linked Data APIs.
Feel free to contact me directly if you need more of a hand with this.

Informatica using URI based REST API

I'm having real trouble getting Informatica PowerCenter or Developer to call a URI-based REST API, and I'm doing it for something simple (JIRA's API). Basically, I want to call JIRA's worklog REST API, which is a different URL per issue, for a list of issue IDs, and write the results to our DB.
https://docs.atlassian.com/jira/REST/6.2/
/rest/api/2/issue/{issueIdOrKey}/worklog
Informatica PowerCenter supports only the HTTP transformation, which does a simple GET. Unfortunately, the latest version is still stuck on the 'old' query-style URL building, where inputs are appended to the URL as search strings. E.g. if I have a "key" input field with value "ABC-1" and the URL is jira/rest/api/2/search, it actually builds the URL on the fly into jira/rest/api/2/search?key=ABC-1. While some of JIRA's API works this way, some endpoints use the URI style, e.g. jira/rest/api/2/ABC-1/worklog, which requires embedding the value into the URI. There's no way I can get this to work:
If I use jira/rest/api/$key/worklog, it still converts the URI into jira/rest/api/$key/worklog/?key=ABC-1, so $key does not get replaced.
Even if I pre-build the URI outside the mapping, it's not feasible: the URI needs to be dynamic over the list of JIRA keys, and in any case the appended ? makes JIRA throw an error (? is reserved for this API).
The HTTP transformation does not support NTLMv2 authentication, which our company's JIRA instance may upgrade to shortly.
The last resort is to use a Java transformation, in which case Informatica adds quite little value. It also means I need to somehow pass in the JIRA user password for authentication, which is a separate challenge (versus just storing it as an HTTP connection).
Informatica Developer supports the REST Web Consumer transformation, but it has a similar limitation of only building query-style URLs. Even worse, I can't even dynamically build the URL, since it's fixed to the HTTP connection object's URL.
Am I straight outta luck?
I can only cover the main points here, and that might not explain things properly, so here is a link to a blog post where the task of reading a REST API from Informatica is covered in detail, step by step, with a video tutorial and some examples. Feel free to visit:
https://zappysys.com/blog/read-json-informatica-import-rest-api-json-file/
Hope it will help.
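On the Java transformation fallback mentioned in the question, here is a minimal sketch of what the call itself might look like (host, credentials, and the issue key are hypothetical; the endpoint path follows the JIRA docs linked above). Embedding the key in the path avoids the appended query string entirely:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Base64;

public class WorklogFetcher {
    public static void main(String[] args) throws Exception {
        String key = "ABC-1"; // would come from the input row in a Java transformation
        // Embed the key directly in the path -- no ?key=... is appended.
        URL url = new URL("https://jira.example.com/rest/api/2/issue/" + key + "/worklog");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Basic auth shown here; NTLMv2 would need a library such as Apache HttpClient.
        String auth = Base64.getEncoder().encodeToString("user:password".getBytes("UTF-8"));
        conn.setRequestProperty("Authorization", "Basic " + auth);
        conn.setRequestProperty("Accept", "application/json");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            // In a real Java transformation this JSON would be parsed
            // and written to the transformation's output ports.
            System.out.println(body);
        }
    }
}
```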

How to find the source location of a dynamic token in JMeter?

I've been using the Fiddler tool to capture the HTTP request-responses, then manually finding the source location of a dynamic token (in a recorded page). I'd then use a Regular Expression Extractor on that source page to extract and store the value of the dynamic token in a variable, and use that variable in later pages.
Just wondering if there's an easier way for this. Is there any tool in JMeter that can help us find the source location of a dynamic token?
Thank you,
--Ishti
As of May 2015, there's nothing available OOTB except saving requests/responses to a file with the View Results Tree listener and searching the resulting file, or searching each response in the View Results Tree GUI.
An option would be to write a BackendListenerClient implementation that writes the data to a JDBC or Elasticsearch instance and then searches it through SQL or Elasticsearch queries.
A contribution would be welcome.
It is possible that this will be implemented in a future release.
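A rough sketch of such a BackendListenerClient (the class itself is hypothetical and the storage call is left as a comment; the JMeter API names are real):

```java
import java.util.List;

import org.apache.jmeter.samplers.SampleResult;
import org.apache.jmeter.visualizers.backend.AbstractBackendListenerClient;
import org.apache.jmeter.visualizers.backend.BackendListenerContext;

// Hypothetical listener that records every response body so a dynamic
// token's source location can be searched for after the test run.
public class TokenSourceListener extends AbstractBackendListenerClient {

    @Override
    public void handleSampleResults(List<SampleResult> results, BackendListenerContext context) {
        for (SampleResult result : results) {
            String label = result.getSampleLabel();
            String body = result.getResponseDataAsString();
            // Placeholder: write (label, body) to JDBC or Elasticsearch here.
            // Searching that index for the token value then tells you which
            // sampler's response first produced it.
            System.out.printf("%s -> %d chars%n", label, body.length());
        }
    }
}
```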

Backend database used in the API

By going through this API documentation page, is it possible to tell which database is being used in the backend?
Zomato API
MySQL would require a PHP file on the server to handle the requests, make queries, pack the data in JSON format, and then send it back to the device. But in this case parameters are passed to .json files. Please advise.
There is no way to "see through" to whatever the backend service actually uses to provide the information you query for. Are you sure you want to continue using this product? The site notes that the Zomato API will no longer be available to individuals, and that your API key will be disabled if you don't use it monthly.
I haven't read the specs for that particular API. But in general, is it possible to tell what database is being used on the back end by studying an API? No. That's the whole point of an API: It's supposed to shield the API-user from implementation details.
It's probably true that in many cases you could make reasonable guesses about what tools are being used on the back end. Like if you see that the API gives you a syntax for doing comparisons that looks exactly like the proprietary compare function used in Foobar SQL and not found in any other database product, that would be a strong clue. But even something like that wouldn't be proof. Maybe originally they were using Foobar SQL, then they switched to another database, but to maintain compatibility they wrote code to translate the Foobar SQL compare to standard SQL syntax.

Semantic store and entity hub

I am working on a content platform that should provide semantic features such as querying with SPARQL and providing RDF documents for the contained content.
I would be very thankful for some clarification on the following questions:
Did I get that right, that an entity hub can connect several semantic stores to a single point of access? And if not, what is the difference between a semantic store and an entity hub?
What frameworks would you use to store content documents as well as their semantic annotation?
It is important for the solution to be able to later on retrieve the document (html page / docs such as pdf, doc,...) and their annotated version.
Thanks in advance,
Chris
The only Entityhub term I know of belongs to the Apache Stanbol project. Here is a paragraph from the original documentation explaining what the Entityhub does:
The Entityhub provides two main services. The Entityhub provides the connection to external linked open data sites as well as using indexes of them locally. Its services allow to manage a network of sites to consume entity information and to manage entities locally.
Entityhub documentation:
http://incubator.apache.org/stanbol/docs/trunk/entityhub.html
The Enhancer component of Apache Stanbol extracts external entities related to the submitted content, using the linked open data sites managed by the Entityhub. These content enhancements are formed as RDF data. It is then also possible to store those content items in Apache Stanbol and run SPARQL queries on top of the RDF enhancements. The Contenthub component of Apache Stanbol also provides faceted search functionality over the submitted content items.
Documentation of Apache Stanbol:
http://incubator.apache.org/stanbol/docs/trunk/
Access to running demos:
http://dev.iks-project.eu/
You can also ask your further questions to stanbol-dev AT incubator.apache.org.
Alternative suggestion...
Drupal 7 has in-built RDFa support for annotation and is more of a general-purpose CMS than Semantic MediaWiki.
In more detail...
I'm not really sure what you mean by entity hub; where are you getting that definition from, or what do you mean by it?
Yes, one can easily write a system that connects to multiple semantic stores; given the context of your question, I assume you are referring to RDF triple stores?
Any decent CMS should be assigning some form of unique/persistent ID to documents, so even if the system you go with does not support semantic annotation natively, you could build your own extension for this. The extension would simply store annotations against the document's ID in whatever storage layer you choose (I'd assume a triple store would be appropriate), and then you can build appropriate query and presentation layers for querying and viewing this data as required (see the sketch after the links below).
http://semantic-mediawiki.org/wiki/Semantic_MediaWiki
Apache Stanbol
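As a minimal sketch of what such an extension's storage layer might look like (assuming Apache Jena as the triple store; the namespace, IDs, and class names here are all hypothetical):

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class AnnotationStore {
    private static final String NS = "http://example.org/annotation#"; // hypothetical namespace
    private final Model model = ModelFactory.createDefaultModel();

    // Record an annotation against the CMS document's persistent ID.
    public void annotate(String documentId, String predicateName, String value) {
        Resource doc = model.createResource("urn:cms:doc:" + documentId);
        Property predicate = model.createProperty(NS, predicateName);
        doc.addProperty(predicate, value);
    }

    public static void main(String[] args) {
        AnnotationStore store = new AnnotationStore();
        store.annotate("1234", "topic", "Semantic Web");
        // Dump the triples; a real extension would persist to TDB or a remote store.
        store.model.write(System.out, "TURTLE");
    }
}
```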
Do you want to implement a traditional CMS extended with some semantic capabilities, or do you want to build a Semantic CMS? It could look the same, but those are actually two completely opposite approaches.
It is important for the solution to be able to later on retrieve the document (html page / docs such as pdf, doc,...) and their annotated version.
You can integrate Apache Stanbol with a JCR/CMIS-compliant CMS like Alfresco. To get custom annotations, I suggest creating your own custom enhancement engine (there is a Maven archetype) based on your domain and adding it to the enhancement engine chain.
https://stanbol.apache.org/docs/trunk/components/enhancer/
Once this is done, you can use the REST API endpoints provided by Stanbol to retrieve the results in RDF/Turtle format.
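A rough sketch of calling the enhancer endpoint over plain HTTP (the host and port are hypothetical local defaults; the request shape follows the Stanbol enhancer docs linked above, with the Accept header selecting the RDF serialization):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class StanbolEnhancerCall {
    public static void main(String[] args) throws Exception {
        // Hypothetical local Stanbol instance; adjust host/port to your deployment.
        URL url = new URL("http://localhost:8080/enhancer");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/plain");
        conn.setRequestProperty("Accept", "text/turtle"); // ask for the enhancements as Turtle

        try (OutputStream out = conn.getOutputStream()) {
            out.write("Paris is the capital of France.".getBytes("UTF-8"));
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // RDF/Turtle enhancement results
            }
        }
    }
}
```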