What is the relationship between Sesame & Alibaba? - semantics

I am a beginner in this area and I am having a hard time understanding it.
What are AliBaba and Sesame?
Of the two, which one does the query optimization and which one handles creating repositories?
Any kind of input will be fine. Thanks.

"AliBaba is a RESTful subject-oriented client/server library for distributed persistence of files and data using RDF metadata. AliBaba is the beta version of the next generation of the Elmo codebase. It is a collection of modules that provide simplified RDF store abstractions to accelerate development and facilitate application maintenance."
http://www.openrdf.org/alibaba.jsp
"Sesame is a de-facto standard framework for processing RDF data. This includes parsing, storing, inferencing and querying of/over such data. It offers an easy-to-use API that can be connected to all leading RDF storage solutions."
http://www.openrdf.org/about.jsp
I imagine the query engine, query optimization and storage are part of Sesame, not AliBaba. AliBaba is application code that sits on top of Sesame.
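To make the split concrete, here is a minimal sketch against the Sesame 2.x API (the example data and query are made up): the store, the connection, and the SPARQL parsing and evaluation below are all Sesame. AliBaba would layer object-to-RDF mapping and REST abstractions on top of the same Repository.

    import org.openrdf.model.URI;
    import org.openrdf.model.ValueFactory;
    import org.openrdf.query.BindingSet;
    import org.openrdf.query.QueryLanguage;
    import org.openrdf.query.TupleQueryResult;
    import org.openrdf.repository.Repository;
    import org.openrdf.repository.RepositoryConnection;
    import org.openrdf.repository.sail.SailRepository;
    import org.openrdf.sail.memory.MemoryStore;

    public class SesameDemo {
        public static void main(String[] args) throws Exception {
            // Creating a repository and storing triples: Sesame's job.
            Repository repo = new SailRepository(new MemoryStore());
            repo.initialize();

            RepositoryConnection con = repo.getConnection();
            try {
                ValueFactory vf = repo.getValueFactory();
                URI book = vf.createURI("http://example.org/book/1");
                URI title = vf.createURI("http://purl.org/dc/elements/1.1/title");
                con.add(book, title, vf.createLiteral("Sesame basics"));

                // Parsing, optimizing and evaluating the query: also Sesame's job.
                TupleQueryResult result = con.prepareTupleQuery(
                        QueryLanguage.SPARQL,
                        "SELECT ?s ?t WHERE { ?s <http://purl.org/dc/elements/1.1/title> ?t }"
                ).evaluate();
                while (result.hasNext()) {
                    BindingSet row = result.next();
                    System.out.println(row.getValue("s") + " -> " + row.getValue("t"));
                }
                result.close();
            } finally {
                con.close();
                repo.shutDown();
            }
        }
    }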
There are also alternatives in Java, such as Apache Jena:
http://incubator.apache.org/jena/
Guess what I use? ;-)

Related

How to test multi-region write in Cosmos DB

I am going to test multi-region write functionality by writing some test code using the Cosmos C# v3 SDK.
I plan to have a multi-region-write-enabled Cosmos DB (SQL Core API) account with three regions. I want to write to one specific region and then read from the other regions. While doing this, I want to measure performance as well.
Is there any way of implementing this type of test? Is there a good way of measuring performance, such as performance metrics? I also want to vary the consistency level and see how latency changes.
Depending on what type of tests you are looking to do, the benchmarks in the Cosmos DB Global Distribution Demos GitHub repo may be of some help. There's a bit of a learning curve, as the benchmarks are data-driven from app.config files, but once you get the URIs and keys into the app.config you should be mostly good to go.
One thing worth pointing out is that changing the consistency level when testing multiple writers and readers in different regions, with multi-region writes configured, is meaningless because you will always have eventual consistency under those circumstances. For more information see Guarantees associated with consistency levels.
The other thing to call out is that you cannot configure multi-region writes with strong consistency. For more information see Strong consistency and multiple write regions.
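If a starting point helps, below is a rough sketch of that kind of test. The question is about the C# v3 SDK; this sketch uses the equivalent Java v4 SDK purely to illustrate the moving parts (the account endpoint, key, database/container names and regions are placeholders), and the .NET SDK exposes the same knobs through CosmosClientOptions. The idea is one client pinned to the write region and another pinned to a read region, with wall-clock timing around each call.

    import com.azure.cosmos.ConsistencyLevel;
    import com.azure.cosmos.CosmosClient;
    import com.azure.cosmos.CosmosClientBuilder;
    import com.azure.cosmos.CosmosContainer;
    import com.azure.cosmos.models.CosmosItemResponse;
    import com.azure.cosmos.models.PartitionKey;

    import java.util.Collections;
    import java.util.UUID;

    public class MultiRegionWriteTest {

        // Simple test item; the container is assumed to be partitioned on /pk.
        public static class Doc {
            public String id;
            public String pk;
            public String payload;
        }

        static CosmosContainer container(String preferredRegion) {
            CosmosClient client = new CosmosClientBuilder()
                    .endpoint("https://<your-account>.documents.azure.com:443/") // placeholder
                    .key("<your-key>")                                           // placeholder
                    .multipleWriteRegionsEnabled(true)
                    .preferredRegions(Collections.singletonList(preferredRegion))
                    .consistencyLevel(ConsistencyLevel.SESSION) // vary this per test run
                    .buildClient();
            return client.getDatabase("testdb").getContainer("testcontainer");   // placeholders
        }

        public static void main(String[] args) throws Exception {
            CosmosContainer writeSide = container("West US 2");    // region to write to
            CosmosContainer readSide  = container("North Europe"); // region to read from

            Doc doc = new Doc();
            doc.id = UUID.randomUUID().toString();
            doc.pk = "latency-test";
            doc.payload = "hello";

            long t0 = System.nanoTime();
            CosmosItemResponse<Doc> write = writeSide.upsertItem(doc);
            long writeMs = (System.nanoTime() - t0) / 1_000_000;
            System.out.println("write: " + writeMs + " ms, diagnostics: " + write.getDiagnostics());

            Thread.sleep(1000); // crude allowance for replication before the cross-region read

            long t1 = System.nanoTime();
            CosmosItemResponse<Doc> read =
                    readSide.readItem(doc.id, new PartitionKey(doc.pk), Doc.class);
            long readMs = (System.nanoTime() - t1) / 1_000_000;
            System.out.println("read: " + readMs + " ms, diagnostics: " + read.getDiagnostics());
        }
    }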

CNTK: Python vs C# API for model consumption

We have trained a model using CNTK. We are building a service that is going to load this model and respond to requests to classify sentences. Which API is best to use in terms of performance? We would prefer to build a C# service, as in https://github.com/Microsoft/CNTK/tree/master/Examples/Evaluation/CSEvalClient, but alternatively we are considering building a Python service that loads the model in Python.
Do you have any recommendations for one approach or the other (regarding which API is faster, more actively maintained, or other factors you can think of)? The next step would be to set up an experiment measuring the performance of both API calls, but I was wondering if there is some prior knowledge here that could help us decide.
Thank you
Both APIs are well developed/maintained. For text data I would go with the C# API.
In C# the main focus is fast and easy evaluation, and for text, loading the data is straightforward.
The Python API is good for development/training of models, and at this time not much attention has been paid to evaluation. Furthermore, because of the wealth of packages, loading data in exotic formats is easier in Python than in C#.
The new C# Eval API based on CNTKLibrary will be available very soon (the first beta is probably next week). This API has functional parity with the C++ and Python APIs regarding evaluation.
This API supports using multiple threads to serve multiple evaluation requests in parallel, and even better, the model parameters of the same loaded model are shared between these threads, which will significantly reduce memory usage in a service environment.
We also have a tutorial about how to use the Eval API in an ASP.NET environment. It still refers to EvalDLL evaluation, but it applies to the new C# API too. The document will be updated after the new C# API is released.

Is Neo4j's Cypher query language open-source?

What is the status of Neo4j's query language, Cypher? I really like it, but I would like to avoid Neo4j lock-in. Are there other Cypher implementations, the way there are for Gremlin?
Regards
Cypher is totally OSS, see https://github.com/neo4j/community/tree/master/cypher . Right now there is one implementation, but potentially there can be more. It's just too early in its evolution to make it a standard; we are still heavily experimenting with it.
Check out Pixy, a declarative graph query language that works on any Blueprints-compatible graph database. It is built on Gremlin/Pipes from the Tinkerpop software stack.
Pixy enables complex pattern matching and logic programming on graph databases by translating PROLOG-style rules and goals to Gremlin pipelines that represent graph traversal operations. It has some additional advantages over Cypher, other than avoiding vendor lock-in.
Pixy is available under the Apache 2.0 license.
openCypher has been implemented by many databases. According to their site these are some of them:
Agens Graph: A multi-model database
Amazon Neptune
AnzoGraph: A native massively parallel (MPP) graph analytical database
ArcadeDB
CAPS: Cypher for Apache Spark
Cypher for Gremlin
Katana Graph
Memgraph: An in-memory, transactional graph database
Neo4j: A native, transactional property graph database
RedisGraph: A graph module for Redis
SAP HANA Graph

Graph Database: TinkerPop/Blueprints vs W3C Linked data

Looking for an infrastructure for network analysis of heterogeneous networks (multiple node types (multi-mode), multiple edge types (multi-relation) and multiple descriptive features (multi-featured)), I've noticed that there are two standard stacks in the graph database world:
On one hand we have the TinkerPop/Blueprints property graph model. It is supported by Neo4j, OrientDB GraphDB, Dex, Titan, InfiniteGraph, etc.
The TinkerPop stack includes the Blueprints property graph model interface, the Gremlin graph traversal language, and the Furnace graph algorithms package.
On the other hand we have W3C's Linked Data technology stack, which is supported by AllegroGraph, 4store, Oracle Database Semantic Technologies, OWLIM, SYSTap BigData, etc.
Semantic data is represented using RDF/RDFS/OWL and can be queried using SPARQL. On top of that, it offers rules and reasoning capabilities.
Now, suppose that I want to represent heterogeneous data in a graph database and analyse such data (statistics, relation discovery, structure, evolution, etc.; I know these terms are wide and vague). What are the relative strengths of each model for various types of network analysis tasks? Do these two models complement each other?
A couple of things: your exemplars of linked data stacks are all triple stores. You would start building a linked data application by first getting your triple store set up, but calling a database a linked data stack is incorrect, IMO. That's also an incomplete list of triple stores; there are also Sesame, Jena, Mulgara, and Stardog. Sesame and Jena kind of pull double duty: they're the two de-facto standard Java APIs for the semantic web, but both provide triple stores that come bundled with the APIs. I also know that both Cray and IBM are working on triple stores, but I don't know much about either at this point. I do know that Stardog works well with the TinkerPop stack and that it's basically drop it in and start writing Gremlin queries against the RDF.
I think the strengths of RDF/OWL are that you 1) get a real query language, 2) they're W3C standards, and 3) you get reasoning, if the triple store supports it, for free (more or less -- you still have to write an ontology).
With RDF/OWL/SPARQL being standards, it is quite easy to pick up and move to a new triple store with a different feature set should you need to: your data is already in a common format that everyone understands, and any application logic encoded as queries is completely portable. In most cases you'd be writing against either the Sesame or Jena APIs, or working over the SPARQL protocol, so you might need to change only your config/init. I think that's a big win in the early prototyping phases.
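As a small, hedged illustration of that portability (the server URL and repository ID below are placeholders), with Sesame the application logic is written against the Repository interface, so moving from an embedded store to a remote store typically only changes how the repository is constructed:

    import org.openrdf.query.QueryLanguage;
    import org.openrdf.query.TupleQueryResult;
    import org.openrdf.repository.Repository;
    import org.openrdf.repository.RepositoryConnection;
    import org.openrdf.repository.http.HTTPRepository;
    import org.openrdf.repository.sail.SailRepository;
    import org.openrdf.sail.memory.MemoryStore;

    public class PortableQuery {

        // All application logic targets the Repository interface, not a vendor API.
        static void dumpSample(Repository repo) throws Exception {
            RepositoryConnection con = repo.getConnection();
            try {
                TupleQueryResult r = con.prepareTupleQuery(QueryLanguage.SPARQL,
                        "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10").evaluate();
                while (r.hasNext()) {
                    System.out.println(r.next());
                }
                r.close();
            } finally {
                con.close();
            }
        }

        public static void main(String[] args) throws Exception {
            // Option 1: an embedded in-memory store for prototyping.
            Repository local = new SailRepository(new MemoryStore());
            local.initialize();

            // Option 2: a remote Sesame-compatible server (placeholder URL and repository ID).
            Repository remote = new HTTPRepository("http://localhost:8080/openrdf-sesame", "myRepo");
            remote.initialize();

            dumpSample(local);   // same query logic,
            dumpSample(remote);  // different backend
        }
    }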
I also think that RDF/OWL, especially combined with reasoning and the kinds of complex SPARQL queries you can create with the new SPARQL 1.1, lends itself well to building complicated analytic applications. Also, I think the impression most people have that RDF triple stores don't scale is no longer correct. Most triple stores at this point easily scale into the billions of triples and have very competitive throughput numbers as well.
So based on what I think you might be doing, I think semweb might be a better bet for you. I did a similar project a few years back using RDF & RDFS for the backend, fronted by a simple Pylons-based webapp, and was very happy with the results.

Representing a DAG (directed acyclic graph)

I need to store dependencies in a DAG. (We're mapping a new school curriculum at a very fine grained level)
We're using Rails 3.
Considerations
Wider than it is deep
Very large
I estimate 5-10 links per node. As the system grows this will increase.
Many reads, few writes
most common are lookups:
dependencies of first and second degree
searching/verifying dependencies
I know SQL, I'll consider NoSQL.
Looking for pointers to good comparisons of implementation options.
Also interested in what we can start with fast, but will be less painful to transition to something more robust/scalable later.
I found this example of modeling a directed acyclic graph in SQL:
http://www.codeproject.com/KB/database/Modeling_DAGs_on_SQL_DBs.aspx?msg=3051183
I think the upcoming version (beta at the moment) of the Ruby bindings for the graph database Neo4j should be a good fit. It's for use with Rails 3. The underlying data model uses nodes and directed relationships/edges, with key/value-style attributes on both. To scale read-mostly architectures, Neo4j uses a master/slave replication setup.
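To make that data model concrete, here is a rough sketch against the embedded Java API of that Neo4j generation (the store path, node names and the DEPENDS_ON relationship type are made up for illustration; the Ruby bindings wrap the same node/relationship primitives). It stores topics as nodes, prerequisite links as directed relationships, and walks one and two hops out for first- and second-degree dependencies:

    import org.neo4j.graphdb.Direction;
    import org.neo4j.graphdb.DynamicRelationshipType;
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.Relationship;
    import org.neo4j.graphdb.RelationshipType;
    import org.neo4j.graphdb.Transaction;
    import org.neo4j.kernel.EmbeddedGraphDatabase;

    public class CurriculumDag {
        private static final RelationshipType DEPENDS_ON =
                DynamicRelationshipType.withName("DEPENDS_ON");

        public static void main(String[] args) {
            GraphDatabaseService db = new EmbeddedGraphDatabase("target/curriculum-db");

            Node percentages;
            Transaction tx = db.beginTx();
            try {
                // Topics are nodes with key/value properties; prerequisites are directed edges.
                Node fractions = db.createNode();
                fractions.setProperty("name", "Fractions");
                Node ratios = db.createNode();
                ratios.setProperty("name", "Ratios");
                percentages = db.createNode();
                percentages.setProperty("name", "Percentages");

                percentages.createRelationshipTo(ratios, DEPENDS_ON);
                ratios.createRelationshipTo(fractions, DEPENDS_ON);
                tx.success();
            } finally {
                tx.finish();
            }

            // First- and second-degree dependencies: follow outgoing DEPENDS_ON edges
            // one hop out, then one more hop from each of those.
            Transaction readTx = db.beginTx();
            try {
                for (Relationship r1 : percentages.getRelationships(DEPENDS_ON, Direction.OUTGOING)) {
                    Node first = r1.getEndNode();
                    System.out.println("1st degree: " + first.getProperty("name"));
                    for (Relationship r2 : first.getRelationships(DEPENDS_ON, Direction.OUTGOING)) {
                        System.out.println("2nd degree: " + r2.getEndNode().getProperty("name"));
                    }
                }
                readTx.success();
            } finally {
                readTx.finish();
            }
            db.shutdown();
        }
    }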
You could use OrientDB as a graph database. It's highly optimized for relationships, since they are stored as links and not JOINs. Loading a bidirectional graph with 1,000 vertices takes a few milliseconds.
The language binding for Rails is not yet available, but you can use it via HTTP RESTful calls.
You might want to take a look at the acts-as-dag gem.
https://github.com/resgraph/acts-as-dag
Also, some good writing on DAGs with SQL for people who might need some background on this:
http://www.codeproject.com/Articles/22824/A-Model-to-Represent-Directed-Acyclic-Graphs-DAG-o