TinkerPop: Adding a Vertex - Graph API vs. Traversal API

Background:
In one of the SO posts it is recommended to use the Traversal API rather than the Graph API for mutations. So I tried out some tests and found that the Graph API seemed to be faster. I do totally believe the advice, but I am trying to understand how it is better.
I did try googling but did not find a similar post.
Testing:
Query 1: Executed in 0.19734525680541992 seconds
g.addV('Test').property('title1', 'abc').property('title2', 'abc')
Query 2: Executed in 0.13838958740234375 seconds
graph.addVertex(label, "Test", "title1", "abc", "title2", "abc")
Question:
Which one is better, and why?
If both are the same, then why the performance difference?

The Graph API is meant for graph providers and the Traversal API (which is really the Gremlin language) is meant for users. You definitely reduce the portability of your code by using the Graph API. The "server graphs" out there, like Amazon Neptune, DSE Graph, CosmosDB, etc., present environments that don't give you access to the Graph API, and therefore you would never be able to switch to those if you felt like doing so. You also start to build your application around two APIs, thus creating a non-unified approach to your development (i.e. in some cases you will pass around a Graph object for the Graph API and in some cases a GraphTraversalSource for the Traversal API).
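To make the portability point concrete, here is a minimal gremlin-python sketch of the Traversal API approach, assuming a hypothetical server at localhost:8182; the same traversal code runs against a local TinkerGraph or a remote "server graph" that never exposes a Graph object:

# Minimal gremlin-python sketch (the server URL is a hypothetical placeholder).
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
g = traversal().withRemote(conn)

# Same mutation as Query 1 in the question, expressed purely through the traversal source.
v = g.addV('Test').property('title1', 'abc').property('title2', 'abc').next()
print(v)
conn.close()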
I don't know how you executed your tests, but it doesn't surprise me much that you see a small difference in performance in micro-benchmarks. There is some cost to the Traversal API, but TinkerPop continues to improve in that area - consider the recently closed TINKERPOP-1950 as an example. I don't know for sure that it will help for your specific benchmark, as benchmarks are tricky things, but the point is that we haven't stopped trying to optimize in that area.
Finally, if discussions in the TinkerPop community continue in the direction they have been going for the past year, I would fully expect to see the Graph API disappear in TinkerPop 4.x. There is no timeline for this release and it is only in the discussion phase, but I would imagine that if you intend for your application to live for many years to come, this information might be of interest to you.

Related

Curious how to use some basic machine learning in a web application

A co-worker and I had an idea to create a little web game where a user enters a chunk of data about themselves, and then the application would write text for them that sounds like them in certain structures. (Trying to leave the idea a little vague.) We are both new to ML and thought this could be a fun first dive.
We have a decent bit of background in PHP, JavaScript (FE and Node), Ruby, and a little bit of other languages, and have had an interest in learning Python for ML. Curious whether you can run a cost-efficient ML library for text well within a web app, given that most servers lack GPUs?
Perhaps you have to pay for one of the cloud-based systems, but I wanted to find the best entry point for this idea without racking up too much cost. (So far I have been reading about running PyTorch or TensorFlow, but it sounds like you lose a lot of efficiency running on CPUs.)
Thank you!
(My other thought is doing it via an iOS app and trying Apple's ML setup.)
It sounds like you are looking for something like TensorFlow.js.
Yes - before jumping into training something with Deep Learning (which might even be unnecessary for your purpose), try to build a nice and simple baseline for this.
Before Deep Learning (just a few years ago), people did similar tasks using n-gram-based language models: https://web.stanford.edu/~jurafsky/slp3/3.pdf
Essentially, you try to predict the next few words probabilistically given a small context of n words (typically n is small, like 5 or 6); see the sketch at the end of this answer.
This should be a lot of fun to work out and will certainly do quite well with a small amount of data. Also, such a model will run blazingly fast, so you don't have to worry about GPUs and compute.
To improve on these results with Deep Learning, you'll need to collect a ton of data first, and it will take work to get it running fast on a web-based platform.
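A minimal Python sketch of the n-gram idea described above (the toy corpus and function names are made up for illustration, not from any particular library):

import random
from collections import defaultdict, Counter

def train_ngram(corpus, n=3):
    # Map each (n-1)-word context to counts of the word that follows it.
    model = defaultdict(Counter)
    tokens = corpus.lower().split()
    for i in range(len(tokens) - n + 1):
        context = tuple(tokens[i:i + n - 1])
        model[context][tokens[i + n - 1]] += 1
    return model

def predict_next(model, context):
    # Sample the next word given the last n-1 words; None if the context was never seen.
    counts = model.get(tuple(w.lower() for w in context))
    if not counts:
        return None
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Usage: train on the user's text, then generate word by word.
model = train_ngram("the cat sat on the mat and the cat slept on the mat")
print(predict_next(model, ["the", "cat"]))  # e.g. 'sat' or 'slept'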

Why aren't TripleStores implemented as native graph stores, the way Property-Graph stores are?

SPARQL-based stores, or put another way, TripleStores, are known to be less efficient than property graph stores, on top of not being able to be distributed while maintaining performance the way property graphs can.
I understand that there are a lot of things at stake here, such as inferencing and whatnot. Putting distribution and inferencing aside, where we could limit ourselves to RDFS (which can be fully captured via SPARQL), I am wondering why that is.
More specifically, why is the storage the issue? What is stopping a SPARQL-based store from storing data the way a property graph store does, and performing traversals instead of massive join queries? Can't SPARQL simply be translated to Gremlin steps, for instance? What is the limitation there? Can't the joins be avoided?
My assumption is that if SPARQL can be translated into efficient traversal steps, and data is stored the way property graphs store it, such as JanusGraph does (https://docs.janusgraph.org/latest/data-model.html), then the performance gap would be bridged while maintaining some inference such as RDFS.
This being said, SPARQL is not Turing-complete of course, but at least for what it does, it would do it fast and possibly at scale as well. The goal in my view is not to compete, but to benefit from SPARQL's ease of use and to use a traversal language like Gremlin for things that really require it, e.g. OLAP.
Is there any project in that direction? Has Apache Jena considered any of this?
I saw that Graql of Grakn seems to be going down that road for the reasons I explain above, so what's stopping the TripleStore community?
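To make the "translate SPARQL to Gremlin steps" idea concrete, here is a hand-written illustration in gremlin-python (this is not the output of any actual translator such as Gremlinator, and the labels, property keys, and server URL are made up):

from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# SPARQL, as a join over triple patterns:
#   SELECT ?friendName WHERE {
#     ?p  rdf:type  :Person .
#     ?p  :name     "Alice" .
#     ?p  :knows    ?f .
#     ?f  :name     ?friendName . }

g = traversal().withRemote(DriverRemoteConnection('ws://localhost:8182/gremlin', 'g'))

# Roughly the same pattern as a traversal (pointer chasing instead of set joins):
friend_names = (g.V().hasLabel('Person').has('name', 'Alice')
                 .out('knows').values('name').toList())
print(friend_names)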
@Michael, I am happy that you stepped in, as you definitely know more than me on this :). I am on a learning journey at this point. At your request, here is one of the papers that inspired my understanding:
arxiv.org/abs/1801.02911 (SPARQL querying of Property Graphs using Gremlin Traversals)
I quote them:
"We present a comprehensive empirical evaluation of Gremlinator and
demonstrate its validity and applicability by executing SPARQL queries
on top of the leading graph stores Neo4J, Sparksee and Apache
TinkerGraph and compare the performance with the RDF stores Virtuoso,
4Store and JenaTDB. Our evaluation demonstrates the substantial
performance gain obtained by the Gremlin counterparts of the SPARQL
queries, especially for star-shaped and complex queries."
They explain, however, that things depend somewhat on the type of query.
Or, as another answer on Stack Overflow put it, Comparison of Relational Databases and Graph Databases would also help in understanding the issue between sets and paths. My understanding is that TripleStores work with sets too. This being said, I am definitely not aware of all the optimization techniques implemented in TripleStores lately, and I saw several papers explaining techniques to significantly prune set join operations.
On distribution it is more of a gut feeling. For instance, doing join operations in a distributed fashion sounds very, very expensive to me. I don't have the papers and my research is not exhaustive on the matter, but from what I have read (and I will have to dig into my Evernote :) to back it up), that's the fundamental problem with distribution. Automated smart sharding does not seem to alleviate the issue here.
@Michael, this is a very, very complex subject. I'm definitely on the journey, and that's why I am using Stack Overflow to guide my research. You probably have an idea as to why. So feel free to provide pointers indeed.
This being said, I am not saying that there is a problem with RDF and that Property Graphs are better. I am saying that, when it comes to graph traversal, there are ways of implementing a backend that make it fast. The data model is not the issue here; the data structure used to support the traversal is the issue. The second thing I am saying is that the choice of query language seems to influence how the "traversal" is performed, and hence the data structure that is used to back the data model.
That's my understanding so far, and yes, I do understand that there are a lot of other factors at play; feel free to enumerate some of them to guide my journey.
In short, my question comes down to: is it possible to have RDF stores backed by a so-called native graph storage and then implement SPARQL in terms of traversal steps rather than joins over sets as per its algebra? Wouldn't that make things a bit faster? It seems to me that this is somewhat the approach taken by https://github.com/graknlabs/grakn, which is primarily backed by JanusGraph for graph-like storage. Although it is not RDF, Graql is the same idea as having RDFS++ plus SPARQL. They claim to just do it better, about which I have my reservations, but that's not the fundamental question of this thread. The bottom line is that they back knowledge representation with the information-retrieval (path traversal) and storage approach that Property Graphs championed. Let me be clear on this: I am not saying that native graph storage is the property of property graphs. It is just, in my mind, a storage approach optimized to store graph structure where information retrieval involves (path) traversal: https://docs.janusgraph.org/latest/data-model.html.
First, I'd love to see the references that back up your claim that RDF-based systems are inherently less efficient than property graph ones, because frankly it's a nonsensical claim. Further, there have been distributed, and I'm assuming you mean scale-out, RDF stores, so the claim that they are not able to be distributed is simply incorrect.
The Property Graph model, and Gremlin, can easily be implemented on top of an RDF-based system. This has been done at least twice to my knowledge, and in one of those implementations reasoning was supported at the Gremlin/Property Graph layer. So you don't need to be a Property Graph based system to support that model. There are a myriad of reasons why systems, RDF and Property Graph alike, make specific implementation choices, from storage to execution and beyond, and those choices are guided in part by the "native" model, the technology chosen for implementation, and, perhaps most importantly, the use cases for the system and the problems it aims to solve.
Further, it's unclear what you recommend the authors of RDF-based systems actually do; are you suggesting scale-out is beneficial? Are you stating that your preference for the Property Graph model should be taken as gospel, such that RDF-based systems give up and switch data models? Do you want Property Graph systems to retrofit RDFS?
Finally, to the initial question you asked, I think you have it exactly backwards; the Property Graph model is a hybrid graph model mixing elements of graph and key-value models, whereas the RDF model is a pure, i.e. native, graph model. Gremlin will be adopting the RDF model, albeit with syntactic sugar around what in the RDF world is called reification, but to everyone else, edge properties. So in a world where your exemplar of the Property Graph model is abandoning said model, I'm not sure what more to tell you, other than that you should do a bit more background research.

CNTK: Python vs C# API for model consumption

We have trained a model using CNTK. We are building a service that is going to load this model and respond to requests to classify sentences. What is the best API to use regarding performance? We would prefer to build a C# service as in https://github.com/Microsoft/CNTK/tree/master/Examples/Evaluation/CSEvalClient, but alternatively we are considering building a Python service that is going to load the model in Python.
Do you have any recommendations towards one or the other approach (regarding which API is faster, more actively maintained, or other parameters you can think of)? The next step would be to set up an experiment measuring the performance of both API calls, but we were wondering if there is some prior knowledge here that could help us decide.
Thank you
Both APIs are well developed/maintained. For text data I would go with the C# API.
In C#, the main focus is fast and easy evaluation, and for text, loading the data is straightforward.
The Python API is good for development/training of models, and at this time not much attention has been paid to evaluation. Furthermore, because of the wealth of packages, loading data in exotic formats is easier in Python than in C#.
The new C# Eval API based on CNTKLibrary will be available very soon (the first beta is probably next week). This API has functional parity with the C++ and Python APIs regarding evaluation.
This API supports using multiple threads to serve multiple evaluation requests in parallel, and even better, the model parameters of the same loaded model are shared between these threads, which will significantly reduce memory usage in a service environment.
We also have a tutorial about how to use the Eval API in an ASP.NET environment. It still refers to EvalDLL evaluation, but it applies to the new C# API too. The document will be updated after the new C# API is released.
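For comparison, a minimal sketch of the Python evaluation path mentioned in the question (the model filename, input shape handling, and vectorization are hypothetical assumptions, not taken from the CNTK examples):

import numpy as np
import cntk as C

model = C.load_model("sentence_classifier.dnn")   # trained model file (hypothetical name)
features = model.arguments[0]                     # the model's input variable

# One already-vectorized sentence; the shape must match the model's input.
sample = np.zeros(features.shape, dtype=np.float32)

scores = model.eval({features: [sample]})         # forward pass only, no training machinery
predicted_class = int(np.argmax(scores))
print(predicted_class)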

Scaling CakePHP Version 2.3.0

I'm beginning a new project using CakePHP. I like the "auto-magic" features, and I think it's a good fit for the project. I'm wondering about the potential to scale CakePHP to several million IP hits a day and hundreds of thousands of database writes and reads a day. Also about 50,000 to 500,000 users, often with 3,000 using the site concurrently. I'm making use of heavy stored procedures to offset this, and I'm using several servers, including a load balancer.
I'm wondering about the computational cost of some of the auto-magic and how well Cake is able to handle session requests that make many DB hits. Has anyone had success with Cake running on a single server-array setup with this level of traffic? I'm not using the cloud or a distributed database (yet). I'm really worried about potential bottlenecks with using this framework. I'm interested in advice from anyone who has worked with Cake in production. I've researched, but I would love a second opinion. Thank you for your time.
This is not a problem, but optimization is up to you.
There are different cache methods available you can implement: Memcache, Redis, full-page caching... All of that is supported by Cake already. What you cache and where is up to you.
For searching, you could try Elasticsearch to speed things up.
There are before-dispatch filters to bypass controller instantiation (you might want to do that in special cases; check the asset filter for example).
Use nginx, not Apache.
Also, I would not start by over-optimizing and over-thinking this before any code is written. Start well, think about caching, but when you come across bottlenecks, analyse and fix them. Otherwise you'll waste a lot of time on over-optimization before you have even written anything that works.
Cake itself is very fast. Just to prove the bullshit factor of the fancy benchmarks some frameworks do, we did one using a dispatcher filter to "optimize" it and even beat Yii, which seems to be pretty eager to show how fast it is. But benchmarks are pointless, especially in a huge project where so many human-made failures can be introduced.

Representing a DAG (directed acyclic graph)

I need to store dependencies in a DAG. (We're mapping a new school curriculum at a very fine-grained level.)
We're using rails 3
Considerations
Wider than it is deep
Very large
I estimate 5-10 links per node. As the system grows this will increase.
Many reads, few writes
most common are lookups:
dependencies of first and second degree
searching/verifying dependencies
I know SQL, I'll consider NoSQL.
Looking for pointers to good comparisons of implementation options.
Also interested in what we can start with quickly but that will be less painful to transition later to something more robust/scalable.
I found this example of modeling a directed acyclic graph in SQL:
http://www.codeproject.com/KB/database/Modeling_DAGs_on_SQL_DBs.aspx?msg=3051183
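For what it's worth, a minimal Python sketch of the first- and second-degree lookups described above, using a plain in-memory adjacency list (the node names are made up, and this is independent of whichever store you pick):

# Each node maps to its direct (first-degree) dependencies.
CURRICULUM = {
    "fractions": ["division", "multiplication"],
    "division": ["multiplication"],
    "multiplication": ["addition"],
    "addition": [],
}

def first_degree(node):
    # Direct dependencies of a node.
    return set(CURRICULUM.get(node, []))

def second_degree(node):
    # Dependencies of the node's direct dependencies.
    return {dep for d in first_degree(node) for dep in CURRICULUM.get(d, [])}

def depends_on(node, target, seen=None):
    # Verify whether `target` is reachable from `node` (transitive dependency).
    seen = seen if seen is not None else set()
    if node in seen:
        return False
    seen.add(node)
    deps = CURRICULUM.get(node, [])
    return target in deps or any(depends_on(d, target, seen) for d in deps)

print(first_degree("fractions"))            # {'division', 'multiplication'}
print(second_degree("fractions"))           # {'multiplication', 'addition'}
print(depends_on("fractions", "addition"))  # True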
I think the upcoming version (beta at the moment) of the Ruby bindings for the graph database Neo4j should be a good fit. It's for use with Rails 3. The underlying data model uses nodes and directed relationships/edges with key/value style attributes on both. To scale read-mostly architectures Neo4j uses a master/slave replication setup.
You could use OrientDB as a graph database. It's highly optimized for relationships, since they are stored as links and not JOINs. Loading a bidirectional graph with 1,000 vertices takes a few milliseconds.
The language binding for Rails is not yet available, but you can use it with HTTP RESTful calls.
You might want to take a look at the acts-as-dag gem.
https://github.com/resgraph/acts-as-dag
Also, some good writing on DAGs in SQL for people who might need some background on this:
http://www.codeproject.com/Articles/22824/A-Model-to-Represent-Directed-Acyclic-Graphs-DAG-o