Does ArangoDB support stored queries?

For example, I have a node collection A and a links collection B.
I would like, for each element of A, to add a new property that depends on the number of links it has in B.
This operation will run once a day.
Normally, in an RDBMS such as MySQL, I would use a stored procedure for it.
Can something equivalent be done in ArangoDB?

Currently ArangoDB doesn't offer prepared statements, but some users want them.
If you're interested too, subscribe to that GitHub issue by giving it a thumbs up.
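That said, the daily job itself is a single AQL statement you can fire from a cron job or a Foxx service. A sketch, assuming B is an edge collection whose _from points at documents in A, and linkCount is the property you want to add (both names are illustrative):

```aql
FOR a IN A
  LET cnt = LENGTH(
    FOR b IN B
      FILTER b._from == a._id
      RETURN 1
  )
  UPDATE a WITH { linkCount: cnt } IN A
```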

Related

Subscribe to a query

Is there a way in native SQL, in a specific SQL database (e.g. PostgreSQL), or in a NoSQL database to subscribe to a query and receive updates when an entry matches its criteria? For example, I have the query: SELECT * FROM users WHERE birthday = today(). Is it possible to receive an update when an entry matches the criteria, instead of using a polling mechanism? The query can be slightly more complex, because this idea is needed for a solution that sends recurring messages based on user preferences.
The only database I know that has built-in notifications like this is RebirthDB with a feature called "changefeeds":
They allow clients to receive changes on a table, a single document, or even the results from a specific query as they happen. Nearly any ReQL query can be turned into a changefeed.
The only problem is that the database began life as RethinkDB, and the company behind it folded in 2016, leaving it to the open-source community. It's still alive as "RebirthDB" on GitHub with active development, but the documentation is just a copy of the old RethinkDB docs with GitHub notices. They have a website URL, but no website. I hope they can keep it alive: it's a great idea.
https://github.com/RebirthDB/docs
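For reference, the changefeed pattern applied to the question's query looks roughly like this with the Python driver (an untested sketch; the table and field names come from the question, and the driver API may have drifted since the fork):

```python
from rethinkdb import RethinkDB

r = RethinkDB()
conn = r.connect('localhost', 28015)

feed = (r.table('users')
         .filter(r.row['birthday'] == r.now().date())
         .changes()
         .run(conn))

for change in feed:           # blocks, yielding one change per matching write
    user = change['new_val']  # the row that now matches the criteria
```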

How to sort all lists in a specific bin in Aerospike using aql?

I have a few questions about ordered lists in Aerospike:
How can I see in the DB, using aql, whether a list is ordered or not?
Does an ordered list mean it's sorted?
I want to scan the DB and change all lists (in a specific bin) to be ordered. I want to do this using set_type, but I can't seem to make it work. Is that possible? How can I do it?
Thanks
I'm posting my answer from your cross-posted question here https://discuss.aerospike.com/t/list-oprations/5282:
You could scan the namespace with ScanPolicy.includeBinData = false and, for each record digest you get back, use operate() to wrap the following operations into a single transaction:
- ListOperation.setOrder() to ListOrder.ORDERED
- ListOperation.sort() with ListSortFlags.DROP_DUPLICATES
You will only need to run this once to clean up your database.
The ordering type will stick for all future operations. You'd just continue to use the ListWriteFlags.ADD_UNIQUE list policy.
This is for the Java client, but all other clients have these operations and policies in them.
I don't think AQL is the right tool to exploit the full power of lists; perhaps it has not yet been updated to the full list functionality. It is built on top of the C client, and at least AQL version 3.15.2.1, which I checked, does not expose these operations. You might want to write a Java client application instead.
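Since all clients expose these operations, the cleanup can also be sketched with the Python client (an untested sketch; the namespace, set, and bin names are placeholders, and helper names may differ slightly between client versions):

```python
import aerospike
from aerospike_helpers.operations import list_operations as lops

client = aerospike.client({'hosts': [('127.0.0.1', 3000)]}).connect()

def fix_record(record):
    key, _, _ = record
    client.operate(key, [
        # 1) make the list in bin 'mylist' ORDERED from now on
        lops.list_set_order('mylist', aerospike.LIST_ORDER_ORDERED),
        # 2) sort it once, dropping duplicates
        lops.list_sort('mylist', aerospike.LIST_SORT_DROP_DUPLICATES),
    ])

scan = client.scan('test', 'demo')
scan.foreach(fix_record, options={'nobins': True})  # digests only, no bin data
```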

Storing relational data in MongoDB (NoSQL)

I've been trying to get my head around NoSQL, and I do see the benefits to embedding data in documents.
What I can't understand, and hope someone can clear up, is how to store data if it must be relational.
For example.
I have many users, and they are all buying products. Every time a user buys a product, we add it under the user's document in Mongo, so it's embedded and all is great.
The problem I have is when something in reference to that product changes.
Let's say user A buys a car called "Porsche". We add a reference to it under the user's profile. However, in a strange turn of events, Porsche gets purchased by Ferrari.
What do you do now? Update each and every record and change the name from Porsche to Ferrari?
Typically in SQL, we would create three tables: one for users, one for cars (description, model, etc.) and one mapping users to purchases.
Do you do the same thing in Mongo? It seems like if you go down this route, you are trying to make Mongo work the SQL way, which is not what it's intended for.
I can understand how certain data is great for embedding (addresses, contact details, comments, etc.), but what happens when you need to reference data that can, and needs to, change on a regular basis?
I hope this question is clear.
DBRefs/manual references were made specifically to solve this issue. Instead of copying the data into each document and then needing to update every copy when something changes, you can store a reference to another collection. Here is the MongoDB documentation for details.
References in Mongo
Then all you would need to do is update the referenced collection, and the change would be reflected in all downstream locations.
When I used the Mongoose library for Node.js, it actually creates three collections, similar to how you might do it in SQL. You can use ObjectIds as foreign keys and enrich them either on the client side or on the backend. There's still no joining, but you can do an 'in' query for the IDs and then enrich the objects that way; Mongoose can do this automatically by 'populating'.
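A manual-reference sketch with pymongo (untested; the collection and field names are illustrative): each purchase stores only the car's _id, so renaming the car is a single update:

```python
from pymongo import MongoClient

db = MongoClient()['shop']  # illustrative database name

car_id = db.cars.insert_one({'name': 'Porsche', 'model': '911'}).inserted_id
db.purchases.insert_one({'user_id': 'user_a', 'car_id': car_id})

# Porsche is bought by Ferrari: one update, every purchase stays consistent
db.cars.update_one({'_id': car_id}, {'$set': {'name': 'Ferrari'}})

# "join" at read time: fetch the purchase, then dereference the car
purchase = db.purchases.find_one({'user_id': 'user_a'})
car = db.cars.find_one({'_id': purchase['car_id']})
```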

GetOrCreate in RavenDB, or a better alternative?

I have just started using RavenDB on a personal project and so far inserting, updating and querying have all been very easy to implement. However, I have come across a situation where I need a GetOrCreate method and I'm wondering what the best way to achieve this is.
Specifically I am integrating with OpenID and once authentication has taken place the user is redirected to my site. At this point I'd either like to retrieve their user record from Raven (by querying on the ClaimsIdentifier property) or create a new record. The user's ID is currently being set by Raven.
Obviously I can write this in two statements, but without some sort of transaction around the select and the create, I could potentially end up with two user records in the database with the same claims identifier.
Is there any way to achieve this kind of functionality? Perhaps more importantly, do you think I'm going down the wrong path? I'm assuming that even if I could create a transaction, it would make scaling out to multiple servers difficult, and in any case it could add a performance bottleneck.
Would a better approach be to keep the query and create operations as separate statements, check for duplicates when the user is retrieved, and merge at that point? Or do something similar on a scheduled task?
I can't help but feel I'm missing something obvious here so any advice on this problem would be greatly appreciated.
Note: while scaling out to multiple servers may seem unnecessary for a personal project, I'm using it as an evaluation of Raven before using it at work.
Dan, although RavenDB has support for transactions, I wouldn't go that way in your case. Instead, you could just use the user's ClaimsIdentifier as the user document's id, because claims identifiers are guaranteed to be unique.
Alternatively, you can stay with user ids generated by Raven (HiLo, btw) and use the new UniqueConstraintsBundle, which lets you mark certain properties as unique. Internally it creates an additional document whose id is the value of your unique property.
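The natural-key approach makes get-or-create race-free without an explicit transaction, because two concurrent creators collide on the same document id instead of producing duplicates. In pseudocode mirroring the Raven session API:

```
id = "users/" + claimsIdentifier      // natural key, unique by construction
user = session.load(id)
if user == null:
    user = new User(...)
    session.store(user, id)           // concurrent creators write the same id
session.saveChanges()
```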

Ajax autocomplete extender populated from SQL

OK, first let me state that I have never used this control and this is also my first attempt at using a web service.
My dilemma is as follows: I need to query a database to get back a certain column and use it for my autocomplete. Obviously I don't want the query to run every time a user types another word in the textbox, so my best guess is to run the query once and then use that dataset, array, list, or whatever to filter for the autocomplete extender...
I am kind of lost. Any suggestions?
Why not keep track of the query executed by the user in a session variable and use that to filter any further results?
The trick to preventing the database from overloading, I think, is simply to limit how frequently the autocompleter is allowed to update; something like once per two seconds seems reasonable to me.
What I would do is this: store the current list returned by the query for word A server-side and tie it to a session variable. This should be basically the entire list, I would think. Then, for each new word typed, as long as the original word A still applies, you can filter the session data and return the filtered results without having to query again. So basically, only query again when word A changes.
I'm using "session" in a PHP sense, you may be using a different language with different terminology, but the concept should be the same.
This depends on how transactional your data store is. If you are looking up US states (a data collection that realistically would not change through the life of the application), then I would cache either a System.Collections.Generic.List<> or, if you wanted, a DataTable.
You could easily set up a cache of the data you wish to query, made dependent upon an XML file or database, so that your extender always queries the data object cast from the cache, and the cache object is only updated when the data source changes.
RAM is cheap and SQL is harder to scale than IIS, so cache everything in memory:
- your entire data source, if it is not too large to load in reasonable time,
- precalculated data,
- autocomplete web service responses.
Depending on the desired autocomplete behavior and performance, you may want to precalculate data and create redundant structures optimized for reading. Make use of structures like SortedList (when you need something like select top x ... where z like @query + '%'), Hashtable, ...
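The SortedList idea above can be sketched in a few lines; here is a minimal in-memory version in Python (names are illustrative), using a sorted list plus binary search to serve the like @query + '%' lookup: binary search finds the first candidate in O(log n), then we walk forward while the prefix still matches.

```python
import bisect

def prefix_matches(sorted_words, prefix, limit=10):
    """Return up to `limit` entries of `sorted_words` starting with `prefix`.

    `sorted_words` must already be sorted; this is the in-memory analogue of
    select top `limit` ... where word like prefix + '%'.
    """
    out = []
    i = bisect.bisect_left(sorted_words, prefix)  # first candidate >= prefix
    while i < len(sorted_words) and len(out) < limit:
        if not sorted_words[i].startswith(prefix):
            break  # past the last word sharing the prefix
        out.append(sorted_words[i])
        i += 1
    return out

words = sorted(["ferrari", "porsche", "portal", "ford", "fiat"])
print(prefix_matches(words, "po"))  # ['porsche', 'portal']
```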
While caching everything is certainly a good idea, your question about which data structure to use is an issue that wasn't fully answered here.
The best data structure for an autocomplete extender is a Trie.
You can find a good .NET article and code here.
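A minimal trie sketch in Python (illustrative, not the linked article's code): insert builds one node per character, and complete walks down to the prefix node, then collects every word below it depth-first.

```python
class Trie:
    def __init__(self):
        self.children = {}   # char -> Trie
        self.is_word = False

    def insert(self, word):
        node = self
        for ch in word:
            node = node.children.setdefault(ch, Trie())
        node.is_word = True

    def complete(self, prefix):
        # walk down to the node for `prefix`; no node means no matches
        node = self
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        # collect every word under that node, in lexicographic order
        results = []
        def collect(n, path):
            if n.is_word:
                results.append(path)
            for ch, child in sorted(n.children.items()):
                collect(child, path + ch)
        collect(node, prefix)
        return results

trie = Trie()
for w in ("porsche", "portal", "ferrari"):
    trie.insert(w)
print(trie.complete("po"))  # ['porsche', 'portal']
```

Lookups cost O(length of prefix) plus the size of the result set, independent of how many words are stored, which is why a trie suits autocomplete well.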