Extracting additional data with a query in keen.io - keen-io

I have a (simplified) query that looks as follows:
var pageViews = new Keen.Query('count', {
  eventCollection: 'Loaded a Page',
  groupBy: 'company.id'
});
And I use it as follows:
client.run(pageViews, function(result, error) {
  // Do something here
});
This will give me the following JSON to work with:
{
  "result": [
    {
      "company.id": 1,
      "result": 3
    },
    {
      "company.id": 2,
      "result": 11
    },
    {
      "company.id": 3,
      "result": 7
    }
  ]
}
However, I would also like to get back the name of each company, i.e. the company.name property. I looked through keen.io's documentation and could find no way of doing this. Is there a way? Logically speaking, I don't see any reason why it would not be possible; the question is whether it has been implemented.

Grouping by multiple properties will get you what you're looking for:
var pageViews = new Keen.Query('count', {
  eventCollection: 'Loaded a Page',
  groupBy: ['company.id', 'company.name']
});
That being said, it's important to note that Keen is not an entity database. Keen is optimized to store and analyze event data, which is different from entity data. More complex uses of entity data may not perform well with this solution.
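With both properties in the groupBy array, each group in the result then carries both keys. A sketch of the expected shape (the company names here are invented for illustration):
{
  "result": [
    { "company.id": 1, "company.name": "Acme", "result": 3 },
    { "company.id": 2, "company.name": "Globex", "result": 11 },
    { "company.id": 3, "company.name": "Initech", "result": 7 }
  ]
}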

Related

Count of documents 0 after inserting data with Nest

I am using Nest with the following connection settings:
var connectionPool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(connectionPool, new InMemoryConnection());
settings.DisableDirectStreaming(true); // needed to see good looking debug log on insert
settings.DefaultIndex(Index);
Client = new ElasticClient(settings);
With new InMemoryConnection() I hope to be able to query with Nest while changing data inside an Azure Cloud Function.
Strangely, the debug logs look promising. Indexing:
/*
var res = await Client.IndexManyAsync(response.Elements, Index); //
Console.WriteLine(res.DebugInformation);
*/
/*
var res = await Client.IndexAsync(response, i => i.Index(Index)); // Index = "data"
Console.WriteLine(res.DebugInformation); // <--
*/
And when logging directly after the insertions, the count is 0:
// var anyDocs = await Client.CountAsync<OverpassElement>(c => c.Index(Index));
var anyDocs = await Client.CountAsync<OverpassElement>(c => c);
Console.WriteLine("count: " + anyDocs.Count);
...but the entire JSON data is being logged with the insertion.
How come I can't count the documents (so that I can search them in a next step) after insertion?
Actually I get:
Invalid NEST response built from a successful (200) low level call on POST: /data/_doc
And there are 0 Items on the IndexResponse when inserting.
The data is of type Element, and looks like the following part of an array containing 4221 such items:
{
  "type": "relation",
  "id": 8353694,
  "timestamp": "2018-06-04T22:54:27Z",
  "version": 1,
  "changeset": 59551528,
  "user": "asdf2",
  "uid": 1416503,
  "members": [
    {
      "type": "way",
      "ref": 89956942,
      "role": "from"
    },
    {
      "type": "node",
      "ref": 1042756547,
      "role": "via"
    },
    {
      "type": "way",
      "ref": 89956938,
      "role": "to"
    }
  ],
  "tags": {
    "restriction": "no_left_turn",
    "type": "restriction"
  }
},
ElasticSearch has many similarities to a NoSql data store. In this case, "read after write" is not guaranteed by default. When the index API call returns success, it doesn't mean "this document is now available for searching"; it means "ElasticSearch has accepted your document and it will be available for searching shortly". ElasticSearch uses eventual consistency by default.
However, this can be annoying during testing. So ElasticSearch has a Refresh API that essentially just blocks until all documents already indexed are available for searching. I strongly recommend that you do not call this in production; only in test code.
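In NEST, a minimal sketch of both options (assuming NEST 7.x; Index and OverpassElement as in the question, document as a placeholder for the object being indexed):
// Option 1: block the index call until the document is searchable
// (Refresh is the Elasticsearch.Net.Refresh enum; fine for tests).
var indexResponse = await Client.IndexAsync(document, i => i
    .Index(Index)
    .Refresh(Refresh.WaitFor));

// Option 2: explicitly refresh the index before counting.
await Client.Indices.RefreshAsync(Index);
var countResponse = await Client.CountAsync<OverpassElement>(c => c.Index(Index));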
At the risk of reviving an old question, this answer from Russ Cam explains that InMemoryConnection does not actually run the operation against Elasticsearch.
InMemoryConnection doesn't actually send any requests or receive any responses from Elasticsearch; used in conjunction with .SetConnectionStatusHandler() on Connection settings (or .OnRequestCompleted() in NEST 2.x+), it's a convenient way to see the serialized form of requests.
So you can inspect the query that NEST generates from your code but you won't be able to observe the results.
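As a sketch of that debugging setup (assuming NEST 6.x/7.x; DisableDirectStreaming must be enabled for the request bytes to be captured, and Encoding comes from System.Text):
var settings = new ConnectionSettings(connectionPool, new InMemoryConnection())
    .DisableDirectStreaming()
    .OnRequestCompleted(callDetails =>
    {
        // Inspect the serialized request NEST would have sent to Elasticsearch.
        if (callDetails.RequestBodyInBytes != null)
            Console.WriteLine(Encoding.UTF8.GetString(callDetails.RequestBodyInBytes));
    });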
I don't know what Nest is, but I'd bet $100 that if it uses transactional concepts, you may need to commit in order to see the count correctly.

How to get a dictionary in the same order as I get it from JSON in Objective-C

I am parsing JSON and get a set of dictionaries, but parsing automatically changes their order. I just need the same order as in the JSON I am parsing.
NOTE: I want to build functionality that depends on the dictionary order, and I don't want to hardcode it manually, so it won't need to be redone every time; that way it can change dynamically in the future.
Example:
From JSON:
Section:
{
  category: {},
  location: {},
  vehicle_type: {},
  mode_type: {}
}
After converting into NSDictionary:
Section:
{
  vehicle_type: {},
  category: {},
  location: {},
  mode_type: {}
}
Thanks
The order of key/value pairs doesn't matter in JSON, as you access a value directly by its key string. A dictionary does not support ordered indexing of its elements, so you can't rely on their order.
Dictionaries are not ordered collections. Arrays, on the other hand, are.
Note that if you don't have control over the JSON format, you have no guarantees about the order of elements unless you hardcode logic in your app.
Having said that, let's take a deeper look into your case. You have one main object in your JSON, Section. What's more, you have 4 ordered properties of this object, each a specific kind of its own: 0: category; 1: location; 2: vehicle_type; 3: mode_type. So here comes the good part: just change your JSON to look something like this:
Section:
[
  {
    title: "category",
    value: {}
  },
  {
    title: "location",
    value: {}
  },
  {
    title: "vehicle_type",
    value: {}
  },
  {
    title: "mode_type",
    value: {}
  }
]
With this JSON, you just iterate through the ordered elements of Section, check each element's title, and create the corresponding object. This way you can achieve what you are after.
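A minimal Objective-C sketch of that loop (assuming json is the parsed top-level NSDictionary and the restructured format above):
NSArray *sections = json[@"Section"];
for (NSDictionary *entry in sections) {
    NSString *title = entry[@"title"];
    NSDictionary *value = entry[@"value"];
    // NSArray preserves order, so entries arrive exactly as in the JSON.
    NSLog(@"%@ -> %@", title, value);
}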

mongodb c# driver aggregate: How to implement it in VB.NET?

Good morning,
I have a collection with a huge number of documents, and what I want to do is group by a certain field and put the results into a new collection.
I know how to do this in MongoDB, and it works perfectly:
db.collection_1.aggregate([
  { $group : { _id : "$field_1" } },
  { $out : "collection_group" }
])
I've found how to do this in C# in the MongoDB documentation. In that example it executes a sum after grouping; I want to insert the result into a new collection, but I suppose I would only have to change the "$sum" for an "$out" stage:
var collection = _database.GetCollection<BsonDocument>("collection_1");
var aggregate = collection.Aggregate().Group(new BsonDocument { { "_id", "$field_1" },
    { "count", new BsonDocument("$sum", 1) } });
The thing is that I can't find examples in VB.NET. In VB.NET, I've tried for example this to group:
Dim collection_act = db.GetCollection(Of BsonDocument)("collection_1")
collection_act.Group(New GroupArgs("_id", "$field_1"))
But it causes an error because GroupArgs cannot be defined like this.
I tried to use the Aggregate method too, but I have the same problem defining the AggregateArgs, and I can't find how to define it.
The most interesting thing would be to create a pipeline and then add stages to it, because after doing this I have to remove documents based on these groups.
Any help will be very much appreciated.
Thanks.
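For what it's worth, a minimal VB.NET sketch under the assumption of the 2.x .NET driver, whose fluent API exposes Group and Out stages (Out appends $out and runs the pipeline):
Dim collection = db.GetCollection(Of BsonDocument)("collection_1")

' $group by field_1, then $out the grouped results to collection_group.
Dim cursor = collection.Aggregate().
    Group(New BsonDocument From {{"_id", "$field_1"}}).
    Out("collection_group")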

one to many - sequelize update - not removing/inserting children

So I've been struggling for a few hours now with a one-to-many mapping update.
I've got a project which has certain tasks (for example).
I add and remove tasks through the frontend and send the revised object to my backend running sequelize.
Then I tried to update the records as follows:
return models.Project
  .findOne({
    where: { id: projectToUpdate.id },
    include: [models.Task]
  })
  .then(function (ProjectFromDb) {
    return models.sequelize
      .transaction({
        isolationLevel: models.sequelize.Transaction.ISOLATION_LEVELS.READ_COMMITTED
      },
      function (t) {
        return ProjectFromDb
          .update(projectToUpdate, {
            include: [{ model: models.Task }]
          });
      });
  })
  .then(function (result) {
    return output.getSuccessResult(....
  })
  .catch(function (error) {
    return output.getErrorResult(....
  });
But this would only update the Project.
Next I tried to update them with an additional then call:
.then(function (updateResult) {
  return updateResult.setTasks(projectToUpdate.Tasks, { transaction: t })
})
But this would give me the result that it tries to update the Task and set its ProjectId to NULL, which is not possible because that column is non-nullable.
I am currently "manually" adding the tasks and removing them, but this seems like a silly way of using the framework.
Can anyone tell me how to properly make this work with a one-to-many relationship without me calling Task.bulkCreate and Task.destroy?
EDIT TO INCLUDE MODEL
JSON object looks like this:
{
  id: 1,
  projectName: 'nameOfTheProject',
  Tasks: [
    {
      projectId: 1,
      name: 'taskName'
    }
  ]
}
Please try changing the property name projectId to ProjectId on the Tasks objects that are nested in the projectToUpdate object.
Update
Looking at sequelize's source, it seems that the Instance.$save() function (which is called by the Instance.$update() you're using) does not support nested model creation when you're updating - it checks whether the wasNewRecord flag is true before doing it.
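Given that limitation, a hedged sketch of the explicit route inside a single transaction (essentially what the asker is doing manually; it assumes the foreign key on Task is named ProjectId, per the suggestion above):
return models.sequelize.transaction(function (t) {
  return models.Project
    .findOne({ where: { id: projectToUpdate.id }, transaction: t })
    .then(function (project) {
      return project.update(projectToUpdate, { transaction: t });
    })
    .then(function (project) {
      // Replace the children explicitly: drop the old tasks...
      return models.Task
        .destroy({ where: { ProjectId: project.id }, transaction: t })
        .then(function () {
          // ...then recreate the revised set with the FK filled in.
          var tasks = projectToUpdate.Tasks.map(function (task) {
            task.ProjectId = project.id;
            return task;
          });
          return models.Task.bulkCreate(tasks, { transaction: t });
        });
    });
});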

Sproutcore datasources and model relationships

I currently have a Sproutcore app setup with the following relationships on my models:
App.Client = SC.Record.extend({
  name: SC.Record.attr(String),
  brands: SC.Record.toMany('App.Brand', { isMaster: YES, inverse: 'client' })
});
App.Brand = SC.Record.extend({
  name: SC.Record.attr(String),
  client: SC.Record.toOne('App.Client', { isMaster: NO, inverse: 'brands' })
});
When I was working with fixtures my fixture for a client looked like this:
{
  guid: 1,
  name: 'My client',
  brands: [1, 2]
}
And my fixture for a brand looked like this:
{
  guid: 1,
  name: 'My brand',
  client: 1
}
Which all worked fine for me for getting a client's brands and a brand's client.
My question is in regard to how datasources then fit into this and how the server response should be formatted.
Should the data returned from the server mirror exactly the format of the fixtures file? So clients should always contain a brands property containing an array of brand ids? And vice versa.
If I have a source list view which displays Clients with their Brands grouped below them, how would I go about loading that data for the source view with my datasource? Should I make a call to the server to get all the Clients and then follow that up with a call to fetch all the Brands?
Thanks
Mark
The json you return will mostly mirror the fixtures. I recently had pretty much the same question as you, so I built a backend in Grails and a front end in SC, just to explore the store and datasources. My models are:
Scds.Project = SC.Record.extend(
  /** @scope Scds.Project.prototype */ {
  primaryKey: 'id',
  name: SC.Record.attr(String),
  tasks: SC.Record.toMany("Scds.Task", {
    isMaster: YES,
    inverse: 'project'
  })
});
Scds.Task = SC.Record.extend(
  /** @scope Scds.Task.prototype */ {
  name: SC.Record.attr(String),
  project: SC.Record.toOne("Scds.Project", {
    isMaster: NO
  })
});
The json returned for Projects is
[{"id":1,"name":"Project 1","tasks":[1,2,3,4,5]},{"id":2,"name":"Project 2","tasks":[6,7,8]}]
and the json returned for tasks, when I select a Project, is
{"id":1,"name":"task 1"}
Obviously, this is the JSON for one task only. If you look in the projects JSON, you see that I put a "tasks" array with ids in it; that's how the internals know which tasks to get. So to answer your first question: you don't need the id from child to parent, you need the parent to load with all the children, so the JSON does not match the fixtures exactly.
Now, it gets a bit tricky. When I load the app, I do a query to get all the Projects. The store calls the fetch method on the datasource. Here is my implementation.
Scds.PROJECTS_QUERY = SC.Query.local(Scds.Project);
var projects = Scds.store.find(Scds.PROJECTS_QUERY);
...
fetch: function(store, query) {
  console.log('fetch called');
  if (query === Scds.PROJECTS_QUERY) {
    console.log('fetch projects');
    SC.Request.getUrl('scds/project/list').json().
      notify(this, '_projectsLoaded', store, query).
      send();
  } else if (query === Scds.TASKS_QUERY) {
    console.log('tasks query');
  }
  return YES; // return YES if you handled the query
},
_projectsLoaded: function(response, store, query) {
  console.log('projects loaded....');
  if (SC.ok(response)) {
    var recordType = query.get('recordType'),
        records = response.get('body');
    store.loadRecords(recordType, records);
    store.dataSourceDidFetchQuery(query);
    Scds.Statechart.sendEvent('projectsLoaded');
  } else {
    console.log('oops...error loading projects');
    // Tell the store that your server returned an error
    store.dataSourceDidErrorQuery(query, response);
  }
}
This will get the Projects, but not the tasks. Sproutcore knows that as soon as I access the tasks array on a Project, it needs to get them. What it does is call retrieveRecords in the datasource. That method in turn calls retrieveRecord for every id in the tasks array. My retrieveRecord method looks like
retrieveRecord: function(store, storeKey) {
  var id = Scds.store.idFor(storeKey);
  console.log('retrieveRecord called with [storeKey, id] [%#, %#]'.fmt(storeKey, id));
  SC.Request.getUrl('scds/task/get/%#'.fmt(id)).json().
    notify(this, "_didRetrieveRecord", store, storeKey).
    send();
  return YES;
},
_didRetrieveRecord: function(response, store, storeKey) {
  if (SC.ok(response)) {
    console.log('successfully loaded task %#'.fmt(response.get('body')));
    var dataHash = response.get('body');
    store.dataSourceDidComplete(storeKey, dataHash);
  } ...
},
Note that you should use sc-gen to generate your datasource, because it provides a fairly well fleshed-out stub that guides you towards the methods you need to implement. It does not provide a retrieveRecords implementation, but you can provide your own if you don't want to do a single request for each child record you are loading.
Note that you always have options. If I wanted to, I could have created a Tasks query and loaded all the tasks data up front, that way I wouldn't need to go to my server when I clicked a project. So in answer to your second question, it depends. You can either load the brands when you click on the client, or you can load all the data up front, which is probably a good idea if there isn't that much data.
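As a sketch of that up-front variant (assuming a hypothetical scds/task/list endpoint that returns all task hashes; the handler mirrors _projectsLoaded above):
Scds.TASKS_QUERY = SC.Query.local(Scds.Task);

// In fetch(), the tasks branch issues its own request:
} else if (query === Scds.TASKS_QUERY) {
  SC.Request.getUrl('scds/task/list').json().
    notify(this, '_tasksLoaded', store, query).
    send();
}

_tasksLoaded: function(response, store, query) {
  if (SC.ok(response)) {
    // Load every task hash at once; later project clicks need no round trip.
    store.loadRecords(query.get('recordType'), response.get('body'));
    store.dataSourceDidFetchQuery(query);
  } else {
    store.dataSourceDidErrorQuery(query, response);
  }
}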