I have set up a test where I try to PUT an item after fetching it. It fails on the date fields when PUT'ing, though. The date fields below use DS.attr('date').
Versions:
Ember : 1.1.1
Ember Data : 1.0.0-beta.4+canary.c15b8f80
Handlebars : 1.0.0
jQuery : 1.9.1
Here's my code:
BuildingsController
App.BuildingsController = Ember.Controller.extend({
  actions: {
    createBuilding: function() {
      var store = this.get('store');
      store.find('building', 1729).then(function(building) {
        building.set('title', 'Test 123');
        building.save();
      });
    }
  }
});
Data returned from API when calling store.find:
{
  "building": {
    "id": "1729",
    "name": "Test 123",
    "sort": "1",
    "published_at": "2013-09-26 11:00:27",
    "source": "source test",
    "content": "<p>content test<\/p>",
    "excerpt": "<p>excerpt test<\/p>",
    "lat": "62.39039989300704",
    "lon": "17.341790199279785",
    "address": "address",
    "build_start": "2013-09-22",
    "build_end": "2013-09-23",
    "created_at": "2013-09-26 11:00:28",
    "updated_at": "2013-09-26 11:00:28"
  }
}
Data PUT to API:
{
  "address": "address",
  "build_end": "Mon, 23 Sep 2013 00:00:00 GMT",
  "build_start": "Sun, 22 Sep 2013 00:00:00 GMT",
  "content": "<p>content test</p>",
  "created_at": "undefined, NaN undefined NaN NaN:NaN:NaN GMT",
  "excerpt": "<p>excerpt test</p>",
  "lat": 62.39039989300704,
  "lon": 17.341790199279785,
  "name": "Test 123",
  "published_at": "undefined, NaN undefined NaN NaN:NaN:NaN GMT",
  "sort": 1,
  "source": "source test",
  "updated_at": "undefined, NaN undefined NaN NaN:NaN:NaN GMT"
}
You can use a custom attribute type (rather than DS.attr('date')) by registering a custom transform and handling serialization/deserialization manually:
DS.RESTAdapter.registerTransform("mydate", {
  deserialize: function(serialized) {
    // deserialize the date string here
    return serialized;
  },
  serialize: function(deserialized) {
    // serialize the date here
    return deserialized;
  }
});
As described in this answer to What is the best way to modify the date format when ember-data does serialization?
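For illustration, the two transform bodies might be filled in with plain-JavaScript helpers like these. The format string matches the API sample above ("YYYY-MM-DD HH:MM:SS", treated as UTC); the helper names are made up:

```javascript
// Hypothetical helpers for a custom date transform: convert between the
// API's "YYYY-MM-DD HH:MM:SS" strings (treated as UTC) and Date objects.
function deserializeDate(serialized) {
  if (!serialized) { return null; }
  // Insert a "T" and append "Z" so the string parses as ISO 8601 UTC.
  return new Date(serialized.replace(' ', 'T') + 'Z');
}

function serializeDate(date) {
  if (!date) { return null; }
  var pad = function (n) { return n < 10 ? '0' + n : String(n); };
  return date.getUTCFullYear() + '-' +
    pad(date.getUTCMonth() + 1) + '-' +
    pad(date.getUTCDate()) + ' ' +
    pad(date.getUTCHours()) + ':' +
    pad(date.getUTCMinutes()) + ':' +
    pad(date.getUTCSeconds());
}
```

Round-tripping a sample value through these two functions returns the original string, which avoids the "undefined, NaN undefined NaN NaN:NaN:NaN GMT" output seen in the PUT payload.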
I've been working on other projects, and now I've updated Ember from 1.1.1 to 1.1.2, which seems to have magically solved this problem. I'm guessing the update didn't actually have anything to do with my problem, though.
Thanks for taking the time.
// Stefan
I'm adding a list called 'tourlocation' to my Keystone 5 project. In my Mongo database, the tourlocations collection has an object called 'coordinates' with two values, 'lat' and 'long'. Example:
"coordinates" : {
"lat" : 53.343761,
"long" : -6.24953
},
In the previous version of Keystone, I could define the coordinates object of my tourlocation list like this:
coordinates: {
  lat: {
    type: Number,
    noedit: true
  },
  long: {
    type: Number,
    noedit: true
  }
}
Now, unfortunately, when I try to define the list this way, it gives the error: "The 'tourlocation.coordinates' field doesn't specify a valid type. (tourlocation.coordinates.type is undefined)"
Is there any way to represent objects in keystone 5?
@Alex Hughes Your error mentions "type", which you may need to add like this:
keystone.createList('User', {
  fields: {
    // Note the type "Text": even in MongoDB you can choose the type, but it
    // is better to declare it here from the beginning.
    name: { type: Text },
    email: { type: Text },
  },
});
Note that noedit: true is not supported in version 5 of KeystoneJS.
For more info, look at this page: https://www.keystonejs.com/blog/field-types#core-field-types
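Since nested field definitions aren't supported, one workaround is to flatten the object into two top-level fields (coordinatesLat / coordinatesLong here, both hypothetical names) and convert existing documents when importing them. A plain-JavaScript sketch of the conversion:

```javascript
// Flatten a tourlocation document's nested "coordinates" object into the
// two flat fields a Keystone 5 list can declare (hypothetical field names).
function flattenCoordinates(doc) {
  var out = Object.assign({}, doc);
  if (doc.coordinates) {
    out.coordinatesLat = doc.coordinates.lat;
    out.coordinatesLong = doc.coordinates.long;
    delete out.coordinates;
  }
  return out;
}
```

The list would then declare coordinatesLat and coordinatesLong as two separate Float fields instead of one nested object.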
I am not using props.conf, so I guess it is the default behavior.
Below is the single log:
2018-07-19 13:30:40.293 +0000 [http8080] INFO RequestFilter- {
"transaction_id" : "aaaaaaaaawwwwwwww",
"http_method" : "POST",
"date_time" : "2018-07-19 13:30:34.694 +0000",
"requestId" : "20180719-dc7bc01d-b02c-43c8-932b-42af542ccefb"
}
But it is coming in as 2 events:
2018-07-19 13:30:40.293 +0000 [http8080] INFO RequestFilter- {
"transaction_id" : "aaaaaaaaawwwwwwww",
"http_method" : "POST",
And
"date_time" : "2018-07-19 13:30:34.694 +0000",
"requestId" : "20180719-dc7bc01d-b02c-43c8-932b-42af542ccefb"
}
It always breaks at "date_time".
Any suggestions on how I can fix it?
You will need to adjust your props.conf to change the event-breaking logic. By default, Splunk breaks an event whenever it detects a valid timestamp, which suits most log formats.
These settings should break only before the initial row. Note that LINE_BREAKER must contain a capture group marking the text consumed at the break, and the literal dot before the milliseconds needs escaping:
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d{3}\s+\+\d{4}\s+\[[^\]]+\]
SHOULD_LINEMERGE = false
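For context, a minimal props.conf stanza might look like the sketch below. The sourcetype name is a placeholder, and TIME_FORMAT is an assumption based on the sample log line:

```
# Hypothetical sourcetype name; match it to your inputs.conf.
[json_request_filter]
SHOULD_LINEMERGE = false
# The first capture group marks the text consumed at each event break.
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2}\.\d{3}\s+\+\d{4}\s+\[[^\]]+\]
# The timestamp sits at the start of each event.
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N %z
MAX_TIMESTAMP_LOOKAHEAD = 30
```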
I am evaluating Circe and couldn't figure out how to filter arrays to transform a JSON document. I read the guide on its website and the API docs, but still have no clue. Help much appreciated.
Sample data:
{
"Department" : "HR",
"Employees" :[{ "name": "abc", "age": 25 }, {"name":"def", "age" : 30 }]
}
Task:
How can I use a filter on Employees to transform the JSON into another JSON, for example one keeping only employees older than 50?
In case you ask: for some reason I can't filter the data at the source before the JSON is generated.
Thanks
One possible way of doing this:
import io.circe.{Json, ParsingFailure}
import io.circe.parser.parse

val data = """{"Department": "HR", "Employees": [{"name": "abc", "age": 25}, {"name": "def", "age": 30}]}"""

// Keep only the array elements whose "age" is greater than 26.
def ageFilter(j: Json): Json = j.withArray { x =>
  Json.fromValues(x.filter(_.hcursor.downField("age").as[Int].map(_ > 26).getOrElse(false)))
}

val y: Either[ParsingFailure, Json] =
  parse(data).map(_.hcursor.downField("Employees").withFocus(ageFilter).top.get)

println(s"$y")
Using an extraction query (shown URL-decoded for readability):
https://api.keen.io/3.0/projects/xxx/queries/extraction?api_key=xxxx&event_collection=dispatched-orders&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800
it returns:
{
  result: [
    {
      mobile: "13185716746",
      keen: {
        timestamp: "2015-02-10T07:10:07.816Z",
        created_at: "2015-02-10T07:10:08.725Z",
        id: "54d9aed03bc6964a7d311f9e"
      },
      data: {
        itemId: 2130,
        num: 1
      },
      features: {
        communityId: 2000,
        dispatcherId: 39,
        tradeId: 8581
      }
    }
  ]
}
But if I use the same filters in my delete query URL (shown URL-decoded for readability):
https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders?api_key=xxxxxx&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800
it returns:
{
  properties: {
    data.num: "num",
    keen.created_at: "datetime",
    mobile: "string",
    keen.id: "string",
    features.communityId: "num",
    features.dispatcherId: "num",
    keen.timestamp: "datetime",
    features.tradeId: "num",
    data.itemId: "num"
  }
}
Please help me ...
It looks like you are issuing a GET request for the delete command. If you perform a GET on a collection, you get back the schema that Keen has inferred for that collection.
You'll want to issue the above as a DELETE request. Here's the cURL command to do that:
curl -X DELETE 'https://api.keen.io/3.0/projects/xxxxx/events/dispatched-orders?api_key=xxxxxx&filters=[{"property_name":"features.tradeId","operator":"eq","property_value":8581}]&timezone=28800'
Note that the URL is single-quoted so the shell preserves the double quotes in the JSON, and you'll probably need to URL-encode that JSON as you mentioned in your post above!
I have a document as follow:
{
"_id" : ObjectId("5491d65bf315c2726a19ffe0"),
"tweetID" : NumberLong(535063274220687360),
"tweetText" : "19 RT Toronto #SunNewsNetwork: WATCH: When it comes to taxes, regulations, and economic freedom, is Canada more \"American\" than America? http://t.co/D?",
"retweetCount" : 1,
"Added" : ISODate("2014-11-19T04:00:00.000Z"),
"tweetLat" : 0,
"tweetLon" : 0,
"url" : "http://t.co/DH0xj0YBwD ",
"sentiment" : 18
}
Now I want to get all documents like this where Added is between 2014-11-19 and 2014-11-23. Note that there might be no data for some date, for example 2014-11-21, and this is where the problem starts: for such a date I want 0 for the sum of sentiment instead of nothing being returned (I know I can check this in Java, but it is not reasonable). My code is as follows, and it works fine except that for a date with no data it returns nothing instead of 0:
andArray.add(new BasicDBObject("Added", new BasicDBObject("$gte", startDate)));
andArray.add(new BasicDBObject("Added", new BasicDBObject("$lte", endDate)));
DBObject where = new BasicDBObject("$match", new BasicDBObject("$and", andArray));
stages.add(where);

DBObject groupFields = new BasicDBObject("_id", "$Added");
groupFields.put("value", new BasicDBObject("$avg", "$sentiment"));
DBObject groupBy = new BasicDBObject("$group", groupFields);
stages.add(groupBy);

DBObject project = new BasicDBObject("_id", 0);
project.put("value", 1);
project.put("Date", "$_id");
stages.add(new BasicDBObject("$project", project));

DBObject sort = new BasicDBObject("$sort", new BasicDBObject("Date", 1));
stages.add(sort);

AggregationOutput output = collectionG.aggregate(stages);
Now I want the value 0 for any date that is not present in the collection.
For example, consider 2014-11-21 in the following:
[ { "value" : 6.0 , "Date" : { "$date" : "2014-11-19T04:00:00.000Z"}} , { "value" : 20.0 , "Date" : { "$date" : "2014-11-20T04:00:00.000Z"}},{ "value" : 0 , "Date" : { "$date" : "2014-11-21T04:00:00.000Z"}}]
instead of :
[ { "value" : 6.0 , "Date" : { "$date" : "2014-11-19T04:00:00.000Z"}} , { "value" : 20.0 , "Date" : { "$date" : "2014-11-20T04:00:00.000Z"}}]
Is it possible to do that?
Why is checking in Java not reasonable? Is setting the average to 0 for 'nothing' reasonable?
Depending on the context of your problem, one solution is to insert dummy records with 0 sentiment.
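If you do fall back to filling the gaps client-side after the aggregation returns, the idea is simply to walk the requested date range and substitute 0.0 wherever no bucket came back. A sketch in plain Java (no driver types; the class and method names are made up):

```java
import java.time.LocalDate;
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class FillGaps {
    // Return a date-ordered map covering [start, end], using the aggregated
    // value where one exists and 0.0 for dates with no documents.
    static Map<LocalDate, Double> fill(Map<LocalDate, Double> results,
                                       LocalDate start, LocalDate end) {
        Map<LocalDate, Double> out = new TreeMap<>();
        for (LocalDate d = start; !d.isAfter(end); d = d.plusDays(1)) {
            out.put(d, results.getOrDefault(d, 0.0));
        }
        return out;
    }

    public static void main(String[] args) {
        Map<LocalDate, Double> results = new HashMap<>();
        results.put(LocalDate.of(2014, 11, 19), 6.0);
        results.put(LocalDate.of(2014, 11, 20), 20.0);
        // 2014-11-21 has no documents, so it comes back as 0.0.
        System.out.println(fill(results,
                LocalDate.of(2014, 11, 19), LocalDate.of(2014, 11, 21)));
    }
}
```

The aggregation output would be converted into the results map first; the missing 2014-11-21 bucket then appears with value 0.0 as requested.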