Transform JSON to work with ember-data

When I visit my test API, it returns JSON in the following format to my Ember application:
{
  "1": {
    "id": "1",
    "name": "test1",
    "amount": "12,90"
  },
  "2": {
    "id": "2",
    "name": "test2",
    "amount": "9,30"
  }
}
My model for the data looks like this:
import DS from 'ember-data';

var budget = DS.Model.extend({
  amount: DS.attr('string'),
  name: DS.attr('string')
});

export default budget;
With the given JSON, my hbs template renders nothing, as no model is set by Ember Data.
The format I would expect looks like the following example:
[
  {
    "id": "1",
    "name": "test1",
    "amount": "12,90"
  },
  {
    "id": "2",
    "name": "test2",
    "amount": "9,30"
  }
]
Is there a way to transform the JSON above into the JSON I expect? I am using the Ember App Kit for my application. For the API I want to use the Slim PHP framework with NotORM.

Ember Data expects the JSON returned by your API to be in this format for a collection of budgets:
{
  "budgets": [
    {
      "id": "1",
      "name": "test1",
      "amount": "12,90"
    },
    {
      "id": "2",
      "name": "test2",
      "amount": "9,30"
    }
  ]
}
And for a single budget, it expects:
{
  "budget": { "id": 2, "name": "test", "amount": "9,30" }
}
If you want to override this, you can create your own extractSingle and extract methods in the BudgetSerializer. You can find more info in the DS.RESTSerializer documentation.
App.BudgetSerializer = DS.RESTSerializer.extend({
  // First, restructure the top-level so it's organized by type
  extractSingle: function(store, type, payload, id, requestType) {
    // do your transformation
    return this._super(store, type, payload, id, requestType);
  }
});
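For the payload shown in the question, a minimal sketch of such a transformation might look like the following. It assumes the API really does return a single object keyed by record id, and it uses the extractArray hook from the same serializer API the answer refers to; adjust the details to your actual payload.
App.BudgetSerializer = DS.RESTSerializer.extend({
  // Sketch: collect the records keyed by id ("1", "2", ...) into an array
  // and wrap them under the "budgets" root key Ember Data expects.
  extractArray: function(store, type, payload) {
    var budgets = Object.keys(payload).map(function(key) {
      return payload[key];
    });
    return this._super(store, type, { budgets: budgets });
  },

  // Wrap a single record under the "budget" root key.
  extractSingle: function(store, type, payload, id, requestType) {
    return this._super(store, type, { budget: payload }, id, requestType);
  }
});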

Related

How to load nested JSON data into a single column in Druid

I am trying to load nested JSON data into Apache Druid.
Data:
{
  "a": "a_data",
  "b": "b_data",
  "c_blob_Column": {"aaaa"{"k":"sample"{"c":"sample2"}}}
}
Spec:
{
  "type": "kafka",
  "dataSchema": {
    "dataSource": "blob",
    "parser": {
      "type": "string",
      "parseSpec": {
        "format": "json",
        "dimensionsSpec": {
          "dimensions": ["a", "b", "c_blob_Column"]
        },
        "timestampSpec": {
          "column": "timestamp",
          "format": "iso"
        }
      }
    },
    "metricsSpec": [],
    "granularitySpec": {
      "type": "uniform",
      "segmentGranularity": "DAY",
      "queryGranularity": "none",
      "rollup": false
    }
  },
  "ioConfig": {
    "topic": "blob_topic",
    "consumerProperties": {
      "bootstrap.servers": "<local server>"
    },
    "appendToExisting": false,
    "useEarliestOffset": true,
    "taskDuration": "PT15M"
  },
  "tuningConfig": {
    "type": "kafka",
    "maxRowsPerSegment": 5000000,
    "maxRowsInMemory": 25000
  }
}
Output columns:
a,b,c_blob_Column,__time
I am able to load the data, but the issue is that in the c_blob_Column column the data does not come through as JSON. Could someone please help me figure out how to load the JSON blob data?
You can use a jq expression in the flattenSpec:
"flattenSpec": {
"fields": [
{
"type": "jq",
"name": "c_blob_Column",
"expr": ".c_blob_Column | tojson"
}
]
}
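The flattenSpec belongs inside the parseSpec of the ingestion spec. As a rough sketch based on the spec in the question (only the flattenSpec is added, everything else is unchanged), the relevant part would look something like this:
"parseSpec": {
  "format": "json",
  "flattenSpec": {
    "fields": [
      { "type": "jq", "name": "c_blob_Column", "expr": ".c_blob_Column | tojson" }
    ]
  },
  "dimensionsSpec": {
    "dimensions": ["a", "b", "c_blob_Column"]
  },
  "timestampSpec": {
    "column": "timestamp",
    "format": "iso"
  }
}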

JSON Schema required validation

I have a JSON schema where all values are required. For example:
....
{
  "properties": {
    "minimumDelay": {
      "type": "number"
    },
    "length": {
      "type": "number"
    }
  },
  "required": ["minimumDelay", "length"]
}
Here the JSON data is valid only if I enter both the minimumDelay and length values.
But my requirement is that the JSON data must be valid when I enter only one of the values (like an XOR). How must my schema be modified to achieve this?
In JSON Schema, the XOR operator is oneOf.
{
  "properties": {
    "minimumDelay": {
      "type": "number"
    },
    "length": {
      "type": "number"
    }
  },
  "oneOf": [
    { "required": ["minimumDelay"] },
    { "required": ["length"] }
  ]
}
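To make the behaviour concrete, here is a quick validation sketch using the Ajv validator (the choice of Ajv is just an assumption; any draft-4 compatible validator behaves the same way). A document with exactly one of the two properties passes; one with both, or neither, fails oneOf.
var Ajv = require("ajv");
var ajv = new Ajv();

var schema = {
  "properties": {
    "minimumDelay": { "type": "number" },
    "length": { "type": "number" }
  },
  "oneOf": [
    { "required": ["minimumDelay"] },
    { "required": ["length"] }
  ]
};

var validate = ajv.compile(schema);

console.log(validate({ minimumDelay: 5 }));             // true:  exactly one of the two
console.log(validate({ length: 10 }));                  // true:  exactly one of the two
console.log(validate({ minimumDelay: 5, length: 10 })); // false: both subschemas match
console.log(validate({}));                              // false: neither subschema matches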

How does DataTables determine column types

I have a dynamic table enhanced by jQuery DataTables, which displays a custom object similar to this example.
JSON:
{
  "data": [
    {
      "name": "Tiger Nixon",
      "position": "System Architect",
      "salary": "$320,800",
      "start_date": {
        "display": "SomeString",
        "timestamp": 1303686000
      },
      "office": "Edinburgh",
      "extn": "5421"
    },
    // ... skipped ...
  ]
}
JavaScript:
$(document).ready(function() {
  $('#example').DataTable({
    ajax: "data/orthogonal.txt",
    columns: [
      { data: "name" },
      { data: "position" },
      { data: "office" },
      { data: "extn" },
      {
        data: {
          _: "start_date.display",
          sort: "start_date.timestamp"
        }
      },
      { data: "salary" }
    ]
  });
});
The difference is that I dynamically build the columns configuration, because the columns can be in any order and other columns can be added to or removed from the list. For this example (my case is very similar) the problem is that, for some reason, the timestamp property is sorted as a string instead of as a number.
I discovered that after setting the column "type" to "number" the ordering works perfectly. I presume that DataTables is auto-detecting the column as "string" for some reason (maybe because the display element is a string).
How does DataTables set the type of a column when it is not explicitly declared?
Edit 1
I made some sample code to show the problem:
http://jsfiddle.net/Teles/agrLjd2n/16/
jQuery DataTables has a built-in mechanism for type detection. There are multiple predefined functions for various types, with a fallback to the string data type.
It's also possible to use third-party plug-ins or write your own.
There are multiple ways to specify the data type; below are just a few.
SOLUTION 1
Use the type option.
$(document).ready(function() {
  $('#example').DataTable({
    ajax: "data/orthogonal.txt",
    columns: [
      { data: "name" },
      { data: "position" },
      { data: "office" },
      { data: "extn" },
      { data: "start_date.display", type: "date" },
      { data: "salary" }
    ]
  });
});
SOLUTION 2
Use the returned JSON data for type detection; see columns.data for more information.
$(document).ready(function() {
  $('#example').DataTable({
    ajax: "data/orthogonal.txt",
    columns: [
      { data: "name" },
      { data: "position" },
      { data: "office" },
      { data: "extn" },
      {
        data: {
          _: "start_date.display",
          sort: "start_date.timestamp",
          type: "start_date.timestamp"
        }
      },
      { data: "salary" }
    ]
  });
});
DataTables always checks the "type" property of the column's "data" object to auto-detect the type of the column; if no "type" property is specified, it checks the default value "_".
So if you want DataTables to auto-detect the column type by checking the type of your "sort" property, you should set the "type" property of "data" equal to your "sort" value.
Here is some sample code with different approaches to achieve what I was trying to do. Thanks @Gyrocode.com and @davidkonrad.
var Cell = function(display, value) {
  this.display = display;
  this.value = value;
};

$(document).ready(function() {
  var cells = [
    new Cell("120 (10%)", 120),
    new Cell("60 (5%)", 60),
    new Cell("30 (2.5%)", 30)
  ];

  $('#example').DataTable({
    data: cells,
    columns: [
      {
        title: "Column NOT OK",
        data: {
          _: "display",
          sort: "value"
        }
      }, {
        type: "num",
        title: "Column Ok setting column type",
        data: {
          _: "display",
          sort: "value"
        }
      }, {
        title: "Column Ok changing default value",
        data: {
          _: "value",
          display: "display",
          filter: "display"
        }
      }, {
        title: "Column Ok setting data type",
        data: {
          _: "display",
          sort: "value",
          type: "value"
        }
      }, {
        type: "num",
        title: "Column Not OK",
        data: "display"
      }
    ]
  });
});

Make a new array from a nested object using Lodash

Here is my data:
[
  {
    "properties": {
      "key": {
        "data": "companya data",
        "company": "Company A"
      }
    },
    "uniqueId": 1
  },
  {
    "properties": {
      "key": {
        "data": "companyb data",
        "company": "Company B"
      }
    },
    "uniqueId": 2
  },
  {
    "properties": {
      "key": {
        "data": "companyc data",
        "company": "Company C"
      }
    },
    "uniqueId": 3
  }
]
The format I need for my typeahead directive is below. I was trying to figure it out from the other post I made but still couldn't make it work. The simplest approach seems to be to turn the nested collection into a plain collection of objects.
[
  {
    "uniqueId": 1,
    "data": "companya data"
  },
  {
    "uniqueId": 2,
    "data": "companyb data"
  },
  {
    "uniqueId": 3,
    "data": "companyc data"
  }
]
I got it!
console.log(
  _(jsonData).map(function(obj) {
    return {
      d: obj.properties.key.data,
      id: obj.uniqueId
    };
  })
  .value()
);
You do not have to use the chaining feature of Lodash as long as you are only performing one operation. You can simply use:
_.map(jsonData, function(obj) {
  return {
    d: obj.properties.key.data,
    id: obj.uniqueId
  };
});
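If you want keys that exactly match the target format from the question (uniqueId and data rather than id and d), the same idea applies; the sketch below also shows the equivalent with plain Array.prototype.map, assuming jsonData is the array shown above:
var result = _.map(jsonData, function(obj) {
  return {
    uniqueId: obj.uniqueId,
    data: obj.properties.key.data
  };
});

// Equivalent without Lodash:
var result2 = jsonData.map(function(obj) {
  return {
    uniqueId: obj.uniqueId,
    data: obj.properties.key.data
  };
});

// Both produce:
// [ { uniqueId: 1, data: "companya data" },
//   { uniqueId: 2, data: "companyb data" },
//   { uniqueId: 3, data: "companyc data" } ]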

ElasticSearch for an attribute (key) value data set

I am using Elasticsearch with Haystacksearch and Django, and I want to search the following structure:
[
  {
    "title": "book1",
    "category": ["Cat_1", "Cat_2"],
    "key_values": [
      {
        "key_name": "key_1",
        "value": "sample_value_1"
      },
      {
        "key_name": "key_2",
        "value": "sample_value_12"
      }
    ]
  },
  {
    "title": "book2",
    "category": ["Cat_3", "Cat_2"],
    "key_values": [
      {
        "key_name": "key_1",
        "value": "sample_value_1"
      },
      {
        "key_name": "key_3",
        "value": "sample_value_6"
      },
      {
        "key_name": "key_4",
        "value": "sample_value_5"
      }
    ]
  }
]
Right now I have set up an index model using Haystack with a "text" field that puts all the data together and runs a full-text search. In my opinion this is not a well-structured search, because I am not using the structure of my data set, and hence it feels somewhat odd.
As an example, if for one object I have the key-value pair
{
  "key_name": "key_1",
  "value": "sample_value_1"
}
and for another object I have
{
  "key_name": "key_2",
  "value": "sample_value_1"
}
and a query like "Key_1 sample_value_1" comes in, then I get a thoroughly mixed set of results: objects that have these words somewhere in their fields, rather than results that respect the structure.
P.S. I am totally new to Elasticsearch, and frankly new to search technologies and their challenges. I have searched the web and SO but didn't find anything satisfying. Please let me know if there is something wrong with my thoughts and expectations of these search engines, or if there is a duplicate SO question. Also, is there a better approach to designing a database for this kind of search?
Read the es docs on nested mappings and do something like this:
"book_type" : {
"properties" : {
// title, cat mappings
"key_values" : {
"type" : "nested"
"properties": {
"key_name": {
"type": "string", "index": "not_analyzed"
},
"value": {
"type": "string"
}
}
}
}
}
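As a usage sketch, the mapping could be created like this (the books index name and the title/category field mappings are illustrative assumptions, not part of the original answer; the string/not_analyzed syntax matches the pre-5.x Elasticsearch style used above):
curl -XPUT "http://localhost:9200/books" -d '
{
  "mappings": {
    "book_type": {
      "properties": {
        "title":    { "type": "string" },
        "category": { "type": "string", "index": "not_analyzed" },
        "key_values": {
          "type": "nested",
          "properties": {
            "key_name": { "type": "string", "index": "not_analyzed" },
            "value":    { "type": "string" }
          }
        }
      }
    }
  }
}'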
Then query using a nested query:
"nested" : {
"path" : "key_values",
"query" : {
"bool" : {
"must" : [
{
"term" : {"key_values.key_name" : "key_1"}
},
{
"match" : {"key_values.value" : "sample_value_1"}
}
]
}
}
}
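Wrapped in a full search request (again, the books index name is just a placeholder), this might look like:
curl -XPOST "http://localhost:9200/books/_search" -d '
{
  "query": {
    "nested": {
      "path": "key_values",
      "query": {
        "bool": {
          "must": [
            { "term":  { "key_values.key_name": "key_1" } },
            { "match": { "key_values.value": "sample_value_1" } }
          ]
        }
      }
    }
  }
}'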