Decode and flatten nested JSON into an Elm record

I just started learning Elm and I've hit a roadblock. Looking for some help from this awesome community.
I'm looking to decode a nested JSON document and pull a particular nested value into an Elm record.
The JSON source looks like this:
{
  "id": 672761,
  "modified": "2018-02-12T00:53:04",
  "slug": "Vivamus sagittis lacus vel augue laoreet rutrum faucibus dolor auctor.",
  "type": "post",
  "link": "https://awesomelinkhere.com",
  "title": {
    "rendered": "Sed posuere consectetur est at lobortis."
  },
  "content": {
    "rendered": "Nulla vitae elit libero, a pharetra augue."
  },
  "excerpt": {
    "rendered": "Donec sed odio dui."
  }
}
and I want to pull title.rendered and excerpt.rendered out into fields in my model. The model looks like so:
type alias Post =
    { id : Int
    , published : String
    , title : String
    , link : String
    , slug : String
    , excerpt : String
    }
My naive decoder, which fails because title and excerpt are nested objects rather than strings, looks like this:
postDecoder : Decoder Post
postDecoder =
    Decode.map6 Post
        (field "id" Decode.int)
        (field "modified" Decode.string)
        (field "title" Decode.string)
        (field "link" Decode.string)
        (field "slug" Decode.string)
        (field "excerpt" Decode.string)

Update
As usually happens, I found the answer as soon as I posted this. I reviewed the Json.Decode documentation and stumbled on the at function.
My working decoder now looks like this:
postDecoder : Decoder Post
postDecoder =
    Decode.map6 Post
        (field "id" Decode.int)
        (field "modified" Decode.string)
        (at [ "title", "rendered" ] Decode.string)
        (field "link" Decode.string)
        (field "slug" Decode.string)
        (at [ "excerpt", "rendered" ] Decode.string)

Related

How to validate Nested JSON Response

I am facing an issue while validating a nested JSON response in API testing using the Karate framework.
JSON response:
{
  "Feed": [
    {
      "item_type": "Cake",
      "title": "Birthday Cake",
      "Services": [
        {
          "id": "1",
          "name": {
            "first_name": "Rahul",
            "last_name": "Goyal"
          }
        },
        {
          "id": "2",
          "name": {
            "first_name": "Hitendra",
            "last_name": "garg"
          }
        }
      ]
    },
    {
      "item_type": "Cycle",
      "title": "used by"
    },
    {
      "item_type": "College",
      "dept": [
        { "branch": "EC" },
        { "branch": "CSE" },
        { "branch": "CIVIL" }
      ]
    }
  ]
}
Now I need to validate the response based on item_type; as you can see, the nested JSON is different for each item_type.
I have tried the solution below.
Schema design for item_type Cake:
* def Feed_Cake_Service_name = { first_name: '#string', last_name: '#string' }
* def Feed_Cake_Services = { id: '#string', name: '#(Feed_Cake_Service_name)' }
* def Feed_Cake = { item_type: '#string', title: '#string', Services: '#[] Feed_Cake_Services' }
* def Feed_Cake_Response = { Feed: '#[] Feed_Cake' }
Schema design for item_type Cycle:
* def Feed_Cycle = { item_type: '#string', title: '#string' }
Schema design for item_type College:
* def Feed_College_Dept_Branch = { branch: '#string' }
* def Feed_College = { item_type: '#string', dept: '#[] Feed_College_Dept_Branch' }
Now if I want to verify only item_type Cake, I have written a match like below:
match response contains Feed_Cake_Response
but here my test case fails, because it compares against every item_type.
So I have two questions:
1.) How can we compare the schema for one particular item_type?
2.) How can we include all item_types in one match expression, since any item_type can come in the JSON response and I want to validate all of them?
Thanks
I'll just give you one hint. For the rest, read the documentation please:
* def item = { item_type: '#string', title: '##string', dept: '##[]', Services: '##[]' }
* match each response == item
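To unpack the hint: the ## prefix marks a key as optional, which is how one item schema can cover all three item_type shapes in a single match each. For validating just one item_type, something like karate.filter may work; this is only a sketch, assuming the schema defs above are in scope and that the response is the { Feed: [...] } object shown:
* def cakes = karate.filter(response.Feed, function(x){ return x.item_type == 'Cake' })
* match each cakes == Feed_Cake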

How to use POCOs with Fields in Elasticsearch.Net (NEST)?

How do I get a strongly-typed list of objects back when doing a search that uses Fields()? For example:
var searchResult = client.Search<Person>(s => s
    .Fields("title", "name")
    .Query(q => q.Match(...etc...))
    .Highlight(...etc...)
);
It seems like the generic type parameter is useless when .Fields() is used because the Hits that are returned have a null .Source property.
(I'm hoping there's a way to do it without having to manually map the search results back to my original Person POCO.)
When you use the fields parameter in your query, Elasticsearch returns the specified fields in the fields section of the response:
{
  "took" : 36,
  "timed_out" : false,
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "failed" : 0
  },
  "hits" : {
    "total" : 18,
    "max_score" : 1.0,
    "hits" : [ {
      "_index" : "nest_test_data-2672",
      "_type" : "elasticsearchprojects",
      "_id" : "1",
      "_score" : 1.0,
      "fields" : {
        "pingIP" : ["127.0.0.1"],
        "country" : ["Faroese"],
        "intValues" : [1464623485],
        "locScriptField" : [0],
        "stupidIntIWantAsLong" : [0],
        "floatValues" : [84.96025, 95.19422],
        "floatValue" : [31.93136],
        "myAttachment" : [""],
        "doubleValue" : [31.931359384176954],
        "suggest" : [""],
        "version" : [""],
        "content" : ["Bacon ipsum dolor sit amet tail non prosciutto shankle turducken, officia bresaola aute filet mignon pork belly do ex tenderloin. Ut laboris quis spare ribs est prosciutto, non short ribs voluptate fugiat. Adipisicing ex ad jowl short ribs corned beef. Commodo cillum aute, sint dolore ribeye ham hock bresaola id jowl ut. Velit mollit tenderloin non, biltong officia et venison irure chuck filet mignon. Meatloaf veniam sausage prosciutto qui cow. Spare ribs non bresaola, in venison sint short loin deserunt magna laborum pork loin cillum."],
        "longValue" : [-7046341211867792384],
        "myBinaryField" : [""],
        "name" : ["pyelasticsearch"],
        "boolValue" : [false],
        "id" : [1],
        "startedOn" : ["1994-02-28T12:24:26.9977119+01:00"]
      }
    } ]
  }
}
You can retrieve them from searchResult.FieldSelections or searchResult.Hits[...].Fields.
In your case, source filtering should be much more convenient:
[Test]
public void MatchAllShortcut()
{
    var results = this.Client.Search<ElasticsearchProject>(s => s
        .From(0)
        .Size(10)
        .Source(source => source.Include(f => f.Id, f => f.Country))
        .SortAscending(f => f.LOC)
        .SortDescending(f => f.Country)
        .MatchAll()
    );
    Assert.NotNull(results);
    Assert.True(results.IsValid);
    Assert.NotNull(results.Hits);
    Assert.GreaterOrEqual(results.Hits.Count(), 10);
    Assert.True(results.Hits.All(h => !string.IsNullOrEmpty(h.Source.Country)));
    Assert.NotNull(results.Documents);
    Assert.GreaterOrEqual(results.Documents.Count(), 10);
    Assert.True(results.Documents.All(d => !string.IsNullOrEmpty(d.Country)));
}
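If you do stay with Fields(), reading values back out of each hit is roughly this shape; treat the FieldValues call as an assumption about the NEST 1.x FieldSelection API and verify it against the version you are on:

// Sketch only: extract the "name" field from every hit.
// FieldValues<T>(path) is assumed NEST 1.x API; adjust to your version.
var names = searchResult.Hits
    .Select(h => h.Fields.FieldValues<string>("name").FirstOrDefault())
    .ToList();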

How to set up a field mapping for Elasticsearch that allows both exact and full-text searching?

Here is my problem:
I have a field called product_id that is in a format similar to:
A+B-12321412
If I use the standard analyzer, it splits it into tokens like so:
/_analyze?analyzer=standard&pretty=true -d '
A+B-1232412
'
{
  "tokens" : [ {
    "token" : "a",
    "start_offset" : 1,
    "end_offset" : 2,
    "type" : "<ALPHANUM>",
    "position" : 1
  }, {
    "token" : "b",
    "start_offset" : 3,
    "end_offset" : 4,
    "type" : "<ALPHANUM>",
    "position" : 2
  }, {
    "token" : "1232412",
    "start_offset" : 5,
    "end_offset" : 12,
    "type" : "<NUM>",
    "position" : 3
  } ]
}
Ideally, I would like to sometimes search for an exact product id and other times query with a substring or just part of the product id.
My understanding of mappings and analyzers is that I can only specify one analyzer per field.
Is there a way to store a field as both analyzed and exact match?
Yes, you can use the fields parameter. In your case:
"product_id": {
"type": "string",
"fields": {
"raw": { "type": "string", "index": "not_analyzed" }
}
}
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/_multi_fields.html
This allows you to index the same data twice, using two different definitions. In this case it will be indexed via both the default analyzer and not_analyzed, which will only pick up exact matches. This is also useful for sorting returned results:
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/multi-fields.html
However, you will need to spend some time thinking about how you want to search. In particular, given part numbers with a mix of alpha, numeric, and punctuation or special characters, you may need to get creative to tune your queries and matches.
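To make the exact-match side concrete, a term query against the raw subfield defined above could look like this (a sketch in the 1.x query DSL those links describe, using the example product id from the question):

{
  "query": {
    "term": {
      "product_id.raw": "A+B-12321412"
    }
  }
}

A match query against plain product_id would still run through the standard analyzer and match on the individual a, b, and number tokens instead.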

Ember pagination JSONP callback API

I have an API at example.com/api/v1, and when I navigate to example.com/api/v1/items I get the JSON data below.
[{
  "id": "1",
  "title": "Lorem ipsum dolor sit amet",
  "link": "/api/v1/items/1"
},
{
  "id": "2",
  "title": "consectetur adipisicing elit",
  "link": "/api/v1/items/2"
},
{
  "id": "3",
  "title": "sed do eiusmod tempor incididunt",
  "link": "/api/v1/items/3"
}]
.
.
.
This API accepts the parameters offset, count, and callback.
E.g. for example.com/api/v1/items?count=10 I will get:
[{
  "id": "1",
  "title": "Lorem ipsum dolor sit amet",
  "link": "/api/v1/items/1"
},
.
.
.
{
  "id": "10",
  "title": "consectetur adipisicing elit",
  "link": "/api/v1/items/10"
}]
E.g. for example.com/api/v1/items?count=10&offset=10 I will get:
[{
  "id": "11",
  "title": "Lorem ipsum dolor sit amet",
  "link": "/api/v1/items/11"
},
.
.
.
{
  "id": "20",
  "title": "consectetur adipisicing elit",
  "link": "/api/v1/items/20"
}]
How do I implement pagination with Ember?
Thanks for your time.
You could use Ember-pagination, where you only have to create an array like:
var Cities = CollectionArray.create({
    paginationSelector: "#cities-paging",
    itemsPerPage: 6
});
Then the library does all the hard work for you, including a typeahead box where your users can search for specific items in the collection.
I hope that this can help you.
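Whichever library you choose, the mapping from a page number to this API's parameters is just offset = (page - 1) * count. A minimal sketch (pageUrl is a hypothetical helper, not part of any library):

// Build a paged URL for the API described above.
function pageUrl(base, page, count) {
  var offset = (page - 1) * count;
  return base + '/items?count=' + count + '&offset=' + offset;
}

// pageUrl('example.com/api/v1', 2, 10)
// => "example.com/api/v1/items?count=10&offset=10"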

Multi-level JSON in Pig

I am new to Pig scripting and to working with JSON. I need to parse multi-level JSON files in Pig. Say:
{
  "firstName": "John",
  "lastName": "Smith",
  "age": 25,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021"
  },
  "phoneNumber": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "fax",
      "number": "646 555-4567"
    }
  ]
}
I am able to parse single-level JSON through JsonLoader(), e.g. JsonLoader('name:chararray, field1:int, ...'), and do joins and other operations to get the desired results.
Is it possible to parse the JSON file above using the built-in JsonLoader() of Pig 0.10.0? If it is, please explain how it is done and how to access fields of that JSON.
You can handle nested JSON loading with Twitter's Elephant Bird: https://github.com/kevinweil/elephant-bird
a = LOAD 'file3.json' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');
This parses the JSON into a map (http://pig.apache.org/docs/r0.11.1/basic.html#map-schema); a JSON array gets parsed into a DataBag of maps.
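Field access then goes through the map dereference operator; a sketch assuming the load above (with -nestedLoad each record is a single map):

-- $0 is the loaded map; # dereferences by key, chaining into nested maps.
b = FOREACH a GENERATE $0#'firstName' AS first_name, $0#'address'#'city' AS city;
DUMP b;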
It is also possible by creating your own UDF; a simple UDF example is shown at http://pig.apache.org/docs/r0.9.1/udf.html#udf-java. Alternatively, the built-in JsonLoader works if you spell out the nested schema:
C = LOAD 'path' USING JsonLoader('firstName:chararray, lastName:chararray, age:int,
    address:(streetAddress:chararray, city:chararray, state:chararray, postalCode:chararray),
    phoneNumber:{(type:chararray, number:chararray)}');
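With the schema spelled out like that, nested fields can be addressed directly; a short sketch (dot notation for the address tuple, FLATTEN for the phoneNumber bag):

-- One output row per phone entry, each carrying the city.
D = FOREACH C GENERATE address.city AS city, FLATTEN(phoneNumber);
DUMP D;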