Get data based on fields in Yii2 API

I am trying to get data based on the fields in the query param, i.e.
users/1?fields=id,name
This gives id and name using findOne:
User::findOne(1);
Result:
{
"id": 12,
"name": 'Jhon'
}
When using
users?fields=id,name
it gives all fields of the User model using findAll():
User::findAll([$ids])
Result:
[
{
"id": 1,
"name": "abc",
"age": 30,
"dob": 1970,
"email": "abc#test.com"
},
{
"id": 2,
"name": "abc",
"age": 30,
"dob": 1970,
"email": "abc1#test.com"
}
]
Why does findAll() not work like the findOne() result?

I have read up on data providers and solved the problem.
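For reference, a minimal sketch of that kind of fix, assuming a yii\rest controller setup (the action name is illustrative, not from the original post). The ?fields= parameter is applied by yii\rest\Serializer, which processes data providers and single models, but passes a bare array from findAll() through untouched:
// A minimal sketch, assuming a yii\rest\ActiveController-style controller;
// actionIndex is an illustrative name, not from the original post.
public function actionIndex()
{
    // The REST serializer applies ?fields=id,name to a data provider
    // (and to a single model), but returns a plain array of models as-is,
    // which is why findAll() appeared to ignore the fields parameter.
    return new \yii\data\ActiveDataProvider([
        'query' => \app\models\User::find(),
    ]);
}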

How to validate Nested JSON Response

I am facing an issue while validating a nested JSON response in API testing using the Karate framework.
JSON Response:
{
"Feed": [
{
"item_type": "Cake",
"title": "Birthday Cake",
"Services": [
{
"id": "1",
"name": {
"first_name": "Rahul",
"last_name": "Goyal"
}
},
{
"id": "2",
"name": {
"first_name": "Hitendra",
"last_name": "garg"
}
}
]
},
{
"item_type": "Cycle",
"title": "used by"
},
{
"item_type": "College",
"dept": [
{"branch": "EC"},
{"branch": "CSE"},
{"branch": "CIVIL"}
]
}
]
}
Now I need to validate the response based on item_type; as you can see, the nested JSON is different for each item_type.
I have tried the solution below.
Schema design for item_type value Cake:
* def Feed_Cake_Service_name = { first_name: '#string', last_name: '#string' }
* def Feed_Cake_Services = { id: '#string', name: '#(Feed_Cake_Service_name)' }
* def Feed_Cake = { item_type: '#string', title: '#string', Services: '#[] Feed_Cake_Services' }
* def Feed_Cake_Response = { Feed: '#[] Feed_Cake' }
Schema design for item_type Cycle:
* def Feed_Cycle = { item_type: '#string', title: '#string' }
Schema design for item_type College:
* def Feed_College_Dept_Branch = { branch: '#string' }
* def Feed_College = { item_type: '#string', dept: '#[] Feed_College_Dept_Branch' }
Now, if I want to verify only item_type Cake, I have written the match like below:
* match response contains Feed_Cake_Response
But here my test case fails, because the match compares against all item types.
So here I have two questions:
1) How can we compare the schema for one particular item_type?
2) How can we include all item types in one match expression, since any item type can come in the JSON response, and I want to validate them all?
Thanks
I'll just give you one hint. For the rest, read the documentation please:
* def item = { item_type: '#string', title: '##string', dept: '##[]', Services: '##[]' }
* match each response == item
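(Not from the original answer: the ## prefix in the hint marks a key as optional, which is how one schema can cover every item_type; that addresses question 2. For question 1, one possible approach is to filter the array by item_type first and then match the filtered list strictly. karate.filter is part of Karate's JS API; use response.Feed instead of response if the array is wrapped in an object, as in the sample above.)
* def cakes = karate.filter(response, function(x){ return x.item_type == 'Cake' })
* match each cakes == Feed_Cake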

Filter an object array to modify JSON with Circe

I am evaluating Circe and couldn't find out how to use a filter on arrays to transform JSON. I read the guide on its website and the API docs, and still have no clue. Help much appreciated.
Sample data:
{
"Department" : "HR",
"Employees" :[{ "name": "abc", "age": 25 }, {"name":"def", "age" : 30 }]
}
Task:
How do I use a filter on Employees to transform the JSON into another JSON, for example one containing only the employees older than 50?
For some reason I can't filter at the data source before the JSON is generated, in case you ask.
Thanks
One possible way of doing this:
import io.circe._
import io.circe.parser._

val data = """{"Department" : "HR","Employees" :[{ "name": "abc", "age": 25 }, {"name":"def", "age":30}]}"""

// Keep only the array elements whose "age" field decodes to an Int greater than 26
def ageFilter(j: Json): Json = j.withArray { x =>
  Json.fromValues(x.filter(_.hcursor.downField("age").as[Int].map(_ > 26).getOrElse(false)))
}

// Move the cursor to "Employees", rewrite the array in place, then zoom back out to the full document
val y: Either[ParsingFailure, Json] = parse(data).map(_.hcursor.downField("Employees").withFocus(ageFilter).top.get)
println(s"$y")
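An alternative sketch, not from the original answer: if the shape of the document is fixed, decoding into case classes and re-encoding can be easier to read. The case class names here are made up, and this relies on circe's generic derivation (circe-generic):
import io.circe.generic.auto._
import io.circe.parser.decode
import io.circe.syntax._

case class Employee(name: String, age: Int)
case class Dept(Department: String, Employees: List[Employee])

// Decode, filter with plain Scala collections, then re-encode to Json
val filtered = decode[Dept](data).map { d =>
  d.copy(Employees = d.Employees.filter(_.age > 26)).asJson
}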

Get Most Recent Column Value With Nested And Repeated Fields

I have a table with the following structure (the schema screenshot is omitted here; the nesting is visible from the data) and the following data in it:
[
{
"addresses": [
{
"city": "New York"
},
{
"city": "San Francisco"
}
],
"age": "26.0",
"name": "Foo Bar",
"createdAt": "2016-02-01 15:54:25 UTC"
},
{
"addresses": [
{
"city": "New York"
},
{
"city": "San Francisco"
}
],
"age": "26.0",
"name": "Foo Bar",
"createdAt": "2016-02-01 15:54:16 UTC"
}
]
What I'd like to do is recreate the same table (same structure) but with only the latest version of each row. In this example, let's say I'd like to group everything by name and take the row with the most recent createdAt.
I tried to do something like this: Google Big Query SQL - Get Most Recent Column Value, but I couldn't get it to work with record and repeated fields.
I really hoped someone from the Google team would provide an answer to this question, as it is a very frequent topic/problem asked here on SO. BigQuery is definitely not friendly enough with writing nested/repeated stuff back to BQ off of a BQ query.
So, I will provide the workaround I found a relatively long time ago. I DO NOT like it, but (and that is why I hoped for an answer from the Google team) it works. I hope you will be able to adapt it to your particular scenario.
So, based on your example, assume you have the table shown in the question, and you expect to get the most recent records based on the createdAt column, with the result keeping the same nested structure.
Below code does this:
SELECT name, age, createdAt, addresses.city
FROM JS(
( // input table
SELECT name, age, createdAt, NEST(city) AS addresses
FROM (
SELECT name, age, createdAt, addresses.city
FROM (
SELECT
name, age, createdAt, addresses.city,
MAX(createdAt) OVER(PARTITION BY name, age) AS lastAt
FROM yourTable
)
WHERE createdAt = lastAt
)
GROUP BY name, age, createdAt
),
name, age, createdAt, addresses, // input columns
"[ // output schema
{'name': 'name', 'type': 'STRING'},
{'name': 'age', 'type': 'INTEGER'},
{'name': 'createdAt', 'type': 'INTEGER'},
{'name': 'addresses', 'type': 'RECORD',
'mode': 'REPEATED',
'fields': [
{'name': 'city', 'type': 'STRING'}
]
}
]",
"function(row, emit) { // function
var c = [];
for (var i = 0; i < row.addresses.length; i++) {
c.push({city:row.addresses[i]});
};
emit({name: row.name, age: row.age, createdAt: row.createdAt, addresses: c});
}"
)
The way the above code works: it implicitly flattens the original records; finds the rows that belong to the most recent record (partitioned by name and age); and assembles those rows back into their respective records. The final step is processing with the JS UDF to build the proper schema, so the result can actually be written back to a BigQuery table as nested/repeated rather than flattened.
The last step is the most annoying part of this workaround, as it needs to be customized each time for the specific schema(s).
Please note: in this example there is only one nested field inside the addresses record, so the NEST() function worked. In scenarios where you have more than just one field inside, the above approach still works, but you need to concatenate those fields before putting them inside NEST(), and then do extra splitting of those fields inside the JS function, etc.
You can see examples in the answers below:
Create a table with Record type column
create a table with a column type RECORD
How to store the result of query on the current table without changing the table schema?
I hope this is good foundation for you to experiment with and make your case work!
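(Added note, not part of the original answer: on today's BigQuery Standard SQL this whole workaround collapses into one query, because ARRAY_AGG over the entire row preserves nested/repeated fields. A minimal sketch, assuming the same yourTable and grouping by name as in the question:)
SELECT AS VALUE ARRAY_AGG(t ORDER BY createdAt DESC LIMIT 1)[OFFSET(0)]
FROM yourTable AS t
GROUP BY name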

Cannot pass input field of repeated record type into Bigquery UDF

When I pass an input field of repeated record type into a BigQuery UDF, it keeps saying that the input field is not found.
These are my 2 rows of data:
{"name":"cynthia", "Persons":[ { "name":"john","age":1},{"name":"jane","age":2} ]}
{"name":"jim","Persons":[ { "name":"mary","age":1},{"name":"joe","age":2} ]}
This is the schema of the data:
[
{"name":"name","type":"string"},
{"name":"Persons","mode":"repeated","type":"RECORD",
"fields":
[
{"name": "name","type": "STRING"},
{"name": "age","type": "INTEGER"}
]
}
]
And this is the query:
SELECT
name,maxts
FROM
js
(
//input table
[dw_test.clokTest_bag],
//input columns
name, Persons,
//output schema
"[
{name: 'name', type:'string'},
{name: 'maxts', type:'string'}
]",
//function
"function(r, emit)
{
emit({name: r.name, maxts: '2'});
}"
)
LIMIT 10
Error I got when trying to run the query:
Error: 5.3 - 15.6: Undefined input field Persons
Job ID: ord2-us-dc:job_IPGQQEOo6NHGUsoVvhqLZ8pVLMQ
Would someone please help?
Thank you.
In your list of input columns, list the leaf fields directly:
//input columns
name, Persons.name, Persons.age,
They'll still appear in their proper structure when you get the records in your UDF.
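(For completeness, a sketch of the corrected query from the question, with only that change applied:)
SELECT
name, maxts
FROM
js
(
//input table
[dw_test.clokTest_bag],
//input columns - leaf fields listed directly
name, Persons.name, Persons.age,
//output schema
"[
{name: 'name', type:'string'},
{name: 'maxts', type:'string'}
]",
//function
"function(r, emit)
{
// r.Persons is still an array of {name, age} records inside the UDF
emit({name: r.name, maxts: '2'});
}"
)
LIMIT 10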

Is it possible to turn an array returned by the Mongo GeoNear command (using Ruby/Rails) into a Plucky object?

As a total newbie I have been trying to get the geoNear command working in my Rails application, and it appears to be working fine. The major annoyance for me is that it returns a hash of plain strings rather than objects with keys I can call on to pull out data.
Having dug around, I understand that MongoMapper uses Plucky to turn the query result into a friendly object which can be handled easily, but I haven't been able to find out how to transform the result of my geoNear query into a Plucky object.
My questions are:
(a) Is it possible to turn this into a Plucky object, and how do I do that?
(b) If it is not possible, how can I most simply and systematically extract each record and each field?
Here is the query in my controller:
@mult = 3963 * (3.14159265 / 180 ) # Scale to miles on earth
@results = @db.command( {'geoNear' => "places", 'near'=> @search.coordinates , 'distanceMultiplier' => @mult, 'spherical' => true})
Here is the object I'm getting back (with document content removed for simplicity):
{"ns"=>"myapp-development.places", "near"=>"1001110101110101100100110001100010100010000010111010", "results"=>[{"dis"=>0.04356444023196527, "obj"=>{"_id"=>BSON::ObjectId('4ee6a7d210a81f05fe000001'),...}}], "stats"=>{"time"=>0, "btreelocs"=>0, "nscanned"=>1, "objectsLoaded"=>1, "avgDistance"=>0.04356444023196527, "maxDistance"=>0.0006301239824196907}, "ok"=>1.0}
Help is much appreciated!!
OK, so let's say you store the results in a variable called places_near:
places_near = t.command( {'geoNear' => "places", 'near'=> [50,50] , 'distanceMultiplier' => 1, 'spherical' => true})
This command returns a hash that has a key (results) which maps to a list of results for the query. The returned document looks like this:
{
"ns": "test.places",
"near": "1100110000001111110000001111110000001111110000001111",
"results": [
{
"dis": 69.29646421910687,
"obj": {
"_id": ObjectId("4b8bd6b93b83c574d8760280"),
"y": [
1,
1
],
"category": "Coffee"
}
},
{
"dis": 69.29646421910687,
"obj": {
"_id": ObjectId("4b8bd6b03b83c574d876027f"),
"y": [
1,
1
]
}
}
],
"stats": {
"time": 0,
"btreelocs": 1,
"btreelocs": 1,
"nscanned": 2,
"nscanned": 2,
"objectsLoaded": 2,
"objectsLoaded": 2,
"avgDistance": 69.29646421910687
},
"ok": 1
}
To iterate over the results, just iterate as you would over any list in Ruby:
places_near['results'].each do |result|
# do stuff with result object
end
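(To address question (a), a hedged sketch, assuming a MongoMapper model named Place backing the places collection: MongoMapper models expose a load class method, which Plucky itself uses to turn raw hashes into model instances, so you can rehydrate each obj by hand:)
places = places_near['results'].map do |result|
  place    = Place.load(result['obj'])  # full MongoMapper model instance
  distance = result['dis']              # computed distance, not a model field
  [place, distance]
end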