MongoDB: Aggregating counts with different conditions at once

I'm totally new to MongoDB.
I wonder if it is possible to aggregate counts with different conditions at once.
For example, suppose there is a collection like the one below.
_id | no | val
----+----+----
  1 |  1 | a
  2 |  2 | a
  3 |  3 | b
  4 |  4 | c
And I want a result like below.
Value a : 2
Value b : 1
Value c : 1
How can I get this result all at once?
Thank you:)

Group by val to count each value, then turn each group into a { "Value <val>": count } object:
db.collection.aggregate([
  { "$match": {} },
  {
    "$group": {
      "_id": "$val",
      "count": { "$sum": 1 }
    }
  },
  {
    "$project": {
      "field": {
        "$arrayToObject": [
          [ { "k": { "$concat": [ "Value ", "$_id" ] }, "v": "$count" } ]
        ]
      }
    }
  },
  {
    "$replaceWith": "$field"
  }
])
Note this returns one document per value, e.g. { "Value a": 2 }. $replaceWith requires MongoDB 4.2; on older versions use $replaceRoot with { "newRoot": "$field" }.
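The grouping logic can be sketched outside the database with Python's collections.Counter; the sample documents below mirror the collection above, and the final dict merges the per-value results (remember the real pipeline emits one document per value):

```python
from collections import Counter

# Sample documents, matching the collection above
docs = [
    {"_id": 1, "no": 1, "val": "a"},
    {"_id": 2, "no": 2, "val": "a"},
    {"_id": 3, "no": 3, "val": "b"},
    {"_id": 4, "no": 4, "val": "c"},
]

# Equivalent of the $group stage: count occurrences of each "val"
counts = Counter(d["val"] for d in docs)

# Equivalent of the $project/$arrayToObject step: "Value <val>" -> count
result = {f"Value {val}": n for val, n in counts.items()}
print(result)  # {'Value a': 2, 'Value b': 1, 'Value c': 1}
```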


Get average of JSONB array in postgres

I have a postgres table 'games' containing different scores for a game. I want to query all the games and have the average score of all the scores for that specific game. I tried a lot of different queries but I always get in trouble because of the JSONB datatype. The data of the games are saved in JSONB format and the games table looks like this:
gameID | gameInfo
-------+-----------------------------------------------------
1      | {"scores": [{"scoreType": "skill",    "score": 1},
       |             {"scoreType": "speed",    "score": 3},
       |             {"scoreType": "strength", "score": 2}]}
2      | {"scores": [{"scoreType": "skill",    "score": 4},
       |             {"scoreType": "speed",    "score": 4},
       |             {"scoreType": "strength", "score": 4}]}
3      | {"scores": [{"scoreType": "skill",    "score": 1},
       |             {"scoreType": "speed",    "score": 3},
       |             {"scoreType": "strength", "score": 5}]}
Expected output:
GameId | AverageScore
-------+-------------
1      | 2
2      | 4
3      | 3
What query can I use to get the expected output?
Extract the JSONB array, expand it into rows with a JSONB set-returning function, then extract and cast the score value:
select gameid, avg(score::int) as avg_score
from (
  select gameid,
         jsonb_array_elements(gameInfo #> '{scores}') -> 'score' as score
  from games
) t
group by gameid
order by gameid;
You can also use a lateral join:
select gameID, avg((s ->> 'score')::int) as avg_score
from games, lateral jsonb_array_elements(gameInfo -> 'scores') s
group by gameID;
Result:
+========+====================+
| gameid | avg_score |
+========+====================+
| 3 | 3.0000000000000000 |
+--------+--------------------+
| 2 | 4.0000000000000000 |
+--------+--------------------+
| 1 | 2.0000000000000000 |
+--------+--------------------+
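As a sanity check, the same averages can be computed from the sample JSONB in plain Python (the dict below mirrors the gameID/gameInfo rows above):

```python
import json
from statistics import mean

# gameID -> parsed gameInfo, mirroring the sample rows above
games = {
    1: json.loads('{"scores": [{"scoreType": "skill", "score": 1}, {"scoreType": "speed", "score": 3}, {"scoreType": "strength", "score": 2}]}'),
    2: json.loads('{"scores": [{"scoreType": "skill", "score": 4}, {"scoreType": "speed", "score": 4}, {"scoreType": "strength", "score": 4}]}'),
    3: json.loads('{"scores": [{"scoreType": "skill", "score": 1}, {"scoreType": "speed", "score": 3}, {"scoreType": "strength", "score": 5}]}'),
}

# Equivalent of jsonb_array_elements + avg(score): expand "scores", average "score"
averages = {gid: mean(s["score"] for s in info["scores"]) for gid, info in games.items()}
print(averages)  # {1: 2, 2: 4, 3: 3}
```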

Template Request Scenario Outline empty secondary array object causes invalid JSON to form

I am trying to formulate a template request that can be used via a scenario outline, so the API can be tested without many payloads having to be constructed and maintained.
But I've struck a problem when the request includes an array containing different optional blocks: if they are left empty, invalid JSON is formed and sent.
How can I have the array set correctly, i.e. exclude the trailing ,{} when the scenario outline row has no values for the second block?
Background:
  * def template_request = read('./request/request.json')

Scenario Outline: blah
  * def request_payload = karate.filterKeys(template_request, <request_filter>)
  Given request request_payload
  When method post

  Examples:
    | denominationType1 | amount1 | denominationType2 | amount2 | request_filter  |
    | NOTE              | 10      |                   |         | 'depositDetail' |
    | NOTE              | 10      | COINS             | 20      | 'depositDetail' |
The request sent for the first row of the table (which has empty values for the second denomination type, COINS) is below. Note that the second array object is still present, albeit with no values, as per the table; the , {} that sets the second array entry is what causes the problem:
{
  "depositDetail": {
    "denomination": [
      {
        "amount": "10",
        "denominationType": "NOTE"
      },
      {}
    ]
  }
}
The request for the second row is fine:
{
  "depositDetail": {
    "denomination": [
      {
        "amount": "10",
        "denominationType": "NOTE"
      },
      {
        "amount": "20",
        "denominationType": "COINS"
      }
    ]
  }
}
requestTemplate.json
{
  "depositDetail": {
    "denomination": [
      {
        "denominationType": ##(denominationType1),
        "count": ##(count),
        "amount": ##(amount1)
      },
      {
        "denominationType": ##(denominationType2),
        "amount": ##(amount2)
      }
    ]
  }
}
I am having no luck with filters or functions - could you please help me.
You still haven't provided the template. Anyway, this is how I would do it. You can try it and see it work.
Scenario Outline:
  * def payload = { denomination: [] }
  * if (type1) payload.denomination.push({ type: type1, amount: amount1 })
  * if (type2) payload.denomination.push({ type: type2, amount: amount2 })
  * url 'https://httpbin.org/anything'
  * request payload
  * method post

  Examples:
    | type1 | amount1 | type2 | amount2 |
    | NOTE  | 10      |       |         |
    | NOTE  | 10      | COIN  | 20      |
But even the above is not elegant in my opinion. I recommend you don't overcomplicate your tests and just do something like this:
Scenario Outline:
  * def payload = {}
  * payload[key] = items
  * url 'https://httpbin.org/anything'
  * request payload
  * method post

  Examples:
    | key | items!                                                       |
    | foo | [{ type: 'NOTE', amount: 10 }]                               |
    | bar | [{ type: 'NOTE', amount: 10 }, { type: 'COIN', amount: 20 }] |
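The conditional-push idea in the first suggestion can be sketched in Python: build the array programmatically and skip optional blocks whose values are empty, so no trailing ,{} ever appears (the function name and tuple shape below are illustrative, not part of Karate):

```python
# Build the denomination array, skipping optional blocks with empty values
def build_payload(rows):
    # rows: list of (denominationType, amount) pairs from the outline
    denomination = [
        {"denominationType": t, "amount": a}
        for t, a in rows
        if t  # skip the block entirely when the type is empty
    ]
    return {"depositDetail": {"denomination": denomination}}

# First outline row: second block is empty and is excluded, so the JSON stays valid
print(build_payload([("NOTE", "10"), ("", "")]))
# {'depositDetail': {'denomination': [{'denominationType': 'NOTE', 'amount': '10'}]}}
```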

BigQuery: best use of UNNEST arrays

I really need some help. I have a big JSON file that I ingested into BigQuery, and I want to write a query that uses UNNEST twice. The data looks like this:
{
  "categories": [
    {
      "id": 1,
      "name": "C0",
      "properties": [
        {
          "name": "Property_1",
          "value": {
            "type": "String",
            "value": "11111"
          }
        },
        {
          "name": "Property_2",
          "value": {
            "type": "String",
            "value": "22222"
          }
        }
      ]
    }
  ]
}
And I want a query that gives me a result like this:
---------------------------------------------------
| Category_ID | Name_ID | Property_1 | Property_2 |
---------------------------------------------------
| 1           | C0      | 11111      | 22222      |
---------------------------------------------------
I already tried something like this, but it's not working:
SELECT
  c.id AS Category_ID,
  c.name AS Name_ID,
  p.value.value AS p.name
FROM `DataBase-xxxxxx`
CROSS JOIN UNNEST(categories) AS c,
UNNEST(c.properties) AS p;
Thank you 🙏
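This question has no answer in the thread, but the shape of the fix can be sketched in Python: flatten categories, flatten properties, then pivot each property name into its own column. In BigQuery this would correspond to UNNEST-ing twice and pivoting with conditional aggregation (e.g. MAX(IF(p.name = 'Property_1', p.value.value, NULL)) with GROUP BY on id and name); treat that query shape as an untested sketch, not confirmed SQL:

```python
# One row of the table, mirroring the sample JSON above
row = {
    "categories": [
        {
            "id": 1,
            "name": "C0",
            "properties": [
                {"name": "Property_1", "value": {"type": "String", "value": "11111"}},
                {"name": "Property_2", "value": {"type": "String", "value": "22222"}},
            ],
        }
    ]
}

# First UNNEST: one record per category; second UNNEST: one record per property.
# The inner loop is the pivot step: each property name becomes a column.
results = []
for c in row["categories"]:          # UNNEST(categories) AS c
    out = {"Category_ID": c["id"], "Name_ID": c["name"]}
    for p in c["properties"]:        # UNNEST(c.properties) AS p
        out[p["name"]] = p["value"]["value"]  # pivot: name -> column
    results.append(out)

print(results)
# [{'Category_ID': 1, 'Name_ID': 'C0', 'Property_1': '11111', 'Property_2': '22222'}]
```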

How can I add an attribute to a Laravel model collection?

I have a collection of Category.
With App\Category::all() I get:
ID | PARENT_ID | NAME    | DEPTH
 1 | 0         | parent1 | 0
 2 | 0         | parent2 | 0
 3 | 1         | child1  | 1
 4 | 2         | child2  | 1
How can I add a custom attribute (a column or method result) to my collection?
This is the result I want to get when I write, e.g., $categories = Category::with('childs');
ID | PARENT_ID | NAME    | DEPTH | CHILDS
 1 | 0         | parent1 | 0     | {3 | 1 | child1 | 1 | NULL}
 2 | 0         | parent2 | 0     | {4 | 2 | child2 | 1 | NULL}
 3 | 1         | child1  | 1     | NULL
 4 | 2         | child2  | 1     | NULL
I think you get the idea. I tried using Accessors & Mutators and successfully added an attribute with data, e.g.:
$category->childs; // value should be {12 | 10 | name1 | 1 | NULL}
But I'm stuck because I can't pass queried data to the method and return it back. I want to use one table; later I will add left and right columns to the table to store a tree, but for now I'm trying something simpler: have a parent and add its children to the collection.
You should use a model relationship to itself:
Category.php
<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Category extends Model
{
    protected $with = ['childs'];

    public function childs()
    {
        return $this->hasMany(Category::class, 'parent_id');
    }
}
CategoryController.php
public function index()
{
    $categories = Category::all();

    return $categories;
}
$categories will return the result you need:
[
    {
        "id": 1,
        "parent_id": 0,
        "name": "parent1",
        "depth": 0,
        "childs": [
            {
                "id": 3,
                "parent_id": 1,
                "name": "child1",
                "depth": 1,
                "childs": []
            }
        ]
    },
    {
        "id": 2,
        "parent_id": 0,
        "name": "parent2",
        "depth": 0,
        "childs": [
            {
                "id": 4,
                "parent_id": 2,
                "name": "child2",
                "depth": 1,
                "childs": []
            }
        ]
    },
    {
        "id": 3,
        "parent_id": 1,
        "name": "child1",
        "depth": 1,
        "childs": []
    },
    {
        "id": 4,
        "parent_id": 2,
        "name": "child2",
        "depth": 1,
        "childs": []
    }
]
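The hasMany self-relation effectively groups rows by parent_id. That nesting can be sketched in Python over the flat table from the question (one level deep; Eloquent's protected $with makes the real thing recursive):

```python
# Flat rows from the question's table
rows = [
    {"id": 1, "parent_id": 0, "name": "parent1", "depth": 0},
    {"id": 2, "parent_id": 0, "name": "parent2", "depth": 0},
    {"id": 3, "parent_id": 1, "name": "child1", "depth": 1},
    {"id": 4, "parent_id": 2, "name": "child2", "depth": 1},
]

# Mirror of hasMany(Category::class, 'parent_id'): attach each row's children
def with_childs(rows):
    result = []
    for row in rows:
        entry = dict(row)
        entry["childs"] = [dict(r) for r in rows if r["parent_id"] == row["id"]]
        result.append(entry)
    return result

categories = with_childs(rows)
print(categories[0]["childs"][0]["name"])  # child1
```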

How to set up properties on CSV import in OrientDB?

I have a CSV file like:
FN       | MI | LN     | ADDR          | CITY      | ZIP        | GENDER
---------+----+--------+---------------+-----------+------------+-------
Patricia |    | Faddar | 7063 Carr xxx | Carolina  | 00979-7033 | F
Lui      | E  | Baves  | PO Box xxx    | Boqueron  | 00622-1240 | F
Janine   | S  | Perez  | 25 Calle xxx  | Salinas   | 00751-3332 | F
Rose     |    | Mary   | 229 Calle xxx | Aguadilla | 00603-5536 | F
And I am importing it into OrientDB like:
{
  "source": { "file": { "path": "/sample.csv" } },
  "extractor": { "csv": {} },
  "transformers": [
    { "vertex": { "class": "Users" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:/orientdb/databases/test",
      "dbType": "graph",
      "classes": [
        { "name": "Users", "extends": "V" }
      ]
    }
  }
}
I would like to set up the import so that it creates properties: FN becomes first_name, MI becomes middle_name, and so on. I also want to lowercase some values, e.g. Carolina should become carolina.
I could probably make these changes in the schema once the data is added, but I want to do it here because I have multiple CSV files and want to keep the same schema for all of them.
Any ideas?
To rename a field, take a look at the Field transformer:
http://orientdb.com/docs/last/Transformer.html#field-transformer
Rename the field from salary to remuneration:
{ "field": {
    "fieldName": "remuneration",
    "expression": "salary"
} },
{ "field": {
    "fieldName": "salary",
    "operation": "remove"
} }
In the same way, you can apply the toLowerCase function to a property:
{ "field": { "fieldName": "name", "expression": "$input.name.toLowerCase()" } }
Try it and let me know if it works.
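Putting this together for the CSV above, the transformers section might look like the sketch below. The field names FN, MI, CITY come from the sample header; the snake_case targets and the lowercase city are assumptions about the desired schema, and each rename is paired with a remove of the original field, following the pattern above:

```json
"transformers": [
  { "field": { "fieldName": "first_name", "expression": "FN" } },
  { "field": { "fieldName": "FN", "operation": "remove" } },
  { "field": { "fieldName": "middle_name", "expression": "MI" } },
  { "field": { "fieldName": "MI", "operation": "remove" } },
  { "field": { "fieldName": "city", "expression": "$input.CITY.toLowerCase()" } },
  { "field": { "fieldName": "CITY", "operation": "remove" } },
  { "vertex": { "class": "Users" } }
]
```

The field transformers run in order before the vertex transformer, so the vertex is created with the renamed properties.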