PostgreSQL batch update - sql

How can I batch update the following sample more efficiently?
users = [{id: 424, pos: 1}, {id: 23, pos: 2}, {id: 55, pos: 3}, ...]
// currently looping and updating each {i} separately:
UPDATE users SET position = i.pos WHERE id = i.id

You can use unnest() over two parallel arrays (the ids and the new positions), so everything is updated in a single statement. Note that user is a reserved word in Postgres, so the unnested rows need a different alias:
update users u
    set position = t.pos
from (values (array[424, 23, 55], array[1, 2, 3])
     ) v(ids, positions) cross join lateral
     unnest(v.ids, v.positions) as t(id, pos)
where u.id = t.id;
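If the update is issued from JavaScript (the sample data in the question looks like a JS array), here is a hedged node-postgres sketch that passes the ids and positions as two parameter arrays straight into unnest(); the client library is an assumption, not something the question specifies:

const { Pool } = require('pg'); // node-postgres, assumed client
const pool = new Pool();        // connection settings taken from the environment

async function batchUpdatePositions(users) {
  // users = [{id: 424, pos: 1}, ...] -> two parallel arrays for unnest()
  const ids = users.map((u) => u.id);
  const positions = users.map((u) => u.pos);

  await pool.query(
    `update users u
        set position = t.pos
       from unnest($1::int[], $2::int[]) as t(id, pos)
      where u.id = t.id`,
    [ids, positions]
  );
}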

Related

How do I insert and update array columns in Node-Postgres?

I have the following table in Postgres:
_id: integer, user_id: integer, items: Array
I wish to insert the following into the table:
1, 1, [{productId: 1, size: 'large', quantity: 5}]
Next I wish to update the row with the following:
1, 1, [{productId: 1, size: 'small', quantity: 3}]
How do I do this in node-postgres?
Pseudocode:
update cart
set items.quantity = 3
where cart._id = 1
and cart.items.product_id = 1
and cart.items.size='large'
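Assuming items is stored as a jsonb column, here is a minimal node-postgres sketch that does the insert and then an update replacing the whole items array; an in-place update of a single element, as in the pseudocode, would additionally need jsonb functions such as jsonb_set plus the element's index. Table and column names follow the pseudocode:

const { Pool } = require('pg'); // node-postgres
const pool = new Pool();

async function insertThenUpdateCart() {
  // Insert the initial row; $3 is serialized to jsonb.
  await pool.query(
    'INSERT INTO cart (_id, user_id, items) VALUES ($1, $2, $3::jsonb)',
    [1, 1, JSON.stringify([{ productId: 1, size: 'large', quantity: 5 }])]
  );

  // Simplest form of the update: replace the whole items array for that cart row.
  await pool.query(
    'UPDATE cart SET items = $2::jsonb WHERE _id = $1',
    [1, JSON.stringify([{ productId: 1, size: 'small', quantity: 3 }])]
  );
}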

How to input data into a line chart in Pentaho?

I want to make this chart in Pentaho CDE:
based on this chart (I think it is the most similar from among the CCC Components):
(The code is in this link.)
but I don't know how to adapt my data input to that chart.
For example, I want to consume data with this format:
[Year, customers_A, customers_B, cars_A, cars_B]
[2014, 8, 4, 23, 20]
[2015, 20, 6, 30, 38]
How can I input my data into this chart?
Your data should come as an object such as this:
data = {
  metadata: [
    { colName: "Year",        colType: "Numeric", colIndex: 1 },
    { colName: "customers_A", colType: "Numeric", colIndex: 2 },
    { colName: "customers_B", colType: "Numeric", colIndex: 3 },
    { colName: "cars_A",      colType: "Numeric", colIndex: 4 },
    { colName: "cars_B",      colType: "Numeric", colIndex: 5 }
  ],
  resultset: [
    [2014, 8, 4, 23, 20],
    [2015, 20, 6, 30, 38]
  ],
  queryInfo: { totalRows: 2 }
}
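If the raw data arrives as plain rows like the ones in the question, a small hedged helper (plain JavaScript, the function name is illustrative) can build that object:

// Build the data object shown above from a list of column names plus the data rows.
function toChartData(columns, rows) {
  return {
    metadata: columns.map(function (name, i) {
      return { colName: name, colType: "Numeric", colIndex: i + 1 };
    }),
    resultset: rows,
    queryInfo: { totalRows: rows.length }
  };
}

var data = toChartData(
  ["Year", "customers_A", "customers_B", "cars_A", "cars_B"],
  [
    [2014, 8, 4, 23, 20],
    [2015, 20, 6, 30, 38]
  ]
);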

how to use lodash to sum data with the same key?

I have data like this:
var data = [
  {2: 1, 6: 1},
  {2: 2},
  {1: 3, 6: 2},
];
(the "2" is like a key and "1" means "count")
and I want the output to look like this:
output = [
  {2: 3, 6: 3, 1: 3},
];
is there a way to achieve this using lodash?
Use _.mergeWith() with spread to merge all keys and sum their values:
const data = [{2: 1, 6: 1}, {2: 2}, {1: 3, 6: 2}];
const result = _.mergeWith({}, ...data, (objValue, srcValue) =>
  _.isNumber(objValue) ? objValue + srcValue : srcValue);
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.5/lodash.min.js"></script>
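For comparison, the same sum-by-key can be written without lodash as a plain reduce; this is just a sketch of what the merge above does:

const data = [{2: 1, 6: 1}, {2: 2}, {1: 3, 6: 2}];

// Accumulate each key's count across all objects.
const result = data.reduce((acc, obj) => {
  Object.keys(obj).forEach((key) => {
    acc[key] = (acc[key] || 0) + obj[key];
  });
  return acc;
}, {});

console.log(result); // { '1': 3, '2': 3, '6': 3 }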

rethinkdb: secondary compound indexes / aggregation queries and intermediate documents generation

Let's assume a table where, for the same product_id, there are as many rows as there were updates while status==1 (published), then finally a row with status==0 (unpublished), and then one with status==2 (deleted):
{id: <auto>, product_id: 1, last_updated: 2015-12-1, status: 1, price: 1}
{id: <auto>, product_id: 2, last_updated: 2015-12-1, status: 1, price: 10}
{id: <auto>, product_id: 1, last_updated: 2015-12-2, status: 1, price: 2}
{id: <auto>, product_id: 1, last_updated: 2015-12-3, status: 0, price: 2}
{id: <auto>, product_id: 2, last_updated: 2015-12-2, status: 0, price: 10}
{id: <auto>, product_id: 3, last_updated: 2015-12-2, status: 1, price: 123}
{id: <auto>, product_id: 1, last_updated: 2015-12-4, status: 2, price: 2}
{id: <auto>, product_id: 2, last_updated: 2015-12-4, status: 2, price: 10}
Now I am trying to find a way, maybe using a secondary compound index, to get, for example, given a date like the one in the first column (using r.time), a result like this:
DATE        STATUS==1    STATUS==0    STATUS==2
2015-12-1   [101, 102]   []           []
2015-12-2   [103, 106]   [105]        []
2015-12-3   [106]        [104, 105]   []
2015-12-4   []           []           [107, 108]
The difficulty here is that a product_id document should still be counted under its most recent status, as long as its last_updated date is less than or equal to the provided date.
I tried grouping by product_id, then taking the max('last_updated'), then keeping each reduction's single document only if status==1.
I have in mind having an index for each status / given date.
Another solution would be to insert into another table the result of an aggregation that stores a single document per date, containing the ids of all the initial documents matching the same criteria, and so on...
And then later perform joins using these intermediate records to fetch the values of each product_id at the given date/status.
something like:
{
  date: <date_object>,
  documents: [
    {id: document_id, status: 1},
    {id: document_id, status: 1},
    {id: document_id, status: 2},
    {id: document_id, status: 0},
    ...
  ]
}
Please advise
Edit 1:
This is an example of a query I'm trying to run to analyse my data; here it is used, for example, to get an overview of the statuses for each group with more than one document:
r.db('test').table('products_10k_sample')
  .group({index: 'product_id'})        // one group per product
  .orderBy(r.desc('last_updated'))     // newest update first within each group
  .ungroup()
  .map(function (x) {
    return r.branch(
      x('reduction').count().gt(1),    // only groups with more than one document
      x('reduction').map(function (m) {
        return [m('last_updated').toISO8601(), m('status'), m('product_id')];
      }),
      null
    );
  })
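As a hedged ReQL sketch of the approach described above (restrict to rows updated on or before a given date, keep the most recent row per product_id, then bucket the surviving ids by status); the table name comes from the edit, while the cutoff date and variable names are illustrative only:

var cutoff = r.time(2015, 12, 2, 'Z'); // example date

r.db('test').table('products_10k_sample')
  .filter(function (doc) { return doc('last_updated').le(cutoff); })
  .group('product_id')
  .max('last_updated')                          // most recent row per product as of the cutoff
  .ungroup()
  .map(function (g) { return g('reduction'); })
  .group('status')                              // bucket the surviving rows by status
  .map(function (doc) { return doc('id'); })
  .ungroup()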

objective-c equivalent to group by in groovy

Source array:
[ { a: 1, b: 1}, { a: 1, b: 2}, { a: 2, b: 3} ]
Target dictionary:
{ 1: [{a: 1, b: 1}, {a: 1, b: 2}], 2: [{ a: 2, b: 3}] }
So I want the objects in the source array grouped by their value of a.
In Groovy it's done using array.groupBy({ it.a }). Is there a nice equivalent in Objective-C?