Ramda - how to add new properties to nested object - ramda.js

I am trying to add new properties width and height to nested objects.
My data structure looks like this:
const graph = {
  id: 'root',
  children: [
    {
      id: 'n1'
    },
    {
      id: 'n2'
    }
  ]
};
I am trying to add unique width and height properties to each child, based on its id.
I tried R.lensPath. Here you can check it in the Ramda editor:
const widthLens = R.curry((id, data) => R.lensPath([
  'children',
  R.findIndex(R.whereEq({ id }), R.propOr([], 'children', data)),
  'width',
]));

const setWidth = widthLens('n1', graph);
R.set(setWidth, '100', graph);
And this is working almost as it should, but it adds only width. I also need to iterate over all children and return the same object with the new properties. It also looks overcomplicated, so any suggestions are more than welcome. Thank you.

There are several different ways of approaching this. But one possibility is to use custom lens types. (This is quite different from Ori Drori's excellent answer, which simply uses Ramda's lensPath.)
Ramda (disclaimer: I'm one of the authors) supplies only a few specific types of lenses -- one for simple properties, another for array indices, and a third for more complex object paths. But it allows you to build the ones you might need. And lenses are not designed only for simple object/array properties. Think of them instead as a framing of some set of your data, something you can focus on.
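To make that concrete (a quick illustrative sketch, not part of the original answer): R.lens builds a lens from a getter and a setter, so Ramda's own lensProp could be rebuilt like this:
// A minimal sketch of R.lens: rebuild lensProp from a getter
// (R.prop) and an immutable setter (R.assoc).
const myLensProp = (name) => R.lens (R.prop (name), R.assoc (name))

R.view (myLensProp ('x'), {x: 1, y: 2})             //=> 1
R.set (myLensProp ('x'), 10, {x: 1, y: 2})          //=> {x: 10, y: 2}
R.over (myLensProp ('x'), n => n + 1, {x: 1, y: 2}) //=> {x: 2, y: 2}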
So we can write a lens which focuses on the array element with a specific id. There are decisions to make about how we handle missing ids. I'll choose here -- if the id is not found -- to return undefined for a get and to append to the end on a set, but there are reasonable alternatives one might explore.
In terms of implementation, there is nothing special about id, so I will do this based on a specific named property and specialize it to id in a separate function. We could write this:
const lensMatch = (propName) => (key) => lens (
  find (propEq (propName, key)),
  (val, arr, idx = findIndex (propEq (propName, key), arr)) =>
    update (idx > -1 ? idx : length (arr), val, arr)
)
const lensId = lensMatch ('id')
It would work like this:
const lens42 = lensId (42)
const a = [{id: 17, x: 'a'}, {id: 42, x: 'b'}, {id: 99, x: 'c'}, {id: 57, x: 'd'}]
view (lens42, a) //=> {id: 42, x: 'b'}
set (lens42, {id: 42, x: 'z', foo: 'bar'}, a)
//=> [{id: 17, x: 'a'}, {id: 42, x: 'z', foo: 'bar'}, {id: 99, x: 'c'}, {id: 57, x: 'd'}]
over (lens42, assoc ('foo', 'qux'), a)
//=> [{id: 17, x: 'a'}, {id: 42, x: 'b', foo: 'qux'}, {id: 99, x: 'c'}, {id: 57, x: 'd'}]
But then we need to deal with our width and height properties. One very useful way to do this is to focus on an object with given particular properties, so that we get something like {width: 100, height: 200}, and we pass an object like this into set. It turns out to be quite elegant to write:
const lensProps = (props) => lens (pick (props), mergeLeft)
And we would use it like this:
const bdLens = lensProps (['b', 'd'])
const o = ({a: 1, b: 2, c: 3, d: 4, e: 5})
view (bdLens, o) //=> {b: 2, d: 4}
set (bdLens, {b: 42, d: 99}, o) //=> {a: 1, b: 42, c: 3, d: 99, e: 5}
over (bdLens, map (n => 10 * n), o) //=> {a: 1, b: 20, c: 3, d: 40, e: 5}
Combining these, we can develop a function used like this: setDimensions ('n1', {width: 100, height: 200}, graph). We first write a lens to handle the id and our dimensions:
const lensDimensions = (id) => compose (
  lensProp ('children'),
  lensId (id),
  lensProps (['width', 'height'])
)
And then we call the setter of this lens via
const setDimensions = (id, dimensions, o) =>
  set (lensDimensions (id), dimensions, o)
We can put this all together as
const lensMatch = (propName) => (key) => lens (
  find (propEq (propName, key)),
  (val, arr, idx = findIndex (propEq (propName, key), arr)) =>
    update (idx > -1 ? idx : length (arr), val, arr)
)

const lensProps = (props) => lens (pick (props), mergeLeft)

const lensId = lensMatch ('id')

const lensDimensions = (id) => compose (
  lensProp ('children'),
  lensId (id),
  lensProps (['width', 'height'])
)

const setDimensions = (id, dimensions, o) => set (lensDimensions (id), dimensions, o)

const graph = {id: 'root', children: [{id: 'n1'}, {id: 'n2'}]}

console .log (setDimensions ('n1', {width: 100, height: 200}, graph))
//=> {id: "root", children: [{id: "n1", height: 200, width: 100}, {id: "n2"}]}
.as-console-wrapper {max-height: 100% !important; top: 0}
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js"></script>
<script> const {find, propEq, findIndex, update, length, lens, pick, mergeLeft, compose, lensProp, set} = R </script>
This clearly involves more lines of code than does the answer from Ori Drori. But it creates the useful, reusable lens creators, lensMatch, lensId, and lensProps.
Note: as is, this will fail if we try to work with unknown ids. I have a fix for it, but I don't have the time right now to dig into why it fails; it's probably something to do with the slightly unintuitive way lenses compose. If I find time soon, I'll dig back into it. But for the moment, we can simply change lensProps to
const lensProps = (props) => lens (compose (pick (props), defaultTo ({})), mergeLeft)
And then an unknown id will append to the end:
console .log (setDimensions ('n3', {width: 100, height: 200}, graph))
//=> {id: "root", children: [{id: "n1"}, {id: "n2"}, {id: "n3", width : 100, height : 200}]}

You can use R.over with R.mergeLeft to add the properties to the object at the index:
const { curry, lensPath, findIndex, whereEq, propOr, over, mergeLeft } = R;
const graph = {"id":"root","children":[{"id":"n1"},{"id":"n2"}]};
const widthLens = curry((id, data) => lensPath([
  'children',
  findIndex(whereEq({ id }), propOr([], 'children', data)),
]));
const setValues = widthLens('n1', graph);
const result = over(setValues, mergeLeft({ width: 100, height: 200 }), graph);
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js" integrity="sha512-rZHvUXcc1zWKsxm7rJ8lVQuIr1oOmm7cShlvpV0gWf0RvbcJN6x96al/Rp2L2BI4a4ZkT2/YfVe/8YvB2UHzQw==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>

Related

Filter array based on a value in nested array with Ramda

I'm trying to learn Ramda, but I'm struggling with seemingly simple stuff. How would I write the filter and sort using Ramda's pipe?
const items = [
  { id: 1, subitems: [{name: 'Foo', price: 1000}]},
  { id: 2, subitems: [{name: 'Bar'}]},
  { id: 3, subitems: [{name: 'Foo', price: 500}]},
  { id: 4, subitems: [{name: 'Qux'}]},
]

const findFoo = value => value.name === 'Foo'

items
  .filter(item => item.subitems.find(findFoo))
  .sort((a, b) => a.subitems.find(findFoo).price > b.subitems.find(findFoo).price ? -1 : 1)
// [{ id: 3, subitems: [...] }, { id: 1, subitems: [...] }]
I've tried something like this but it returns an empty array:
R.pipe(
  R.filter(
    R.compose(
      R.path(['subitems']),
      R.propEq('name', 'Foo')
    )
  ),
  // Todo: sorting...
)(items)
Ramda's sortBy may help here. You could just do the following:
const findFoo = pipe (prop ('subitems'), find (propEq ('name', 'Foo')))

const fn = pipe (
  filter (findFoo),
  sortBy (pipe (findFoo, prop ('price')))
)
const items = [{id: 1, subitems: [{name: 'Foo', price: 1000}]}, {id: 2, subitems: [{name: 'Bar'}]}, {id: 3, subitems: [{name: 'Foo', price: 500}]}, {id: 4, subitems: [{name: 'Qux'}]}]
console .log (fn (items))
.as-console-wrapper {max-height: 100% !important; top: 0}
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
<script> const {pipe, prop, find, propEq, filter, sortBy} = R </script>
Obviously, if we tried, we could make this entirely point-free and address the concerns about double-extracting the Foo subobject. Here's a working version that converts the elements into [fooSubObject, element] pairs (where the former may be nil), then runs a filter to collect the elements where the fooSubObject is not nil, sorts by their price, and finally unwraps the elements from the pairs.
const fn = pipe (
  map (chain (pair) (pipe (prop ('subitems'), find (propEq ('name', 'Foo'))))),
  pipe (filter (head), sortBy (pipe (head, prop ('price')))),
  map (last)
)
const items = [{id: 1, subitems: [{name: 'Foo', price: 1000}]}, {id: 2, subitems: [{name: 'Bar'}]}, {id: 3, subitems: [{name: 'Foo', price: 500}]}, {id: 4, subitems: [{name: 'Qux'}]}]
console .log (fn (items))
.as-console-wrapper {max-height: 100% !important; top: 0}
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
<script> const {pipe, map, chain, pair, prop, find, propEq, filter, last, sortBy, head} = R</script>
But to my eyes, this is a horrible, unreadable mess. We can tame it a bit by extracting a helper. It takes a function to generate the gloss object we need for filtering and sorting, together with our main process function (the one that actually does the filtering and sorting). The helper uses the gloss function to create the [gloss, element] pairs as above, calls our process, and then extracts the second element from each resulting pair. As per Ori Drori's answer, we'll name that function dsu. It might look like this:
const dsu = (gloss, process) =>
  compose (map (last), process, map (chain (pair) (gloss)))

const fn = dsu (
  pipe (prop ('subitems'), find (propEq ('name', 'Foo'))),
  pipe (filter (head), sortBy (pipe (head, prop ('price'))))
)
const items = [{id: 1, subitems: [{name: 'Foo', price: 1000}]}, {id: 2, subitems: [{name: 'Bar'}]}, {id: 3, subitems: [{name: 'Foo', price: 500}]}, {id: 4, subitems: [{name: 'Qux'}]}]
console .log (fn (items))
.as-console-wrapper {max-height: 100% !important; top: 0}
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
<script> const {compose, map, last, chain, pair, pipe, prop, find, propEq, filter, head, sortBy} = R</script>
This is better, and maybe marginally acceptable. But I still prefer the first version above.
You can use a DSU sort:
Decorate - map the original array, and create a tuple with the price of the found item in the sub-array (or -Infinity if none) and the original object.
Sort by using the price (the 1st item in the tuple).
Undecorate - map again and extract the original object.
const { pipe, propEq, find, map, applySpec, prop, propOr, identity, sortWith, descend, head, last } = R

const findItemByProp = pipe(propEq, find)

const dsu = (value) => pipe(
  map(applySpec([ // decorate with the value of the item in the sub-array
    pipe(
      prop('subitems'), // get subitems
      findItemByProp('name', value), // find an item with the name
      propOr(-Infinity, 'price') // extract the price or use -Infinity as a fallback
    ),
    identity // get the original item
  ])),
  sortWith([descend(head)]), // sort using the decorative value
  map(last) // get the original item
)
const items = [{"id":1,"subitems":[{"name":"Foo","price":1000}]},{"id":2,"subitems":[{"name":"Bar"}]},{"id":3,"subitems":[{"name":"Foo","price":500}]},{"id":4,"subitems":[{"name":"Qux"}]}]
const result = dsu('Foo')(items)
console.log(result)
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js" integrity="sha512-t0vPcE8ynwIFovsylwUuLPIbdhDj6fav2prN9fEu/VYBupsmrmk9x43Hvnt+Mgn2h5YPSJOk7PMo9zIeGedD1A==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>

Item filtering but keeping track of filtered out items

Let's say I have a list of items like below, and I would like to apply a list of filters to it with Ramda.
const data = [
  {id: 1, name: "Andreas"},
  {id: 2, name: "Antonio"},
  {id: 3, name: "Bernhard"},
  {id: 4, name: "Carlos"}
]
No biggie: pipe(filter(predA), filter(predB), ...)(data)
The tricky part is I would like to define my filters with a key for tracking what items have been filtered out by which filter.
const filterBy = (key, pred) => subs => {
  const [res, rej] = partition(pred, subs)
  return [{[key]: rej.map(prop('id'))}, res]
}
This all screams monad chaining or a transducer, but I can't get my head around how to put it all together.
Let's say I have two predicates:
const isEven = filterBy('id', i => i % 2 === 0)
const startsWithA = filterBy('name', startsWith('A'))
I would like to get a result that looks like this: a tuple with a rejection map and a list of "accepted" items (isEven threw out 1 and 3, and startsWithA rejected 3 and 4):
[
  {
    id: [1, 3],
    name: [3, 4]
  },
  [{id: 2, name: "Antonio"}]
]
Vanilla JS version
I'm bothered by using the field name to describe the predicate. What happens if we also have, say, const nameTooLong = ({name}) => name .length < 8? Then how could we distinguish the two predicates in the output? So I would prefer to use descriptive predicate names, for instance,
[
  {isEven: [1, 3], startsWithA: [3, 4]},
  [{id: 2, name: "Antonio"}]
]
So that's what I do in this code:
const process = (preds) => (xs) => {
  const rej = Object .fromEntries (Object .entries (preds)
    .map (([k, v]) => [k, xs .filter (x => !v (x)) .map (x => x .id)])
  )
  const excluded = Object .values (rej) .flat ()
  return [rej, xs .filter (({id}) => !excluded .includes (id))]
}
const data = [{id: 1, name: "Andreas"}, {id: 2, name: "Antonio"}, {id: 3, name: "Bernhard"}, {id: 4, name: "Carlos"}]
console .log (process ({
  isEven: ({id}) => id % 2 === 0,
  startsWithA: ({name}) => name .startsWith ('A')
}) (data))
.as-console-wrapper {max-height: 100% !important; top: 0}
It would not be overly difficult to alter this to return something like your requested format.
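For instance (a quick sketch, my own addition), keying the predicates by field name reproduces the requested shape directly:
// Sketch: the requested output shape falls out of keying the
// predicates by field name instead of by descriptive name.
process ({
  id: ({id}) => id % 2 === 0,
  name: ({name}) => name .startsWith ('A')
}) (data)
//=> [{id: [1, 3], name: [3, 4]}, [{id: 2, name: "Antonio"}]]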
Using Ramda
The question was tagged Ramda, and I wrote this initially using Ramda tools, with a version that looks like this:
const process = (preds) => (xs) => {
  const rej = pipe (map (flip (reject) (xs)), map (pluck ('id'))) (preds)
  const excluded = uniq (flatten (values (rej)))
  return [rej, reject (pipe (prop ('id'), flip (includes) (excluded))) (xs)]
}
And we could continue to hack away at this until we made it entirely point-free. I just don't see any reason for that.
I'm a founder of Ramda and a big fan, but I don't see this as any more readable than the vanilla version. There is one exception: Ramda's map working on a plain object is much nicer than the Object .entries -> map -> Object .fromEntries dance in the vanilla code. I might use that feature and leave the rest in vanilla, though.
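To illustrate that point (a small sketch of my own, not from the original answer):
// R.map over a plain object transforms each value, keeping the keys:
R.map (n => n * 10, {a: 1, b: 2}) //=> {a: 10, b: 20}

// versus the vanilla dance:
Object .fromEntries (Object .entries ({a: 1, b: 2}) .map (([k, n]) => [k, n * 10]))
//=> {a: 10, b: 20}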
OK, so after some fiddling I came up with this kind of solution. Implementing a new monad seemed unnecessary, and overwriting fantasy-land/filter was also a bad idea, as my predicates are basically tagged.
This seems to have a good mix of readability and returns basically an extended array for further processing.
class Partition extends Array {
  constructor(items, filtered = {}) {
    super(...items)
    this.filtered = filtered
  }

  filterWithKey = (key, pred) => {
    const [ok, notOk] = partition(pred, this.slice())
    const filtered = mergeDeepWith(concat, this.filtered, {[key]: notOk})
    return new Partition(ok, filtered)
  }

  filter = pred => this.filterWithKey("", pred)
}
const res = new Partition([
  {id: 1, name: "Andreas"},
  {id: 2, name: "Antonio"},
  {id: 3, name: "Bernhard"},
  {id: 4, name: "Carlos"}
])
  .filterWithKey('id', ({id}) => id % 2 === 0)
  .filterWithKey('name', ({name}) => name.startsWith('A'))
const toIds = map(prop('id'))
const rejected = map(toIds, res.filtered)
const accepted = [...res]
console.log(rejected, accepted)

Ramda: Filtering through arrays with associated value

This is my initial dataset:
const arr1 = [{
  url: ['https://example.com/A.jpg?', 'https://example.com/B.jpg?', 'https://example.com/C.jpg?'],
  width: ['w=300', 'w=400', 'w=500'],
  type: [-1, 1, 2]
}];
By filtering with type: n => n > 0 and passing the result over arr1, I would like to produce arr2 with Ramda. If the nth value is excluded as the result of the filter, then the nth value in the other arrays is also excluded.
const arr2 = [{
  url: ['https://example.com/B.jpg?', 'https://example.com/C.jpg?'],
  width: ['w=400', 'w=500'],
  type: [1, 2]
}];
I tried the code below, but it's not working:
const isgt0 = n => n > 0;

const arr2 = R.applySpec({
  url: arr1,
  width: arr1,
  type: R.filter(isgt0),
});

console.log(arr2(arr1));
Once I get the desired object, I intend to R.transpose the array to generate URLs like: [https://example.com/B.jpg?w=400, https://example.com/C.jpg?w=500]
The main steps are:
Get the arrays of the values with R.props:
[-1, 1, 2]
['w=300', 'w=400', 'w=500']
['https://example.com/A.jpg?', 'https://example.com/B.jpg?', 'https://example.com/C.jpg?']
Transpose them to arrays of items with the same index:
[-1, 'w=300', 'https://example.com/A.jpg?']
[1, 'w=400', 'https://example.com/B.jpg?']
[1, 'w=500', 'https://example.com/C.jpg?']
Filter by index 0 (the original type), transpose back, and then reconstruct the object using R.applySpec.
const { pipe, props, transpose, filter, propSatisfies, gt, __, applySpec, nth, map } = R
const filterProps = pipe(
  props(['type', 'width', 'url']), // get an array of property values
  transpose, // convert to arrays of all property values with the same index
  filter(propSatisfies(gt(__, 0), 0)), // filter by the type (index 0)
  transpose, // convert back to arrays of each type
  applySpec({ // reconstruct the object
    type: nth(0),
    width: nth(1),
    url: nth(2),
  })
)
const data = [
  {
    type: [-1, 1, 2],
    width: ['w=300', 'w=400', 'w=500'],
    url: [
      'https://example.com/A.jpg?',
      'https://example.com/B.jpg?',
      'https://example.com/C.jpg?',
    ],
  }
]
const result = map(filterProps, data)
console.log(result)
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js" integrity="sha512-3sdB9mAxNh2MIo6YkY05uY1qjkywAlDfCf5u1cSotv6k9CZUSyHVf4BJSpTYgla+YHLaHG8LUpqV7MHctlYzlw==" crossorigin="anonymous"></script>
Another way to think about it more generically is to filter using a configuration object that holds the tests to apply for various properties. Here it is only type, but it's easy enough to imagine others.
My solution for this problem is configured with this object:
{
  type: n => n > 0
}
This solution uses many Ramda functions, but it also uses Array.prototype.filter to get access to the index parameter of filter. We could choose R.addIndex instead, but I would only bother if I were trying to make it point-free, which doesn't seem worthwhile here. This is what it might look like:
const filterOnProps = (config) => (obj) => {
  const test = allPass (map (([k, v]) => (i) => v (obj [k] [i]), toPairs (config)))
  const indices = filter (test) (range (0, values (obj) [0] .length))
  return map (a => a .filter ((_, i) => contains (i, indices)), obj)
}
const transform = map (filterOnProps ({type: n => n > 0}))
const arr1 = [{url: ['https://example.com/A.jpg?', 'https://example.com/B.jpg?', 'https://example.com/C.jpg?'], width: ['w=300', 'w=400', 'w=500'], type: [-1, 1, 2]}]
console .log (transform (arr1))
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js"></script>
<script> const {allPass, map, toPairs, filter, range, values, contains} = R </script>
With obj in scope, we create test, which will be somewhat equivalent to
allPass([
  i => obj['type'][i] > 0
])
If we had more conditions in the original configuration object, they would also be in this list.
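For example (a hypothetical configuration of my own, just for illustration):
// A hypothetical two-condition config: keep only indices whose
// type is positive AND whose width is not 'w=300'.
const config = {
  type: n => n > 0,
  width: w => w !== 'w=300'
}
// test would then behave like:
// allPass ([i => obj['type'][i] > 0, i => obj['width'][i] !== 'w=300'])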
Then we filter the indices, to see on which ones the record passes this test.
Finally we map over our object, filtering each array to keep only those where the index is in the list.
While this should work, and is reasonably generic, it points to a problem with your data structure. I would suggest that as much as possible, you shy away from situations where structures are dependent on shared indices. To my mind the only reasonable use of that is for a relatively compact serialization format. On deserialization, I would immediately rehydrate that to something more useful, perhaps something like
const data = [
  {url: 'https://example.com/A.jpg?', width: 'w=300', type: -1},
  {url: 'https://example.com/B.jpg?', width: 'w=400', type: 1},
  {url: 'https://example.com/C.jpg?', width: 'w=500', type: 2}
]
This structure is much easier to work with. For example, data.filter(({type}) => type > 0) would be the equivalent to the work above, if you started with this structure.
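A hedged sketch of that rehydration step (my own illustration, assuming the index-aligned arr1 above):
// Sketch: rehydrate the index-aligned structure into one record per
// index by transposing the property values.
const rehydrate = (obj) =>
  R.transpose (R.props (['url', 'width', 'type'], obj))
    .map (([url, width, type]) => ({url, width, type}))

R.chain (rehydrate, arr1)
//=> [{url: 'https://example.com/A.jpg?', width: 'w=300', type: -1},
//    {url: 'https://example.com/B.jpg?', width: 'w=400', type: 1},
//    {url: 'https://example.com/C.jpg?', width: 'w=500', type: 2}]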
This might help a bit
const gte1 = R.filter(R.gte(R.__, 1));

const fn = R.map(
  R.evolve({
    type: gte1,
  }),
);
// =====
const data = [
  {
    type: [-1, 1, 2],
    width: ['w=300', 'w=400', 'w=500'],
    url: [
      'https://example.com/A.jpg?',
      'https://example.com/B.jpg?',
      'https://example.com/C.jpg?',
    ],
  }
];

console.log(
  fn(data),
);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js" integrity="sha512-rZHvUXcc1zWKsxm7rJ8lVQuIr1oOmm7cShlvpV0gWf0RvbcJN6x96al/Rp2L2BI4a4ZkT2/YfVe/8YvB2UHzQw==" crossorigin="anonymous"></script>

Ramdajs, group array with arguments

List to group:
const arr = [
  {
    "Global Id": "1231",
    "TypeID": "FD1",
    "Size": 160,
    "Flöde": 55,
  },
  {
    "Global Id": "5433",
    "TypeID": "FD1",
    "Size": 160,
    "Flöde": 100,
  },
  {
    "Global Id": "50433",
    "TypeID": "FD1",
    "Size": 120,
    "Flöde": 100,
  },
  {
    "Global Id": "452",
    "TypeID": "FD2",
    "Size": 120,
    "Flöde": 100,
  },
]
Input to function which specifies what keys to group:
const columns = [
  {
    "dataField": "TypeID",
    "summarize": false,
  },
  {
    "dataField": "Size",
    "summarize": false,
  },
  {
    "dataField": "Flöde",
    "summarize": true,
  },
]
Expected output:
const output = [
  {
    "TypeID": "FD1",
    "Size": 160,
    "Flöde": 155, // 55 + 100
    "nrOfItems": 2
  },
  {
    "TypeID": "FD1",
    "Size": 120,
    "Flöde": 100,
    "nrOfItems": 1
  },
  {
    "TypeID": "FD2",
    "Size": 120,
    "Flöde": 100,
    "nrOfItems": 1
  }
]
// nrOfItems adds up to 4 (2 + 1 + 1), the total number of items.
Function:
const groupArr = (columns) => R.pipe(...);
The "summarize" property tells if the property should summarize or not.
The dataset is very large, +100k items. So I don't want to iterate more than necessary.
I've looked at R.groupBy, but I'm not sure it can be applied here.
Maybe something with R.reduce? Store the group in the accumulator, summarize values and add to count if the group already exists? Need to find the group fast so maybe store the group as a key?
Or is it better to use vanilla javascript in this case?
Here's an answer in vanilla JavaScript first, because I'm not super familiar with the Ramda API. I'm pretty sure the approach is quite similar with Ramda.
The code has comments explaining every step. I'll try to follow up with a rewrite in Ramda.
const arr=[{"Global Id":"1231",TypeID:"FD1",Size:160,"Flöde":55},{"Global Id":"5433",TypeID:"FD1",Size:160,"Flöde":100},{"Global Id":"50433",TypeID:"FD1",Size:120,"Flöde":100},{"Global Id":"452",TypeID:"FD2",Size:120,"Flöde":100}],columns=[{dataField:"TypeID",summarize:!1},{dataField:"Size",summarize:!1},{dataField:"Flöde",summarize:!0}];
// The columns that don't summarize
// give us the keys we need to group on
const groupKeys = columns
  .filter(c => c.summarize === false)
  .map(g => g.dataField);

// We compose a hash function that creates
// a hash out of all the items' properties
// that are in our groupKeys
const groupHash = groupKeys
  .map(k => x => x[k])
  .reduce(
    (f, g) => x => `${f(x)}___${g(x)}`,
    () => "GROUPKEY"
  );

// The columns that summarize tell us which
// properties to sum for the items within the
// same group
const sumKeys = columns
  .filter(c => c.summarize === true)
  .map(c => c.dataField);

// Again, we compose into a single function.
// This function concats two items, taking the
// "last" item, but applying the sum logic
// for keys in sumKeys
const concats = sumKeys
  .reduce(
    (f, k) => (a, b) => Object.assign(f(a, b), {
      [k]: (a[k] || 0) + b[k]
    }),
    (a, b) => Object.assign({}, a, b)
  )

// Now, we take our data and group by the groupHash
const groups = arr.reduce(
  (groups, x) => {
    const k = groupHash(x);
    if (!groups[k]) groups[k] = [x];
    else groups[k].push(x);
    return groups;
  },
  {}
);

// These are the keys we want our final objects to have...
const allKeys = ["nrTotal"]
  .concat(groupKeys)
  .concat(sumKeys);

// ...baked into a helper to remove other keys
const cleanKeys = obj => Object.assign(
  ...allKeys.map(k => ({ [k]: obj[k] }))
);

// With the items neatly grouped, we can reduce each
// group using the composed concatenator
const items = Object
  .values(groups)
  .flatMap(
    xs => cleanKeys(
      xs.reduce(concats, { nrTotal: xs.length })
    ),
  );

console.log(items);
Here's an attempt at porting to Ramda, but I didn't get much further than replacing the vanilla js methods with the Ramda equivalents. Curious to see which cool utilities and functional concepts I missed! I'm sure somebody more knowledgable on the Ramda specifics will chime in!
const arr=[{"Global Id":"1231",TypeID:"FD1",Size:160,"Flöde":55},{"Global Id":"5433",TypeID:"FD1",Size:160,"Flöde":100},{"Global Id":"50433",TypeID:"FD1",Size:120,"Flöde":100},{"Global Id":"452",TypeID:"FD2",Size:120,"Flöde":100}],columns=[{dataField:"TypeID",summarize:!1},{dataField:"Size",summarize:!1},{dataField:"Flöde",summarize:!0}];
const [sumCols, groupCols] = R.partition(
  R.prop("summarize"),
  columns
);

const groupKeys = R.pluck("dataField", groupCols);
const sumKeys = R.pluck("dataField", sumCols);

const grouper = R.reduce(
  (f, g) => x => `${f(x)}___${g(x)}`,
  R.always("GROUPKEY"),
  R.map(R.prop, groupKeys)
);

const reducer = R.reduce(
  (f, k) => (a, b) => R.mergeRight(
    f(a, b),
    { [k]: (a[k] || 0) + b[k] }
  ),
  R.mergeRight,
  sumKeys
);

const allowedKeys = new Set(
  ["nrTotal"].concat(sumKeys).concat(groupKeys)
);

const cleanKeys = R.pipe(
  R.toPairs,
  R.filter(([k, v]) => allowedKeys.has(k)),
  R.fromPairs
);

const items = R.flatten(
  R.values(
    R.map(
      xs => cleanKeys(
        R.reduce(
          reducer,
          { nrTotal: xs.length },
          xs
        )
      ),
      R.groupBy(grouper, arr)
    )
  )
);

console.log(items);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.min.js"></script>
Here's my initial approach. Everything but summarize is a helper function which I suppose could be inlined if you really wanted. I find it cleaner with this separation.
const getKeys = (val) => pipe (
  filter (propEq ('summarize', val)),
  pluck ('dataField')
)

const keyMaker = (columns, keys = getKeys (false) (columns)) => pipe (
  pick (keys),
  JSON .stringify
)

const makeReducer = (
  columns,
  toSum = getKeys (true) (columns),
  toInclude = getKeys (false) (columns),
) => (a, b) => ({
  ...mergeAll (map (k => ({ [k]: b[k] }), toInclude)),
  ...mergeAll (map (k => ({ [k]: (a[k] || 0) + b[k] }), toSum)),
  nrOfItems: (a .nrOfItems || 0) + 1
})

const summarize = (columns) => pipe (
  groupBy (keyMaker (columns)),
  values,
  map (reduce (makeReducer (columns), {}))
)
const arr = [{"Flöde": 55, "Global Id": "1231", "Size": 160, "TypeID": "FD1"}, {"Flöde": 100, "Global Id": "5433", "Size": 160, "TypeID": "FD1"}, {"Flöde": 100, "Global Id": "50433", "Size": 120, "TypeID": "FD1"}, {"Flöde": 100, "Global Id": "452", "Size": 120, "TypeID": "FD2"}]
const columns = [{"dataField": "TypeID", "summarize": false}, {"dataField": "Size", "summarize": false}, {"dataField": "Flöde", "summarize": true}]
console .log (
summarize (columns) (arr)
)
<script src="https://bundle.run/ramda#0.26.1"></script><script>
const {pipe, filter, propEq, pluck, pick, mergeAll, map, groupBy, values, reduce} = ramda</script>
There is a lot of overlap with the solution from Joe, but also some real differences. His was already posted when I saw the question, but I wanted my own approach not to be influenced, so I didn't look until I wrote the above. Note the difference in our hash functions. Mine does JSON.stringify on values like {TypeID: "FD1", Size: 160} while Joe's creates "GROUPKEY___FD1___160". I think I like mine better for the simplicity. On the other hand, Joe's solution is definitely better than mine in handling nrOfItems. I updated it on each reduce iteration and have to use an || 0 to handle the initial case. Joe simply starts the fold with the already-known value. But overall, the solutions are quite similar.
You mention wanting to reduce the number of passes through the data. The way I write Ramda code tends not to help with this. This code iterates the whole list to group it into like items then iterates through each of those groups to fold down to individual values. (Also there is a perhaps a minor iteration in values.) These could certainly be changed to combine those two iterations. It might even make for shorter code. But to my mind, it would become harder to understand.
Update
I was curious about the single-pass approach, and found that I could use all the infrastructure I built for the multi-pass one, rewriting only the main function:
const summarize2 = (columns) => (
  arr,
  makeKey = keyMaker (columns),
  reducer = makeReducer (columns)
) => values (reduce (
  (a, item, key = makeKey (item)) =>
    assoc (key, reducer (key in a ? a[key] : {}, item), a),
  {},
  arr
))

console .log (
  summarize2 (columns) (arr)
)
I wouldn't choose this over the original unless testing showed that this code was a bottleneck in my application. But it's not as much more complex as I thought it would be, and it does everything in one iteration (well, except for whatever values does.) Interestingly, it makes me change my mind a bit about the handling of nrOfItems. My helper code just worked in this version, and I never had to know the total size of the group. That wouldn't have happened if I used Joe's approach.

Ramda js maximum elements

I wonder what the best way would be to get the max elements from an array.
For example, I have regions with temperatures:
let regions = [{name: 'alabama', temp: 20}, {name: 'newyork', temp: 30}...];
It can be done with one line, but I want it to be performant.
I want to iterate over the array only once.
If more than one region has the same max temperature, I want to get them all.
Do you know a way to do it with more compact code than procedural code with temporary variables and so on?
If it can be done in a "functional programming" way, that would be very good.
This is my sample procedural code:
const regions = [{name: 'asd', temp: 13}, {name: 'fdg', temp: 30}, {name: 'asdsd', temp: 30}]

let maxes = []
let max = 0
for (let reg of regions) {
  if (reg.temp > max) {
    maxes = [reg]
    max = reg.temp
  } else if (reg.temp == max) {
    maxes.push(reg)
  }
  // regions below the current max are simply skipped
}
Another Ramda approach:
const {reduce, append} = R

const regions = [{name: 'asd', temp: 13}, {name: 'fdg', temp: 30}, {name: 'asdsd', temp: 30}]

const maxTemps = reduce(
  (tops, curr) =>
    curr.temp > tops[0].temp ? [curr]
    : curr.temp === tops[0].temp ? append(curr, tops)
    : tops,
  [{temp: -Infinity}]
)

console.log(maxTemps(regions))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.js"></script>
This version only iterates the list once. But it's a bit ugly.
I would usually prefer the version from Ori Drori unless testing showed that the performance is a problem in my application. Even with the fix from my comment, I think that code is easier to understand than this one. (That wouldn't be true if there were only two cases, < versus >= for instance. But when there are three, this gets hard to read, however we might format it.)
But if performance is really a major issue, then your original code is probably faster than this one too.
Use R.pipe to:
Group the objects by temp's value.
Convert the object of groups to an array of pairs.
Reduce the pairs to the one with the max key (the temp).
Return the value from the pair.
const { pipe, groupBy, prop, toPairs, reduce, maxBy, head, last } = R;

const regions = [
  {name: 'california', temp: 30},
  {name: 'alabama', temp: 20},
  {name: 'newyork', temp: 30}
];

const result = pipe(
  groupBy(prop('temp')),
  toPairs,
  reduce(maxBy(pipe(head, Number)), [-Infinity]),
  last
)(regions);
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.js"></script>
A different approach to this (albeit a little more verbose) is to create some helpers to generically take care of folding over a list of things to extract the list of maximums.
We can do this by defining a Semigroup wrapper class (could also be a plain function instead of a class).
const MaxManyBy = fn => class MaxMany {
  constructor(values) {
    this.values = values
  }

  concat(other) {
    const otherValue = fn(other.values[0]),
          thisValue = fn(this.values[0])
    return otherValue > thisValue ? other
         : otherValue < thisValue ? this
         : new MaxMany(this.values.concat(other.values))
  }

  static of(x) {
    return new MaxMany([x])
  }
}
The main purpose of this class is to be able to combine two lists by comparing the values contained within, with the invariant that each list contains the same comparable values.
We now can introduce a new helper function which applies some function to each value of a list and then combines them all using concat.
const foldMap = (fn, [x, ...xs]) =>
  xs.reduce((acc, next) => acc.concat(fn(next)), fn(x))
With these helpers, we can now create a function that pulls the maximum temperatures from your example.
const maxTemps = xs =>
  foldMap(MaxManyBy(({temp}) => temp).of, xs).values

maxTemps([
  {name: 'california', temp: 30},
  {name: 'alabama', temp: 20},
  {name: 'newyork', temp: 30}
])
//=> [{"name": "california", "temp": 30}, {"name": "newyork", "temp": 30}]
There is an assumption here that the list being passed to foldMap is non-empty. If there's a chance that you'll encounter an empty list then you will need to modify accordingly to return a default value of some kind (or wrap it in a Maybe type if no sane default exists).
See the complete snippet below.
const MaxManyBy = fn => class MaxMany {
  constructor(values) {
    this.values = values
  }

  concat(other) {
    const otherValue = fn(other.values[0]),
          thisValue = fn(this.values[0])
    return otherValue > thisValue ? other
         : otherValue < thisValue ? this
         : new MaxMany(this.values.concat(other.values))
  }

  static of(x) {
    return new MaxMany([x])
  }
}

const foldMap = (fn, [x, ...xs]) =>
  xs.reduce((acc, next) => acc.concat(fn(next)), fn(x))

const maxTemps = xs =>
  foldMap(MaxManyBy(({temp}) => temp).of, xs).values

const regions = [
  {name: 'california', temp: 30},
  {name: 'alabama', temp: 20},
  {name: 'newyork', temp: 30}
]

console.log(maxTemps(regions))
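Picking up the earlier caveat about empty lists, one possible guard (a sketch of my own, not part of the answer):
// Sketch: short-circuit to [] before foldMap destructures the list.
const safeMaxTemps = xs =>
  xs.length === 0 ? [] : maxTemps(xs)

safeMaxTemps([]) //=> []
safeMaxTemps(regions) //=> [{name: 'california', temp: 30}, {name: 'newyork', temp: 30}]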