Modify Array of Objects using Ramda - ramda.js

I have an array of objects like the one below:
[
  {
    "myValues": []
  },
  {
    "myValues": [],
    "values": [
      {"a": "x", "b": "1"},
      {"a": "y", "b": "2"}
    ],
    "availableValues": [],
    "selectedValues": []
  }
]
Also, when I iterate over the array, any object in which the "values" key is present should be converted as below:
[
  {
    "myValues": []
  },
  {
    "myValues": [],
    "values": [
      {"a": "x", "b": "1"},
      {"a": "y", "b": "2"}
    ],
    "availableValues": [
      {"a": "x", "b": "1"},
      {"a": "y", "b": "2"}
    ],
    "selectedValues": ["x", "y"]
  }
]
I tried this with Ramda's functional programming approach, but got no result:
let findValues = x => {
  if (R.has('values')(x)) {
    let availableValues = R.prop('values')(x);
    let selectedValues = R.pluck('a')(availableValues);
    R.map(R.evolve({
      availableValues: availableValues,
      selectedValues: selectedValues
    }))
  }
}
R.map(findValues, arrObj);

Immutability
First of all, you need to be careful with your verbs. Ramda will not modify your data structure. Immutability is central to Ramda, and to functional programming in general. If you're actually looking to mutate your data, Ramda will not be your toolkit. However, Ramda will transform it, and it can do so in a relatively memory-friendly way, reusing those parts of your data structure not themselves transformed.
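As a quick illustration (a minimal sketch, not part of the original post), a single-property update with R.assoc leaves the input untouched and shares the unchanged branches:
const obj = {a: {b: 1}, c: 2};
const next = R.assoc('c', 3, obj); // returns a new object; does not mutate obj
obj.c;            //=> 2 (the original is unchanged)
next.a === obj.a; //=> true (the untouched branch is reused, not copied)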
Fixing your approach
Let's first look at cleaning up several problems in your function:
let findValues = x => {
  if (R.has('values')(x)) {
    let availableValues = R.prop('values')(x);
    let selectedValues = R.pluck('a')(availableValues);
    R.map(R.evolve({
      availableValues: availableValues,
      selectedValues: selectedValues
    }))
  }
}
The first, most obvious issue is that this function does not return anything. As mentioned above, Ramda does not mutate your data. So a function that does not return something is useless when supplied to Ramda's map. We can fix this by returning the result of the map(evolve) call and returning the original value when the if condition fails:
let findValues = x => {
  if (R.has('values')(x)) {
    let availableValues = R.prop('values')(x);
    let selectedValues = R.pluck('a')(availableValues);
    return R.map(R.evolve({
      availableValues: availableValues,
      selectedValues: selectedValues
    }))
  }
  return x;
}
Next, the map call makes no sense. evolve already iterates the properties of the object. So we can remove that. But we also need to apply evolve to your input value:
let findValues = x => {
  if (R.has('values')(x)) {
    let availableValues = R.prop('values')(x);
    let selectedValues = R.pluck('a')(availableValues);
    return R.evolve({
      availableValues: availableValues,
      selectedValues: selectedValues
    }, x)
  }
  return x;
}
There's one more problem. Evolve expects an object containing functions that transform values. For instance,
evolve({
  a: n => n * 10,
  b: n => n + 5
})({a: 1, b: 2, c: 3}) //=> {a: 10, b: 7, c: 3}
Note that the values of the properties in the object supplied to evolve are functions. This code is supplying plain values instead. We could wrap those values with R.always, via availableValues: R.always(availableValues), but I think it is simpler to use a lambda with a parameter of "_", which signifies that the parameter is unimportant: availableValues: _ => availableValues. You could also write availableValues: () => availableValues, but the underscore makes explicit that the supplied value is ignored.
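For comparison, here is what the R.always form looks like on a small object (a minimal sketch, not part of the original code):
R.evolve({a: R.always(10)}, {a: 1, b: 2}); //=> {a: 10, b: 2}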
Fixing this gets a function that works:
let findValues = x => {
  if (R.has('values')(x)) {
    let availableValues = R.prop('values')(x);
    let selectedValues = R.pluck('a')(availableValues);
    return R.evolve({
      availableValues: _ => availableValues,
      selectedValues: _ => selectedValues
    }, x)
  }
  return x;
}
let arrObj = [{myValues: []}, {availableValues: [], myValues: [], selectedValues: [], values: [{a: "x", b: "1"}, {a: "y", b: "2"}]}];
console.log(R.map(findValues, arrObj))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.js"></script>
If you run this snippet here, you will also see an interesting point, that the resulting availableValues property is a reference to the original values one, not a separate copy of it. This helps show that Ramda is reusing what it can of your original data. You can also see this by noting that R.map(findValues, arrObj)[0] === arrObj[0] //=> true.
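A quick way to check this structural sharing yourself (a small sketch building on the snippet above):
const result = R.map(findValues, arrObj);
result[0] === arrObj[0];                        //=> true (the untransformed element is reused)
result[1].availableValues === arrObj[1].values; //=> true (a shared reference, not a copy)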
Other Approaches
I wrote my own versions of this, not starting from yours. Here is one working function:
const findValues = when(
  has('values'),
  x => evolve({
    availableValues: _ => prop('values', x),
    selectedValues: _ => pluck('a', prop('values', x))
  }, x)
)
Note the use of when, which captures the notion of "when this predicate is true for your value, use the following transformation; otherwise use your original value." We pass the predicate has('values'), essentially the same as above. Our transformation is similar to yours, using evolve, but it skips the temporary variables, at the minor cost of repeating the prop call.
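If when is new to you, here is a tiny illustration of its behavior (a minimal sketch, using the same destructured functions as above):
when(has('values'), always('transformed'))({values: []}); //=> 'transformed'
when(has('values'), always('transformed'))({other: []});  //=> {other: []} (returned untouched)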
Something still bothers me about this version. Using those lambdas with "_" is really a misuse of evolve. I think this is cleaner:
const findValues = when(
  has('values'),
  x => merge(x, {
    availableValues: prop('values', x),
    selectedValues: pluck('a', prop('values', x))
  })
)
Here we use merge, which is a pure-function version of Object.assign, adding the properties of the second object to those of the first and replacing them when they conflict.
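For example (a minimal illustration):
merge({a: 1, b: 2}, {b: 3, c: 4}); //=> {a: 1, b: 3, c: 4}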
This is what I would choose.
Another solution
Since I started writing this answer, Scott Christopher posted another one, essentially
const findValues = R.map(R.when(R.has('values'), R.applySpec({
  myValues: R.prop('myValues'),
  values: R.prop('values'),
  availableValues: R.prop('values'),
  selectedValues: R.o(R.pluck('a'), R.prop('values'))
})))
This is a great solution. It also is nicely point-free. If your input objects are fixed to those properties, then I would choose this over my merge version. If your actual objects contain many more properties, or if there are dynamic properties that should be included in the output only if they are in the input, then I would choose my merge solution. The point is that Scott's solution is cleaner for this specific structure, but mine scales better to more complex objects.
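To make that trade-off concrete, here is a small sketch using a hypothetical extra property that is not part of the spec:
const obj = {values: [{a: 'x', b: '1'}], extra: 'untouched'};
R.applySpec({values: R.prop('values')})(obj); //=> {values: [{a: 'x', b: '1'}]} -- extra is dropped
R.merge(obj, {availableValues: obj.values});  //=> keeps extra and adds availableValues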
Step-by-step
This is not a terribly complex transformation, but it is non-trivial. One simple way to build something like this is to build it up in very minor stages. I often do this in the Ramda REPL so I can see the results as I go.
My process looked something like this:
Step 1
Make sure I properly transform only the correct values:
const findValues = when(
  has('values'),
  always('foo')
)
map(findValues, arrObj) //=> [{myValues: []}, "foo"]
Step 2
Make sure my transformer function is given the proper values.
const findValues = when(
  has('values'),
  keys
)
map(findValues, arrObj)
//=> [{myValues: []}, ["myValues", "values", "availableValues", "selectedValues"]]
Here identity would work just as well as keys. Anything that demonstrates that the right object is passed would be fine.
Step 3
Test that merge will work properly here.
const findValues = when(
  has('values'),
  x => merge(x, {foo: 'bar'})
)
map(findValues, arrObj)
//=> [
//     {myValues: []},
//     {
//       myValues: [],
//       values: [{a: "x", b: "1"}, {a: "y", b: "2"}],
//       availableValues: [],
//       selectedValues: [],
//       foo: "bar"
//     }
//   ]
So I can merge two objects properly. Note that this keeps all the existing properties of my original object.
Step 4
Now, actually transform the first value I want:
const findValues = when(
  has('values'),
  x => merge(x, {
    availableValues: prop('values', x)
  })
)
map(findValues, arrObj)
//=> [
//     {myValues: []},
//     {
//       myValues: [],
//       values: [{a: "x", b: "1"}, {a: "y", b: "2"}],
//       availableValues: [{a: "x", b: "1"}, {a: "y", b: "2"}],
//       selectedValues: []
//     }
//   ]
Step 5
This does what I want. So now add the other property:
const findValues = when(
  has('values'),
  x => merge(x, {
    availableValues: prop('values', x),
    selectedValues: pluck('a', prop('values', x))
  })
)
map(findValues, arrObj)
//=> [
//     {myValues: []},
//     {
//       myValues: [],
//       values: [{a: "x", b: "1"}, {a: "y", b: "2"}],
//       availableValues: [{a: "x", b: "1"}, {a: "y", b: "2"}],
//       selectedValues: ["x", "y"]
//     }
//   ]
And this finally gives the result I want.
This is a quick process. I can run and test very quickly in the REPL, iterating my solution until I get something that does what I want.

R.evolve allows you to update the values of an object by passing them individually to their corresponding function in the supplied object, though it doesn't allow you to reference the values of other properties.
However, R.applySpec similarly takes an object of functions and returns a new function that will pass its arguments to each of the functions supplied in the object to create a new object. This will allow you to reference other properties of your surrounding object.
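As a small illustration of applySpec (a minimal sketch, separate from the example below):
const summarize = R.applySpec({first: R.head, count: R.length});
summarize(['a', 'b', 'c']); //=> {first: 'a', count: 3}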
We can also make use of R.when to apply some transformation only when it satisfies a given predicate.
const input = [
  {
    "myValues": []
  },
  {
    "myValues": [],
    "values": [
      {"a": "x", "b": "1"},
      {"a": "y", "b": "2"}
    ],
    "availableValues": [],
    "selectedValues": []
  }
]
const fn = R.map(R.when(R.has('values'), R.applySpec({
  myValues: R.prop('myValues'),
  values: R.prop('values'),
  availableValues: R.prop('values'),
  selectedValues: R.o(R.pluck('a'), R.prop('values'))
})))
const expected = [
  {
    "myValues": []
  },
  {
    "myValues": [],
    "values": [
      {"a": "x", "b": "1"},
      {"a": "y", "b": "2"}
    ],
    "availableValues": [
      {"a": "x", "b": "1"},
      {"a": "y", "b": "2"}
    ],
    "selectedValues": ["x", "y"]
  }
]
console.log(R.equals(fn(input), expected))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>

Related

Want to parse a string BLOB as JSON, modify a few fields and parse it back to a string in BigQuery

So I have a column which contains JSONs as string BLOBs. For example,
{
  "a": "a-1",
  "b": "b-1",
  "c": {
    "c-1": "c-1-1",
    "c-2": "c-2-1"
  },
  "d": [
    {
      "k": "v1"
    },
    {
      "k": "v2"
    },
    {
      "k": "v3"
    }
  ]
}
Now my use case is to parse the key b, hash its value, assign it back to the key b, and store the result back as a string in BigQuery.
I initially tried a lazy approach where I extract only the key b using the JSON_EXTRACT_SCALAR function in BigQuery, and for the other keys (like c and d, which I don't want to modify) I use the JSON_EXTRACT function. Then, after hashing the key b, I convert everything back to a string. Here is the query:
SELECT
  TO_JSON_STRING(
    STRUCT(
      JSON_EXTRACT(COL_NAME, "$.a") AS a,
      MD5(JSON_EXTRACT_SCALAR(COL_NAME, "$.b")) AS b,
      JSON_EXTRACT(COL_NAME, "$.c") AS c,
      JSON_EXTRACT(COL_NAME, "$.d") AS d))
FROM
  dataset.TABLE
But the issue with this query is that the nested JSON objects get converted to strings, with their double quotes escaped, because of TO_JSON_STRING (I tried using CAST AS STRING on top of STRUCT, but it isn't supported). For example, the output row for this query looks like this:
{
  "a": "a-1",
  "b": "b-1",
  "c": "{
    \"c-1\": \"c-1-1\",
    \"c-2\": \"c-2-1\"
  }",
  "d": "[
    {
      \"k\": \"v1\"
    },
    {
      \"k\": \"v2\"
    },
    {
      \"k\": \"v3\"
    }
  ]"
}
I can achieve the required output if I use the JSON_EXTRACT and JSON_EXTRACT_SCALAR functions on every key (and on every nested key), but this approach isn't scalable as I have close to 200 keys and many of them are nested 2-3 levels deep.
Can anyone suggest a better approach of achieving this? TIA
This should work
declare _json string default """{
  "a": "a-1",
  "b": "b-1",
  "c": {
    "c-1": "c-1-1",
    "c-2": "c-2-1"
  },
  "d": [
    {
      "k": "v1"
    },
    {
      "k": "v2"
    },
    {
      "k": "v3"
    }
  ]
}""";
SELECT regexp_replace(_json, r'"b": "(\w+)-(\w+)"', concat('"b":', TO_JSON_STRING(MD5(json_extract_scalar(_json, "$.b")))))
output
{
  "a": "a-1",
  "b":"sdENsgFsL4PBOyX8sXDN6w==",
  "c": {
    "c-1": "c-1-1",
    "c-2": "c-2-1"
  },
  "d": [
    {
      "k": "v1"
    },
    {
      "k": "v2"
    },
    {
      "k": "v3"
    }
  ]
}
If you need a more specific regex, please share examples of the b values.

RxJS marble testing: assertion failure log hard to understand

I have this RxJS testing code. It fails deliberately, because I want to show you the failure log, which I find hard to understand, or at least cannot read fluently.
Can someone explain what "$[i].frame = i' to equal i''" means?
import { delay } from 'rxjs/operators';
import { TestScheduler } from 'rxjs/testing';
describe('Rxjs Testing', () => {
  let s: TestScheduler;
  beforeEach(() => {
    s = new TestScheduler((actual, expected) => {
      expect(actual).toEqual(expected);
    });
  });
  it('should not work', () => {
    s.run(m => {
      const source = s.createColdObservable('-x-y-z|');
      const expected = '-x-y-z|'; // correct expected value is '---x-y-z|'
      const destination = source.pipe(delay(2));
      m.expectObservable(destination).toBe(expected);
    });
  });
});
To help you better understand what is going on with the output, let's first try to follow the statements from the console. There is a link that points to where the error happened: the 10th line of the code, which is this line:
expect(actual).toEqual(expected);
Setting a breakpoint on this line and running the test in debug mode reveals the actual and expected objects.
The actual values are (represented in JSON format):
[
  {
    "frame": 3,
    "notification": {"kind": "N", "value": "x", "hasValue": true}
  },
  {
    "frame": 5,
    "notification": {"kind": "N", "value": "y", "hasValue": true}
  },
  {
    "frame": 7,
    "notification": {"kind": "N", "value": "z", "hasValue": true}
  },
  {
    "frame": 8,
    "notification": {"kind": "C", "hasValue": false}
  }
]
And the expected:
[
  {
    "frame": 1,
    "notification": {"kind": "N", "value": "x", "hasValue": true}
  },
  {
    "frame": 3,
    "notification": {"kind": "N", "value": "y", "hasValue": true}
  },
  {
    "frame": 5,
    "notification": {"kind": "N", "value": "z", "hasValue": true}
  },
  {
    "frame": 6,
    "notification": {"kind": "C", "hasValue": false}
  }
]
Comparing the two arrays, you can see that frame properties are different for each object of the same index. This weird output comes from Jasmine's toEqual function, so let's try to understand it based on the values from above. This line from the console
Expected $[0].frame = 3 to equal 1.
means that the value you expected to be 1 is actually 3. The part $[0].frame = 3 tells you what the actual value is, and to equal 1 is what you, as the developer, think it should be. In other words, expected[0].frame (which is 1) is not equal to actual[0].frame (which is 3). And so on: expected[1].frame is not equal to actual[1].frame...
Now, you may wonder why you even get such values for actual and expected. This is explained in much more detail in the official docs. When you create a cold observable with the marble diagram -x-y-z| and delay it by 2 units, it becomes ---x-y-z|, which is then transformed into something comparable: an actual array. The first three - signs indicate three empty, non-emitting frames. They are at positions 0, 1 and 2, and they have no representation in either of the two arrays.
Then comes the first real value, x. It is represented as the first object in the actual array (actual[0]). x is at position 3, so the frame property has that value. The notification property holds metadata such as the value of the emitted item. You can work out the entries for y and z the same way.
Side note: when the run() method is used, frames take the values 1, 2, 3, etc., instead of the 10, 20, 30 used (for legacy reasons) when run() is not called.
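For reference, here is what the passing version would look like with the corrected expected marble (a minimal sketch based on the same setup):
it('should work', () => {
  s.run(m => {
    const source = s.createColdObservable('-x-y-z|');
    const expected = '---x-y-z|'; // every notification shifted right by the 2-frame delay
    const destination = source.pipe(delay(2));
    m.expectObservable(destination).toBe(expected);
  });
});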

Using Ramda compose, groupBy and sort in order to process an array of objects

I am new to Ramda and I am trying to achieve the following:
I have an array of objects (i.e. messages).
I want to group the messages by counter party ID (either the sender or the recipient ID, whichever is not 1, see my groupBy lambda below).
I am going to obtain an object whose keys will be the counter party IDs and whose values will be arrays of the messages exchanged with that counter party.
I then want to sort by date descending those arrays of messages.
And finally keep the most recent message and thus obtain an array containing the most recent message exchanged with each of the counter parties.
Because I have two counter parties above, I should have an array of two messages.
Here is what I have attempted:
const rawMessages = [
  {
    "sender": {
      "id": 1,
      "firstName": "JuliettP"
    },
    "recipient": {
      "id": 2,
      "firstName": "Julien"
    },
    "sendDate": "2017-01-28T19:21:15.863",
    "messageRead": true,
    "text": "ssssssss"
  },
  {
    "sender": {
      "id": 3,
      "firstName": "Juliani"
    },
    "recipient": {
      "id": 1,
      "firstName": "JuliettP"
    },
    "sendDate": "2017-02-01T18:08:12.894",
    "messageRead": true,
    "text": "sss"
  },
  {
    "sender": {
      "id": 2,
      "firstName": "Julien"
    },
    "recipient": {
      "id": 1,
      "firstName": "JuliettP"
    },
    "sendDate": "2017-02-07T22:19:51.649",
    "messageRead": true,
    "text": "I love redux!!"
  },
  {
    "sender": {
      "id": 1,
      "firstName": "JuliettP"
    },
    "recipient": {
      "id": 3,
      "firstName": "Juliani"
    },
    "sendDate": "2017-03-13T20:57:52.253",
    "messageRead": false,
    "text": "hello Juliani"
  },
  {
    "sender": {
      "id": 1,
      "firstName": "JuliettP"
    },
    "recipient": {
      "id": 3,
      "firstName": "Juliani"
    },
    "sendDate": "2017-03-13T20:56:52.253",
    "messageRead": false,
    "text": "hello Julianito"
  }
];
const currentUserId = 1;
const groupBy = (m: Message) => m.sender.id !== currentUserId ? m.sender.id : m.recipient.id;
const byDate = R.descend(R.prop('sendDate'));
const sort = (value, key) => R.sort(byDate, value);
const composition = R.compose(R.map, R.head, sort, R.groupBy(groupBy));
const latestByCounterParty = composition(rawMessages);
console.log(latestByCounterParty);
Here is the corresponding codepen:
https://codepen.io/balteo/pen/JWOWRb
Can someone please help?
edit: Here is a link to the uncurried version: here. The behavior is identical without the currying. See my comment below with my question as to the necessity of currying.
While I think the solution from Scott Christopher is fine, there are two more steps that I might take with it myself.
Noting that one of the important rules about map is that
map(compose(f, g)) ≍ compose(map(f), map(g))
when we're already inside a composition pipeline, we can choose to unnest this step:
R.map(R.compose(R.head, R.sort(R.descend(R.prop('sendDate'))))),
and turn the overall solution into
const currentMessagesForId = R.curry((id, msgs) =>
  R.compose(
    R.values,
    R.map(R.head),
    R.map(R.sort(R.descend(R.prop('sendDate')))),
    R.groupBy(m => m.sender.id !== id ? m.sender.id : m.recipient.id)
  )(msgs)
)
Doing this is, of course, a matter of taste. But I find it cleaner. The next step is also a matter of taste. I choose to use compose for things that can be listed on a single line, since that makes the connection between the formats compose(f, g, h)(x) and f(g(h(x))) obvious. If it spans multiple lines, I prefer to use pipe, which behaves the same way but runs its functions from first to last. So I would change this a bit further to look like this:
const currentMessagesForId = R.curry((id, msgs) =>
  R.pipe(
    R.groupBy(m => m.sender.id !== id ? m.sender.id : m.recipient.id),
    R.map(R.sort(R.descend(R.prop('sendDate')))),
    R.map(R.head),
    R.values
  )(msgs)
)
I find this top-down reading easier than the bottom-up reading needed with longer compose versions.
But, as I said, these are matters of taste.
You can see these examples on the Ramda REPL.
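As a quick sanity check of the map rule quoted at the top of this answer, here is a minimal numeric sketch (not part of the original answer):
const f = x => x + 1;
const g = x => x * 2;
R.map(R.compose(f, g))([1, 2, 3]);        //=> [3, 5, 7]
R.compose(R.map(f), R.map(g))([1, 2, 3]); //=> [3, 5, 7]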
Your example was close to what you wanted; you just needed to move the composition of head and sort into the function argument given to map, and then call values on the final result to convert the object into an array of values.
const currentMessagesForId = R.curry((id, msgs) =>
  R.compose(
    R.values,
    R.map(R.compose(R.head, R.sort(R.descend(R.prop('sendDate'))))),
    R.groupBy(m => m.sender.id !== id ? m.sender.id : m.recipient.id)
  )(msgs))
currentMessagesForId(currentUserId, rawMessages)

lodash casting, mapping and picking

I have an array of objects that I have simplified to this:
var expenses = [
  {
    arm: "0",
    cat_id: "1",
    crop: {
      crop: "corn",
      id: 1
    },
    crop_id: "1",
    dist: "164.97",
    expense: "Fertilizer",
    id: "1",
    loan_id: "1"
  },
  {
    arm: "20",
    cat_id: "8",
    crop: {
      crop: "corn",
      id: 1
    },
    crop_id: "1",
    dist: "0",
    expense: "Labor",
    id: "8",
    loan_id: "1"
  }
];
I am trying to end up with this:
var expenses = [{
  arm: 0,
  cat_id: 1,
  crop: "corn",
  crop_id: 1,
  dist: 164.97,
  expense: "Fertilizer",
  id: 1,
  loan_id: 1
}, {
  arm: 20,
  cat_id: 6,
  crop: "corn",
  crop_id: 1,
  dist: 0,
  expense: "Labor",
  id: 1,
  loan_id: 1
}];
I can get certain pieces working in that direction, but I can't pull it all together without errors. I can't figure out how to cast the values to floats, or how to put crop INSIDE of stub, because casted returns all nulls. I currently have this:
flattened = _.map(expenses, function(item){
  var crop = item.crop.crop;
  var stub = _.pick(item, [
    'id',
    'loan_id',
    'cat_id',
    'expense',
    'crop_id',
    'arm',
    'dist'
  ]);
  var casted = _.map(stub, function(i){
    i.crop = crop;
    return i;
  });
  return stub;
});
Any help is appreciated.
Problem 1: I can't find out how to cast the values to float
This should be easily fixed by using parseFloat.
e.g.
item.dist = parseFloat(item.dist);
Problem 2: put crop INSIDE of stub because casted returns all nulls
Since you're already using lodash, might as well get used to their chaining feature (lazy evaluation).
DEMO
var flattened = _.map(expenses, function(item) {
  item.dist = parseFloat(item.dist);
  return _(item)
    .omit('crop')
    .assign(_.omit(item.crop, 'id'))
    .value();
});
The solution above maps over the entire expenses array, converting item.dist to a floating point value and then merging the values from the item.crop object into the item object, with the exception of the item.crop.id value.
Note: In regard to your solution above, using _.map on an object results in an array.
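For illustration (a minimal sketch): _.map over an object iterates its values and always produces an array, which is why the casted variable above did not behave as expected.
_.map({a: 1, b: 2}, function (v) { return v * 10; }); //=> [10, 20]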
My attempt, for my own learning purposes, based on @ryeballar's code:
var flattened = _.map(expenses, function(item) {
  return _(item)
    .set('crop', item.crop.crop)        // set item.crop
    .set('dist', parseFloat(item.dist)) // set dist as (float)item.dist
    .value();
});

DataTables' Autofill extension - column disabling not working

I am using the Datatables plugin with the Autofill extension with input elements as outlined here:
DataTables' Autofill extension with input elements not working.
This works well. However, I am unable to disable autofill for specific columns. When I use the "enable": false option and apply it to specific columns, the callbacks stop working. Does anyone know of a way to disable certain columns for autofill while still allowing the callbacks to function properly? The following disables columns 1-4, but the read/write/step functions no longer copy edited input values:
new $.fn.dataTable.AutoFill(table, {
  "columnDefs": [{
    "targets": [5, 6, 7, 8, 9],
    "read": function (cell) {
      return $('input', cell).val();
    },
    "write": function (cell, val) {
      return $('input', cell).val(val);
    },
    "step": function (cell, read, last, i, x, y) {
      return last === undefined ? read : last;
    },
    "enable": false, "targets": [1,2,3,4] // omitting this leaves all columns enabled
  }]
});
The way you've written it, you are defining the targets property twice in the same object. What you need to do is give columnDefs another object pointing to the other targets, like so:
new $.fn.dataTable.AutoFill(table, {
  columnDefs: [
    {
      targets: [5, 6, 7, 8, 9],
      read: function (cell) {
        return $('input', cell).val();
      },
      write: function (cell, val) {
        return $('input', cell).val(val);
      },
      step: function (cell, read, last, i, x, y) {
        return last === undefined ? read : last;
      }
    },
    {
      targets: [1, 2, 3, 4],
      enable: false
    }
  ]
});