Multiple props in Ramda lens - ramda.js

Is there a way to apply transforms to multiple keys of an object in Ramda? I am aware this is achievable by R.evolve, but I am interested in knowing if this can be achieved by some modification of lenses.
E.g.:
const data = {
  a: "100",
  b: "non_numeric_string",
  c: "0.5"
}

const numerize = x => +x

const mapping = {
  a: numerize,
  c: numerize
}

magicFunction(mapping, data)

output:

{
  a: 100,
  b: "non_numeric_string",
  c: 0.5
}

The whole point of a lens is to focus on one part of a data structure. While it is not hard to write something using lensProp to achieve this, I don't think it's either very satisfying or a particularly appropriate use of lenses. Here's one Ramda solution:
const magicFunction = (mapping, data) =>
  reduce
    ( (o, [k, fn]) => over (lensProp (k), fn, o)
    , data
    , toPairs (mapping)
    )

const numerize = x => Number (x)

const mapping = {
  a: numerize,
  c: numerize
}

const data = {a: "100", b: "non_numeric_string", c: "0.5"}

console .log (
  magicFunction (mapping, data)
)
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
<script> const { lensProp, over, reduce, toPairs } = R </script>
But note that a plain ES6 function does the job just as simply, without using lenses:
const magicFunction = (mapping, data) =>
  Object.entries (mapping) .reduce
    ( (o, [k, fn]) => ({...o, [k]: fn (o [k])})
    , data
    )
Lenses simply don't gain you much here.
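For completeness, since the question already mentions it: R.evolve(mapping, data) is exactly the tool for this job and produces the desired output directly. A rough plain-JS sketch of what evolve does for this flat case (an illustration only, not Ramda's actual implementation, which also recurses into nested objects):

```javascript
// Sketch of evolve for the flat case: apply each transform in `mapping`
// to the matching key of `data`; keys without a transform pass through.
const evolveLike = (mapping, data) =>
  Object.fromEntries(
    Object.entries(data).map(([k, v]) =>
      [k, typeof mapping[k] === 'function' ? mapping[k](v) : v]
    )
  );

const numerize = x => Number(x);
const data = { a: "100", b: "non_numeric_string", c: "0.5" };

console.log(evolveLike({ a: numerize, c: numerize }, data));
// → { a: 100, b: 'non_numeric_string', c: 0.5 }
```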

What is the difference between writing composables and exporting stuff separately?

We're developing an application with the Vue 3 Composition API, so our scripts are organized as composables. We use these composables in different places to reuse logic instead of rewriting it. The problem is that some parts of the composables are created but never used, and that seems to have an impact on performance. Check out these three approaches.
useFeatureA.js - FIRST APPROACH (using stateful composables)
export function useFeatureA() {
  const x = ref(false)

  const y = computed(() => {
    // uses x.value
  })

  const z = computed(() => {
    // uses x.value
  })

  const foo = () => {
    // uses y.value and z.value
    // sets on x.value
  }

  const bar = () => {
    // uses z.value
    // sets on x.value
  }

  return { y, foo, bar }
}
useFeatureA.js - SECOND APPROACH (exporting separately and recomputing every time)
export const getX = () => {
  return ref(false);
}

export const getY = (x) => {
  return computed(() => {
    // uses x.value
  });
}

export const foo = (xValue, yValue) => {
  const z = // using xValue
  // uses yValue and z
  return // something to write on x
}

export const bar = (xValue) => {
  const z = // using xValue
  // uses z
  return // something to write on x
}
ComponentA

<script setup>
import { getX, getY, foo, bar } from './useFeatureA';

const x = getX();
const y = getY(x);

x.value = foo(x.value, y.value);
x.value = bar(x.value);
</script>
useFeatureA.js - THIRD APPROACH (move state to component)
export const getX = () => {
  return ref(false);
}

export const getY = (x) => {
  return computed(() => {
    // uses x.value
  });
}

export const getZ = (x) => {
  return computed(() => {
    // uses x.value
  });
}

export const foo = (yValue, zValue) => {
  // uses yValue and zValue
  return // something to write on x
}

export const bar = (zValue) => {
  // uses z
  return // something to write on x
}
ComponentA

<script setup>
import { getX, getY, getZ, foo, bar } from './useFeatureA';

const x = getX();
const y = getY(x);
const z = getZ(x);

x.value = foo(y.value, z.value);
x.value = bar(z.value);
</script>
We are wondering whether this solution can be efficient or not, and what the differences between these approaches are.
NOTE: We came up with the idea of splitting those two functions into two different composables, but the project is too big, and this approach would create far too many composables, while foo and bar belong close together because they relate to one specific feature. So we guess splitting them into different composables is out of the question.
UPDATE: Our thinking is that if we need a stateful composable, the second approach is a bottleneck, since if a component uses the composable, the state remains in memory after the component is destroyed.
We also guess that tree-shaking is more effective with the second approach.
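The distinction behind the UPDATE can be illustrated in plain JS (a sketch with no Vue involved, only hypothetical names): state created inside a composable-style factory is new per call and can be collected with its caller, while module-scope state is shared and outlives any single component:

```javascript
// Module-scope state: created once when the module loads and shared by
// every caller, so it outlives any single component.
const sharedState = { count: 0 };

// Factory-style "composable": each call creates fresh state, which can
// be garbage-collected once the caller (component) goes away.
function useFeature() {
  const localState = { count: 0 };
  const increment = () => { localState.count += 1; };
  return { localState, increment, sharedState };
}

const compA = useFeature();
const compB = useFeature();

compA.increment();
console.log(compA.localState.count, compB.localState.count); // 1 0 — independent
console.log(compA.sharedState === compB.sharedState);        // true — shared
```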

How to apply a function to multiple columns of a polars DataFrame in Rust

I'd like to apply a user-defined function, which takes a few inputs (corresponding to some columns in a polars DataFrame), to some columns of a polars DataFrame in Rust. The pattern I'm using is below. I wonder if this is the best practice?
fn my_filter_func(col1: &Series, col2: &Series, col3: &Series) -> ReturnType {
    let it = (0..n).map(|i| {
        let col1 = match col1.get(i) {
            AnyValue::UInt64(val) => val,
            _ => panic!("Wrong type of col1!"),
        };
        // similar for col2 and col3
        // apply user-defined function to col1, col2 and col3
    });
    // convert it to a collection of the required type
}
You can downcast the Series to the proper type you want to iterate over, and then use rust iterators to apply your logic.
fn my_black_box_function(a: f32, b: f32) -> f32 {
    // do something
    a
}

fn apply_multiples(col_a: &Series, col_b: &Series) -> Float32Chunked {
    match (col_a.dtype(), col_b.dtype()) {
        (DataType::Float32, DataType::Float32) => {
            let a = col_a.f32().unwrap();
            let b = col_b.f32().unwrap();

            a.into_iter()
                .zip(b.into_iter())
                .map(|(opt_a, opt_b)| match (opt_a, opt_b) {
                    (Some(a), Some(b)) => Some(my_black_box_function(a, b)),
                    _ => None,
                })
                .collect()
        }
        _ => panic!("unexpected dtypes"),
    }
}
Lazy API
You don't have to leave the lazy API to be able to access my_black_box_function.
We can collect the columns we want to apply in a Struct data type and then apply a closure over that Series.
fn apply_multiples(lf: LazyFrame) -> Result<DataFrame> {
    df![
        "a" => [1.0, 2.0, 3.0],
        "b" => [3.0, 5.1, 0.3]
    ]?
    .lazy()
    .select([concat_lst([col("a"), col("b")]).map(
        |s| {
            let ca = s.struct_()?;

            let a = ca.field_by_name("a")?;
            let b = ca.field_by_name("b")?;

            let a = a.f32()?;
            let b = b.f32()?;

            let out: Float32Chunked = a
                .into_iter()
                .zip(b.into_iter())
                .map(|(opt_a, opt_b)| match (opt_a, opt_b) {
                    (Some(a), Some(b)) => Some(my_black_box_function(a, b)),
                    _ => None,
                })
                .collect();

            Ok(out.into_series())
        },
        GetOutput::from_type(DataType::Float32),
    )])
    .collect()
}
The solution I found working for me is map_multiple (my understanding: use it when there is no groupby/agg) or apply_multiple (my understanding: whenever you have a groupby/agg). Alternatively, you could also use map_many or apply_many. See below.
use polars::prelude::*;
use polars::df;

fn main() {
    let df = df![
        "names" => ["a", "b", "a"],
        "values" => [1, 2, 3],
        "values_nulls" => [Some(1), None, Some(3)],
        "new_vals" => [Some(1.0), None, Some(3.0)]
    ].unwrap();

    println!("{:?}", df);

    //df.try_apply("values_nulls", |s: &Series| s.cast(&DataType::Float64)).unwrap();

    let df = df.lazy()
        .groupby([col("names")])
        .agg([
            total_delta_sens().sum()
        ]);

    println!("{:?}", df.collect());
}

pub fn total_delta_sens() -> Expr {
    let s: &mut [Expr] = &mut [col("values"), col("values_nulls"), col("new_vals")];

    fn sum_fa(s: &mut [Series]) -> Result<Series> {
        let mut ss = s[0].cast(&DataType::Float64).unwrap().fill_null(FillNullStrategy::Zero).unwrap().clone();
        for i in 1..s.len() {
            ss = ss.add_to(&s[i].cast(&DataType::Float64).unwrap().fill_null(FillNullStrategy::Zero).unwrap()).unwrap();
        }
        Ok(ss)
    }

    let o = GetOutput::from_type(DataType::Float64);
    map_multiple(sum_fa, s, o)
}
Here total_delta_sens is just a wrapper function for convenience; you don't have to use it. You can do this directly within your .agg([]) or .with_columns([]):
lit::<f64>(0.0).map_many(sum_fa, &[col("norm"), col("uniform")], o)
Inside sum_fa you can, as Ritchie already mentioned, downcast to a ChunkedArray and .iter() or even .par_iter().
Hope that helps.

tfjs-node memory leak even after proper tensor disposal

I've been struggling to find where a memory leak occurs in this file. The file is exported as an event listener. For context, I have 92 shards (meaning 92 of these listeners) running. I import the model from outside of this file so it's only loaded once per shard (a stable 75 tensors in memory). However, after a few minutes, all the RAM on my computer is consumed (the function inside the file is called a dozen or so times per second). Have I overlooked any place which may cause this memory leak?
const use = require(`#tensorflow-models/universal-sentence-encoder`);
const tf = require(`#tensorflow/tfjs-node`);

const run = async (input, model) => {
  const useObj = await use.load();
  const encodings = [ await useObj.tokenizer.encode(input) ];
  const indicesArr = encodings.map(function (arr, i) { return arr.map(function (d, index) { return [i, index]; }); });

  let flattenedIndicesArr = [];
  for (let i = 0; i < indicesArr.length; i++) {
    flattenedIndicesArr = flattenedIndicesArr.concat(indicesArr[i]);
  }

  const indices = tf.tensor2d(flattenedIndicesArr, [flattenedIndicesArr.length, 2], 'int32');
  const value = tf.tensor1d(tf.util.flatten([ encodings ]), 'int32');

  const prediction = await model.executeAsync({ Placeholder_1: indices, Placeholder: value });

  const classes = [ 'Identity Attack', 'Insult', 'Obscene', 'Severe Toxicity', 'Sexual Explicit', 'Threat', 'Toxicity' ];
  let finArr = [];
  let finMsg = `Input: ${input}, `;

  for (let i = 0; i < prediction.length; i++) {
    const sorted = tf.topk(prediction[i], 2);
    const predictions = [ sorted.values.arraySync(), sorted.indices.arraySync() ];
    const percentage = (predictions[0][0][0] * 100).toFixed(2);
    if (predictions[1][0][0] == 1) {
      finArr.push(`${classes[i]} (${percentage}%)`);
    }
    tf.dispose([ sorted, predictions ]);
  }

  for (let i = 0; i < finArr.length; i++) {
    finMsg += `${finArr[i]}, `;
  }

  tf.dispose([ prediction, indices, value, useObj ]);

  console.log(finMsg);
  console.log(tf.memory());
};

const main = async (message, client, Discord, model) => {
  if (message.author.bot) return;
  const input = message.content;
  await run(input, model);
};

module.exports = {
  event: 'messageCreate',
  run: async (message, client, Discord, model) => {
    await main(message, client, Discord, model);
  },
};
To start with: you say this runs multiple times, so why are you loading the model again and again? And disposing of a model is tricky; there's a big chance that's part of your memory leak.
Move const useObj = await use.load() outside of the run loop, and don't dispose of it until you're done with all of the runs.
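The suggested pattern can be sketched with a small memoizing wrapper (a hand-rolled illustration; fakeLoad below is a stand-in for use.load(), not a real API):

```javascript
// Sketch: cache the promise of an expensive async load so the work
// happens once per process, no matter how many events fire.
function loadOnce(loader) {
  let promise = null;
  return () => {
    if (!promise) promise = loader(); // first call kicks off the load
    return promise;                   // later calls reuse the same promise
  };
}

// `fakeLoad` stands in for use.load(); a counter shows it runs only once.
let loads = 0;
const fakeLoad = async () => { loads += 1; return { name: 'use-model' }; };
const getModel = loadOnce(fakeLoad);

(async () => {
  const m1 = await getModel();
  const m2 = await getModel();
  console.log(loads, m1 === m2); // 1 true
})();
```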

RxJs Marble testing concatMap with withLatestFrom

How can this Observable be unit tested?
e1.pipe(
  concatMap(x => of(x).pipe(withLatestFrom(e2)))
);
The following unit test fails:
it('test', () => {
  const e1 = hot(       '--1^---2----3-|');
  const e2 = hot(       '-a-^-b-----c--|');
  const expected = cold('----x----y-|', {
    x: ['2', 'b'],
    y: ['3', 'c']
  });

  const result = e1.pipe(
    concatMap(x => of(x).pipe(
      withLatestFrom(e2)
    ))
  );

  // but this works:
  // const result = e1.pipe(withLatestFrom(e2));

  expect(result).toBeObservable(expected);
});
How should the marbles be written in order to pass this unit test? What did I do wrong?
I expect that by inserting the concatMap operator into the chain (before withLatestFrom) I also have to somehow "mark" it in the marbles.
In your real example

e1.pipe(
  concatMap(x => of(x).pipe(withLatestFrom(e2)))
);

everything probably works because e2 is either a BehaviorSubject or a ReplaySubject, which is not the case in your test.
Although you're using hot('-a-^-b-----c--|'), that does not imply you're dealing with a BehaviorSubject. If we look at the implementation, we'll see that HotObservable extends the Subject class:
export class HotObservable<T> extends Subject<T> implements SubscriptionLoggable { /* ... */ }
which should help understand why this works:
const result = e1.pipe(withLatestFrom(e2));
and this doesn't:
const result = e1.pipe(
  concatMap(x => of(x).pipe(
    withLatestFrom(e2)
  ))
);
In the first snippet, e2 is subscribed to when e1 is subscribed to. In the second one, because you're using concatMap, every time e1 emits, withLatestFrom(e2) will be subscribed to and then unsubscribed from, due to the complete notification that comes from of(x).
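The difference can be illustrated without RxJS at all. Below are two hand-rolled mini subjects (simplified stand-ins, not RxJS's actual classes): a plain subject gives a late subscriber nothing, while a BehaviorSubject-like one immediately replays the latest value:

```javascript
// Plain subject: late subscribers miss everything emitted before they arrived.
class MiniSubject {
  constructor() { this.listeners = []; }
  subscribe(fn) { this.listeners.push(fn); }
  next(v) { this.last = v; this.listeners.forEach(fn => fn(v)); }
}

// Behavior-like subject: hands every new subscriber the latest value.
class MiniBehaviorSubject extends MiniSubject {
  constructor(initial) { super(); this.last = initial; }
  subscribe(fn) { super.subscribe(fn); fn(this.last); } // replay latest
}

const plain = new MiniSubject();
plain.next('b');
const seenPlain = [];
plain.subscribe(v => seenPlain.push(v));       // subscribed after 'b': sees nothing

const behavior = new MiniBehaviorSubject(undefined);
behavior.next('b');
const seenBehavior = [];
behavior.subscribe(v => seenBehavior.push(v)); // immediately receives 'b'

console.log(seenPlain, seenBehavior); // seenPlain is empty, seenBehavior holds 'b'
```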
With this in mind, here would be my approach:
Note: I'm using the built-in functions provided by rxjs/testing
it('test', () => {
  // might want to add this in a `beforeEach` function
  const testScheduler = new TestScheduler(
    (actual, expected) => expect(actual).toEqual(expected)
  );

  testScheduler.run(({ hot, expectObservable }) => {
    const e1 = hot(   '--1^---2----3-|');
    const e2src = hot('-a-^-b-----c--|');
    const e2 = new BehaviorSubject(undefined);

    const result = e1.pipe(
      concatMap(x => of(x).pipe(
        withLatestFrom(e2)
      ))
    );

    const source = merge(
      result,
      e2src.pipe(
        tap(value => e2.next(value)),
        // this is important, as we're not interested in `e2src`'s values;
        // it's just a way to feed the `e2` BehaviorSubject
        ignoreElements()
      )
    );

    expectObservable(source).toBe('----x----y-|', {
      x: ['2', 'b'],
      y: ['3', 'c']
    });
  });
});

custom sum elements by key using lodash

I have two objects containing keys like

var a = {bar: [1, 2], foo: [7, 9]}
var b = {bar: [2, 2], foo: [3, 1]}

I want to get the following result:

var c = {bar: [3, 4], foo: [10, 10]}
I already have a for logic like:
for (let key in b) {
if (a[key]) {
a[key][0] += b[key][0];
a[key][1] += b[key][1];
}
else a[key] = b[key];
}
But I would like to make this logic in a lodash way. How can I Do it?
You can create a function that takes n objects and collects them into an array using rest parameters. Now you can spread the array into _.mergeWith() to combine the objects, and in the customizer function sum the items in the arrays using Array.map() or lodash's _.map() and _.add():
const { mergeWith, map, add } = _

const fn = (...rest) => mergeWith({}, ...rest, (o = [], s) =>
  map(s, (n, i) => add(n, o[i]))
)

const a = {bar: [1, 2], foo: [7, 9]}
const b = {bar: [2, 2], foo: [3, 1]}
const c = {bar: [3, 2], foo: [5, 6]}
const d = {bar: [4, 2], foo: [5, 4]}

const result = fn(a, b, c, d)

console.log(result)

<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.11/lodash.min.js"></script>
You can also use lodash/fp to create a function that merges all values into a multidimensional array with _.mergeAllWith(), then transposes the arrays using _.zipAll() and sums each array:
const { rest, flow, mergeAllWith, isArray, head, mapValues, zipAll, map, sum } = _

const fn = rest(flow(
  mergeAllWith((o, s) => [...isArray(head(o)) ? o : [o], s]), // combine to a multidimensional array
  mapValues(flow(
    zipAll,
    map(sum)
  ))
))

const a = {bar: [1, 2], foo: [7, 9]}
const b = {bar: [2, 2], foo: [3, 1]}
const c = {bar: [3, 2], foo: [5, 6]}
const d = {bar: [4, 2], foo: [5, 4]}

const result = fn(a, b, c, d)

console.log(result)

<script src='https://cdn.jsdelivr.net/g/lodash#4(lodash.min.js+lodash.fp.min.js)'></script>
You can accomplish this using plain JavaScript with Object.entries, concat and reduce:
const a = { bar: [1, 2], foo: [7, 9] };
const b = { bar: [2, 2], foo: [3, 1] };

const entries = Object.entries(a).concat(Object.entries(b));

const result = entries.reduce((accum, [key, val]) => {
  accum[key] = accum[key] ? accum[key].map((x, i) => x + val[i]) : val;
  return accum;
}, {});

console.log(result);
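The same entries/reduce approach generalizes to any number of objects (a small extension of the snippet above, not part of the original answer):

```javascript
// Sketch: fold the entries of every input object into one accumulator,
// summing arrays element-wise when a key repeats.
const sumByKey = (...objects) =>
  objects
    .flatMap(Object.entries)
    .reduce((accum, [key, arr]) => {
      accum[key] = accum[key] ? accum[key].map((x, i) => x + arr[i]) : [...arr];
      return accum;
    }, {});

const a = { bar: [1, 2], foo: [7, 9] };
const b = { bar: [2, 2], foo: [3, 1] };
const c = { bar: [3, 2], foo: [5, 6] };

console.log(sumByKey(a, b, c)); // → { bar: [ 6, 6 ], foo: [ 15, 16 ] }
```

Note the `[...arr]` copy when a key is first seen, which avoids mutating the input objects (the question's for-loop mutates `a` in place).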