List all members of a class in Pike - pike

In Pike, it is possible to retrieve all members of an object by calling indices(). Is it also possible to see all members of a class without instantiating it?
> class A {int foo; string bar;};
> A a = A();
> indices(a);
(1) Result: ({ /* 2 elements */
"foo",
"bar"
})
> indices(A);
(2) Result: ({ })

Yes, you can, although the output won't be as friendly as that of indices(). You need to use the _describe_program function, like this:
> _describe_program(A);
(4) Result: ({ /* 2 elements */
({ /* 7 elements */
0,
"foo",
int,
0,
0,
0,
0
}),
({ /* 7 elements */
0,
"bar",
string,
0,
16,
0,
0
})
})

Related

Item filtering but keeping track of filtered out items

Let's say I have a list of items like below and I would like to apply a list of filters onto it with ramda.
const data = [
  {id: 1, name: "Andreas"},
  {id: 2, name: "Antonio"},
  {id: 3, name: "Bernhard"},
  {id: 4, name: "Carlos"}
]
No biggie: pipe(filter(predA), filter(predB), ...)(data)
The tricky part is I would like to define my filters with a key for tracking what items have been filtered out by which filter.
const filterBy = (key, pred) => subs => {
  const [res, rej] = partition(pred, subs)
  return [{[key]: rej.map(prop('id'))}, res]
}
This all screams monad chaining or a transducer, but I can't get my head around it how to put it all together.
Let's say I have 2 predicates:
const isEven = filterBy('id', i => i % 2 === 0)
const startsWithA = filterBy('name', startsWith('A'))
I would like to get a result that looks like this tuple with a rejection map and a list of "accepted" items (isEven threw out 1 and 3 and startsWithA rejected 3 and 4):
[
  {
    id: [1, 3],
    name: [3, 4]
  },
  [{id: 2, name: "Antonio"}]
]
Vanilla JS version
I'm bothered by using the field name to describe the predicate. What happens if we also have, say, const nameTooLong = ({name}) => name .length < 8. Then how could we distinguish the two predicates in the output? So I would prefer to use descriptive predicate names, for instance,
[
  {isEven: [1, 3], startsWithA: [3, 4]},
  [{id: 2, name: "Antonio"}]
]
So that's what I do in this code:
const process = (preds) => (xs) => {
  const rej = Object .fromEntries (Object .entries (preds)
    .map (([k, v]) => [k, xs .filter (x => !v (x)) .map (x => x .id)])
  )
  const excluded = Object .values (rej) .flat()
  return [rej, xs .filter (({id}) => !excluded .includes (id))]
}
const data = [{id: 1, name: "Andreas"}, {id: 2, name: "Antonio"}, {id: 3, name: "Bernhard"}, {id: 4, name: "Carlos"}]
console .log (process ({
isEven: ({id}) => id % 2 === 0,
startsWithA: ({name}) => name .startsWith ('A')
}) (data))
It would not be overly difficult to alter this to return something like your requested format.
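For instance, simply keying the predicates by the field names already reproduces the requested shape (a quick sketch, reusing the process and data defined above):
// Sketch only: same `process` as above, keys chosen to match the question's format
console .log (process ({
  id: ({id}) => id % 2 === 0,
  name: ({name}) => name .startsWith ('A')
}) (data))
//=> [{id: [1, 3], name: [3, 4]}, [{id: 2, name: "Antonio"}]]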
Using Ramda
The question was tagged Ramda, and I wrote this initially using Ramda tools, with a version that looks like this:
const process = (preds) => (xs) => {
  // assumes: const {pipe, map, flip, reject, pluck, uniq, flatten, values, prop, includes} = R
  const rej = pipe (map (flip (reject) (xs)), map (pluck ('id'))) (preds)
  const excluded = uniq (flatten (values (rej)))
  return [rej, reject (pipe (prop ('id'), flip (includes) (excluded))) (xs)]
}
And we could continue to hack away at this until we made it entirely point-free. I just don't see any reason for that.
I'm a founder of Ramda and a big fan, but I don't see this as any more readable than the vanilla version. There is one exception: Ramda's map working on a plain object is much nicer than the Object .entries -> map -> Object .fromEntries dance in the vanilla code. I might use that feature and leave the rest in vanilla, though.
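To illustrate that hybrid (a sketch, assuming R is in scope; only the rejection map uses Ramda):
// R.map over a plain object maps its values and keeps its keys
const process = (preds) => (xs) => {
  const rej = R .map (pred => xs .filter (x => !pred (x)) .map (x => x .id), preds)
  const excluded = Object .values (rej) .flat()
  return [rej, xs .filter (({id}) => !excluded .includes (id))]
}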
Ok, so after some fiddling I came up with this kind of solution. Implementing a new monad seemed unnecessary, and overriding fantasy-land/filter was also a bad idea, as my predicates are basically tagged.
This seems to have a good mix of readability and returns basically an extended array for further processing.
// assumes: const {partition, mergeDeepWith, concat, map, prop} = R
class Partition extends Array {
  constructor(items, filtered = {}) {
    super(...items)
    this.filtered = filtered
  }
  filterWithKey = (key, pred) => {
    const [ok, notOk] = partition(pred, this.slice())
    const filtered = mergeDeepWith(concat, this.filtered, {[key]: notOk})
    return new Partition(ok, filtered)
  }
  filter = pred => this.filterWithKey("", pred)
}
const res = new Partition([
  {id: 1, name: "Andreas"},
  {id: 2, name: "Antonio"},
  {id: 3, name: "Bernhard"},
  {id: 4, name: "Carlos"}
])
  .filterWithKey('id', ({id}) => id % 2 === 0)
  .filterWithKey('name', ({name}) => name.startsWith('A'))
const toIds = map(prop('id'))
const rejected = map(toIds, res.filtered)
const accepted = [...res]
console.log(rejected, accepted)

Ramda.js - How to make R.merge point free - 2 functions with same dataset

Given the below code snippet that uses Ramda.js, are there multiple ways of making parseDetails point free? If so, is there an "ideal" way of doing it?
const someData = {
  products: [{
    stockId: 123,
    name: "chocolate",
    translatedTags: {
      wasPrice: "Was 10"
    }
  }]
}
const getProductData = R.pipe(
  R.pathOr([], ['products', 0]),
  R.pick(['stockId', 'name'])
);
const getProductTags = R.pipe(
  R.pathOr([], ['products', 0, 'translatedTags'])
);
const parseDetails = (data) => R.merge(
  getProductData(data),
  getProductTags(data)
);
parseDetails(someData);
There are several ways I can think of, but by far the best is to use lift. The way I think of using lift is that it takes a function which works on values and returns an equivalent function which works on containers of those values.
And functions that return those values can be thought of as containers. So we can do it like this:
const getProductData = pipe (
  pathOr ([], ['products', 0]),
  pick (['stockId', 'name'])
)
// `pipe` not necessary here
const getProductTags = pathOr ([], ['products', 0, 'translatedTags'])
const parseDetails = lift (merge) (getProductData, getProductTags)
const someData = {products: [{stockId: 123, name: "chocolate", translatedTags: {wasPrice: "Was 10"}}]}
console .log (parseDetails (someData))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js"></script>
<script> const {pipe, pathOr, pick, lift, merge} = R </script>
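For intuition, lift (merge) applied to two unary functions behaves roughly like the hand-written version from the question (a sketch of the equivalence, not of how lift is implemented):
// lift (merge) (f, g) acts like (data) => merge (f (data), g (data)) when f and g are functions
const parseDetails = (data) => merge (getProductData (data), getProductTags (data))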
And if you have no other need for getProductData or getProductTags, you can inline them in the function:
const parseDetails = lift (merge) (
  pipe (pathOr ([], ['products', 0]), pick (['stockId', 'name'])),
  pathOr ([], ['products', 0, 'translatedTags'])
)
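Another option, not covered above, would be R.converge, which also feeds the same argument to each branch before merging the results (a sketch):
// Alternative sketch using converge instead of lift
const parseDetails = converge (merge, [getProductData, getProductTags])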

Ramda: Filtering through arrays with associated value

This is my initial dataset:
arr1 = [{
  url: ['https://example.com/A.jpg?', 'https://example.com/B.jpg?', 'https://example.com/C.jpg?'],
  width: ['w=300', 'w=400', 'w=500'],
  type: [-1, 1, 2]
}];
By filtering with type: n => n > 0 and passing the result through arr1, I would like to produce arr2 with Ramda. If the nth value is excluded as a result of the filter, then the nth value in the other arrays is also excluded.
arr2 = [{
  url: ['https://example.com/B.jpg?', 'https://example.com/C.jpg?'],
  width: ['w=400', 'w=500'],
  type: [1, 2]
}];
I tried the code below, but it's not working...
const isgt0 = n => n > 0;
const arr2 = R.applySpec({
  url: arr1,
  width: arr1,
  type: R.filter(isgt0),
});
console.log(arr2(arr1));
Once I get the desired object, I intend to R.transpose the array to generate URLs like: [https://example.com/B.jpg?w=400, https://example.com/C.jpg?w=500]
The main steps are:
Get the arrays of the values with R.props:
[-1, 1, 2]
['w=300', 'w=400', 'w=500']
['https://example.com/A.jpg?', 'https://example.com/B.jpg?', 'https://example.com/C.jpg?']
Transpose them to arrays of items with the same index:
[-1, 'w=300', 'https://example.com/A.jpg?']
[1, 'w=400', 'https://example.com/B.jpg?']
[2, 'w=500', 'https://example.com/C.jpg?']
Filter by index 0 (the original type), transpose back, and then reconstruct the object using R.applySpec.
const { pipe, props, transpose, filter, propSatisfies, gt, __, applySpec, nth, map } = R
const filterProps = pipe(
  props(['type', 'width', 'url']), // get an array of property value arrays
  transpose, // convert to arrays of all property values with the same index
  filter(propSatisfies(gt(__, 0), 0)), // filter by the type (index 0)
  transpose, // convert back to one array per property
  applySpec({ // reconstruct the object
    type: nth(0),
    width: nth(1),
    url: nth(2),
  })
)
const data = [
  {
    type: [-1, 1, 2],
    width: ['w=300', 'w=400', 'w=500'],
    url: [
      'https://example.com/A.jpg?',
      'https://example.com/B.jpg?',
      'https://example.com/C.jpg?',
    ],
  }
]
const result = map(filterProps, data)
console.log(result)
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.js" integrity="sha512-3sdB9mAxNh2MIo6YkY05uY1qjkywAlDfCf5u1cSotv6k9CZUSyHVf4BJSpTYgla+YHLaHG8LUpqV7MHctlYzlw==" crossorigin="anonymous"></script>
Another way to think about it more generically is to filter using a configuration object that holds the tests to apply for various properties. Here it is only type, but it's easy enough to imagine others.
My solution for this problem is configured with this object:
{
  type: n => n > 0
}
This solution uses many Ramda functions, but also uses Array.prototype.filter to get access to the index parameter of filter. We could choose R.addIndex instead, but I would only bother if I were trying to make it point-free, which doesn't seem worthwhile here. This is what it might look like:
const filterOnProps = (config) => (obj) => {
  const test = allPass (map (([k, v]) => (i) => v (obj [k] [i]), toPairs (config)))
  const indices = filter (test) (range (0, values (obj) [0] .length))
  return map (a => a .filter ((_, i) => contains (i, indices)), obj)
}
const transform = map (filterOnProps ({type: n => n > 0}))
const arr1 = [{url: ['https://example.com/A.jpg?', 'https://example.com/B.jpg?', 'https://example.com/C.jpg?'], width: ['w=300', 'w=400', 'w=500'], type: [-1, 1, 2]}]
console .log (transform (arr1))
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js"></script>
<script> const {allPass, map, toPairs, filter, range, values, contains} = R </script>
With obj in scope, we create test, which will be somewhat equivalent to
allPass([
  i => obj['type'][i] > 0
])
If we had more conditions in the original configuration object, they would also be in this list.
Then we filter the indices, to see on which ones the record passes this test.
Finally we map over our object, filtering each array to keep only those where the index is in the list.
While this should work, and is reasonably generic, it points to a problem with your data structure. I would suggest that as much as possible, you shy away from situations where structures are dependent on shared indices. To my mind the only reasonable use of that is for a relatively compact serialization format. On deserialization, I would immediately rehydrate that to something more useful, perhaps something like
const data = [
  {url: 'https://example.com/A.jpg?', width: 'w=300', type: -1},
  {url: 'https://example.com/B.jpg?', width: 'w=400', type: 1},
  {url: 'https://example.com/C.jpg?', width: 'w=500', type: 2}
]
This structure is much easier to work with. For example, data.filter(({type}) => type > 0) would be the equivalent of the work above, if you started with this structure.
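That rehydration step might look something like this (a sketch, assuming the original arr1 shape with parallel arrays of equal length):
// Sketch: convert each parallel-array record into one object per index
const rehydrate = ({url, width, type}) =>
  url .map ((u, i) => ({url: u, width: width[i], type: type[i]}))
const data = arr1 .flatMap (rehydrate)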
This might help a bit
const gte1 = R.filter(R.gte(R.__, 1));
const fn = R.map(
  R.evolve({
    type: gte1,
  }),
);
// =====
const data = [
  {
    type: [-1, 1, 2],
    width: ['w=300', 'w=400', 'w=500'],
    url: [
      'https://example.com/A.jpg?',
      'https://example.com/B.jpg?',
      'https://example.com/C.jpg?',
    ],
  }
];
console.log(
  fn(data),
);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js" integrity="sha512-rZHvUXcc1zWKsxm7rJ8lVQuIr1oOmm7cShlvpV0gWf0RvbcJN6x96al/Rp2L2BI4a4ZkT2/YfVe/8YvB2UHzQw==" crossorigin="anonymous"></script>

Ramda js maximum elements

I wonder what the best way would be to get the max elements from an array.
For example, I have regions with temperatures:
let regions = [{name: 'alabama', temp: 20}, {name: 'newyork', temp: 30}...];
It can be done in one line, but I want it to be performant:
I want to iterate over the array only once.
If more than 1 region has the same max temperature, I want to get them all.
Do you know a way to do this with more compact code than procedural code with temporary variables and so on?
If it can be done in a "functional programming" way, that would be very good.
This is sample procedure code:
regions = [{name: 'asd', temp: 13}, {name: 'fdg', temp: 30}, {name: 'asdsd', temp: 30}]
maxes = []
max = -Infinity
for (let reg of regions) {
  if (reg.temp > max) {
    maxes = [reg];
    max = reg.temp
  } else if (reg.temp == max) {
    maxes.push(reg)
  }
  // temps lower than the current max leave `maxes` unchanged
}
Another Ramda approach:
const {reduce, append} = R
const regions = [{name:'asd', temp: 13},{name: 'fdg', temp: 30}, {name: 'asdsd', temp: 30}]
const maxTemps = reduce(
  (tops, curr) => curr.temp > tops[0].temp ? [curr]
                : curr.temp === tops[0].temp ? append(curr, tops)
                : tops,
  [{temp: -Infinity}]
)
console.log(maxTemps(regions))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.js"></script>
This version only iterates the list once. But it's a bit ugly.
I would usually prefer the version from Ori Drori unless testing showed that performance was a problem in my application. Even with the fix from my comment, I think that code is easier to understand than this one. (That wouldn't be true if there were only two cases, < versus >= for instance, but when there are three, this gets hard to read, however we might format it.)
But if performance is really a major issue, then your original code is probably faster than this one too.
Use R.pipe to:
Group the objects by temp's value,
Convert the object of groups to an array of [temp, group] pairs,
Reduce the pairs to the one with the max key (the temp; since groupBy produces string keys, it is converted back to a number),
Return the value (the group) from that pair.
const { pipe, groupBy, prop, toPairs, reduce, maxBy, head, last } = R;
const regions = [
  {name: 'california', temp: 30},
  {name: 'alabama', temp: 20},
  {name: 'newyork', temp: 30}
];
const result = pipe(
  groupBy(prop('temp')),
  toPairs,
  reduce(maxBy(pipe(head, Number)), [-Infinity]),
  last
)(regions);
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.js"></script>
A different approach to this (albeit a little more verbose) is to create some helpers to generically take care of folding over a list of things to extract the list of maximums.
We can do this by defining a Semigroup wrapper class (could also be a plain function instead of a class).
const MaxManyBy = fn => class MaxMany {
  constructor(values) {
    this.values = values
  }
  concat(other) {
    const otherValue = fn(other.values[0]),
          thisValue = fn(this.values[0])
    return otherValue > thisValue ? other
         : otherValue < thisValue ? this
         : new MaxMany(this.values.concat(other.values))
  }
  static of(x) {
    return new MaxMany([x])
  }
}
The main purpose of this class is to be able to combine two lists by comparing the values contained within, with the invariant that each wrapped list only holds values that compare as equal.
We now can introduce a new helper function which applies some function to each value of a list and then combines them all using concat.
const foldMap = (fn, [x, ...xs]) =>
  xs.reduce((acc, next) => acc.concat(fn(next)), fn(x))
With these helpers, we can now create a function that pulls the maximum temperatures from your example.
const maxTemps = xs =>
  foldMap(MaxManyBy(({temp}) => temp).of, xs).values
maxTemps([
  {name: 'california', temp: 30},
  {name: 'alabama', temp: 20},
  {name: 'newyork', temp: 30}
])
//=> [{"name": "california", "temp": 30}, {"name": "newyork", "temp": 30}]
There is an assumption here that the list being passed to foldMap is non-empty. If there's a chance that you'll encounter an empty list then you will need to modify accordingly to return a default value of some kind (or wrap it in a Maybe type if no sane default exists).
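One minimal way to guard against an empty list (a sketch with a caller-supplied default rather than a Maybe):
// Sketch: fall back to a default when the list is empty
const maxTempsOr = (fallback) => (xs) =>
  xs.length === 0 ? fallback : maxTemps(xs)
maxTempsOr([])([]) //=> []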
See the complete snippet below.
const MaxManyBy = fn => class MaxMany {
  constructor(values) {
    this.values = values
  }
  concat(other) {
    const otherValue = fn(other.values[0]),
          thisValue = fn(this.values[0])
    return otherValue > thisValue ? other
         : otherValue < thisValue ? this
         : new MaxMany(this.values.concat(other.values))
  }
  static of(x) {
    return new MaxMany([x])
  }
}
const foldMap = (fn, [x, ...xs]) =>
  xs.reduce((acc, next) => acc.concat(fn(next)), fn(x))
const maxTemps = xs =>
  foldMap(MaxManyBy(({temp}) => temp).of, xs).values
const regions = [
  {name: 'california', temp: 30},
  {name: 'alabama', temp: 20},
  {name: 'newyork', temp: 30}
]
console.log(maxTemps(regions))

Lodash: Start an Array iteration from nth index

In lodash, how can I start iterating over an array from the nth index?
For the time being, I am using the following logic:
var arr = [10, 2, 67, 7, 3, 24, 90, 19, 4, 1, 8];
// I want to start iteration of this array "arr" from the 4th index by using lodash's APIs.
var arr1 = _.drop(arr, 3);
_.each(arr1, function(value) {
  console.log(value);
});
You could combine slice() with each(), which isn't all that different from what you're already doing:
_(arr)
  .slice(4)
  .each(function(item) { console.log(item); })
  .run();
// → 3, 24, 90, 19, 4, 1, 8
You can use lodash's findIndex function. Just don't return true inside the callback, so it doesn't stop.
_.findIndex(arr, function(value, index) {
  console.log(value, index);
}, 3);
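For completeness, if chaining isn't needed, a minimal alternative (a sketch, assuming lodash 4 or plain JS) would be:
// Sketch: iterate from the 4th index without a wrapper chain
_.each(_.slice(arr, 4), function(value) {
  console.log(value);
});
// or in plain JS:
arr.slice(4).forEach(value => console.log(value));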