How to rewrite this in terms of R.compose - ramda.js

var take = R.curry(function take(count, o) {
  return R.pick(R.take(count, R.keys(o)), o);
});
This function takes count keys from an object, in the order in which they appear. I use it to limit a dataset that was grouped.
I understand that there are placeholder arguments, like R.__, but I can't wrap my head around this particular case.

This is possible thanks to R.converge, but I don't recommend going point-free in this case.
// take :: Number -> Object -> Object
var take = R.curryN(2,
  R.converge(R.pick,
    R.converge(R.take,
      R.nthArg(0),
      R.pipe(R.nthArg(1),
             R.keys)),
    R.nthArg(1)));
One thing to note is that the behaviour of this function is undefined since the order of the list returned by R.keys is undefined.

I agree with @davidchambers that it is probably better not to do this points-free. This solution is a bit cleaner than that one, but it is still not, to my mind, as nice as your original:
// take :: Number -> Object -> Object
var take = R.converge(
  R.pick,
  R.useWith(R.take, R.identity, R.keys),
  R.nthArg(1)
);
useWith and converge are similar in that they accept a number of function parameters and pass the result of calling all but the first one into that first one. The difference is that converge passes all the parameters it receives to each one, and useWith splits them up, passing one to each function. This is the first time I've seen a use for combining them, but it seems to make sense here.
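A small illustration of the difference (this example is not part of the original answer; it uses the same variadic call style as the code above):

// converge: every helper receives all of the arguments
var avg = R.converge(R.divide, R.sum, R.length);
avg([4, 8, 12]); //=> 8, i.e. divide(sum([4, 8, 12]), length([4, 8, 12]))

// useWith: each helper receives exactly one argument
var totalLength = R.useWith(R.add, R.length, R.length);
totalLength('foo', 'ramda'); //=> 8, i.e. add(length('foo'), length('ramda'))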
That property ordering issue is supposed to be resolved in ES6 (final draft now out!) but it's still controversial.
Update
You mention that it will take some time to figure this out. This should help at least show how it's equivalent to your original function, if not how to derive it:
var take = R.converge(
  R.pick,
  R.useWith(R.take, R.identity, R.keys),
  R.nthArg(1)
);

// definition of `converge`
(count, obj) => R.pick(R.useWith(R.take, R.identity, R.keys)(count, obj),
                       R.nthArg(1)(count, obj));

// definition of `nthArg`
(count, obj) => R.pick(R.useWith(R.take, R.identity, R.keys)(count, obj), obj);

// definition of `useWith`
(count, obj) => R.pick(R.take(R.identity(count), R.keys(obj)), obj);

// definition of `identity`
(count, obj) => R.pick(R.take(count, R.keys(obj)), obj);
Update 2
As of Ramda 0.18, both converge and useWith have changed to become binary. Each takes a target function and a list of helper functions. That changes the above slightly:
// take :: Number -> Object -> Object
var take = R.converge(R.pick, [
  R.useWith(R.take, [R.identity, R.keys]),
  R.nthArg(1)
]);
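For reference, here is a quick usage check (not from the original posts) showing what this take does, subject to the key-ordering caveat mentioned earlier:

take(2, {a: 1, b: 2, c: 3}); //=> {a: 1, b: 2}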

Related

Async Wait Efficient Execution

I need to iterate over hundreds of ids in parallel and collect the results in a list. I am trying to do it in the following way:
val context = newFixedThreadPoolContext(5, "custom pool")
val list = mutableListOf<String>()
ids.map {
    val result: Deferred<String> = async(context) {
        getResult(it)
    }
    //list.add(result.await()
}.mapNotNull(result -> list.add(result.await())
I am getting an error at
mapNotNull(result -> list.add(result.await())
because the await method is not available. Why is await not applicable at this place? By contrast, the commented line
//list.add(result.await()
works fine.
What is the best way to run this block in parallel using coroutines with a custom thread pool?
Generally, you are going in the right direction: you need to create a list of Deferred and then await() on them.
If this is exactly the code you are using, then you did not return anything from your first map { } block, so you don't get a List<Deferred> as you expect, but a List<Unit> (a list of nothing). Just remove val result: Deferred<String> = from that line; this way you won't assign the result to a variable, but return it from the lambda. Also, there are two syntactic errors in the last line: you used () instead of {} and there is a missing closing parenthesis.
After these changes I believe your code will work, but it is still pretty weird. You seem to mix two distinct approaches to transforming one collection into another: one uses higher-order functions like map(), and the other uses a loop that adds to a list, and you use both at the same time. I think the following code should do exactly what you need (thanks @Joffrey for improving it):
val list = ids.map {
    async(context) {
        getResult(it)
    }
}.awaitAll().filterNotNull()
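Since async and awaitAll need to run inside a coroutine, here is a hedged sketch of the complete setup (getResult and the pool size come from the question; the stub below is only a stand-in for illustration):

import kotlinx.coroutines.*

// Stand-in for the asker's getResult(); for illustration only.
fun getResult(id: String): String = "result-$id"

suspend fun fetchAll(ids: List<String>): List<String> {
    // The custom pool from the question; close() it once it is no longer needed.
    val context = newFixedThreadPoolContext(5, "custom pool")
    return try {
        coroutineScope {
            // Start all calls in parallel on the custom pool, then await them all.
            ids.map { id -> async(context) { getResult(id) } }.awaitAll()
        }
    } finally {
        context.close()
    }
}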

Passing custom parameters in render function

I have the following code to create a column:
DTColumnBuilder.newColumn(null).withTitle('Validation').renderWith(validationRenderer)
and the render function:
function validationRenderer(data, type, full, meta) {
  .......
}
Now, I want to pass custom parameters to validationRenderer so that I can access them inside the function, like below:
DTColumnBuilder.newColumn(null).withTitle('Validation').renderWith(validationRenderer('abc'))
function validationRenderer(data, type, full, meta, additionalParam) {
  // do something with additionalParam
}
I could not find it in the documentation, but there must be some way to pass additional parameters in meta, as per the reference from here.
Yes and no. Technically you can't pass extra parameters to the renderer directly, but you can use a clever workaround to handle your issue.
I had this issue today, and found a pretty sad (but working) solution.
Basically, the big problem is that the render function is a parameter passed to the datatable handler, which is (of course) isolated.
In my case, to give a practical example, I had to add several dynamic buttons, each with a different action, to a dynamic datatable.
Apparently, there was no solution, until I thought of the following: the problem seems to be that the renderer function's scope is somewhat isolated and inaccessible. However, since the function's return value is only used when the datatable actually renders the field, you may wrap the render function in a custom self-invoking anonymous function, providing arguments there to use them once the cell is being rendered.
Here is what I did with my practical example, considering the following points:
The goal was to pass the ID field of each row to several different custom functions, so the problem was passing the ID of the button to call when the button is effectively clicked (since you can't get any external reference of it when it is rendered).
I'm using a custom class, which is the following:
hxDatatableDynamicButton = function(label, onClick, classNames) {
  this.label = label;
  this.onClick = onClick;
  this.classNames = classNames || 'col5p text-center';
}
Basically, it just creates an instance that I'm later using.
In this case, consider having an array of 2 different instances of these, one having a "test" label, and the other one having a "test2" label.
I'm injecting these instances through a for loop, hence I need to pass the "i" to my datatable to know which of the buttons is being pressed.
Since the code is actually quite big (the codebase is huge), here is the relevant snippet that you need to accomplish the trick:
scope.datatableAdditionalActionButtons.reverse();
scope._abstractDynamicClick = function(id, localReferenceID) {
  scope.datatableAdditionalActionButtons[localReferenceID].onClick.call(null, id);
};

for (var i = 0; i < scope.datatableAdditionalActionButtons.length; i++) {
  var _localReference = scope.datatableAdditionalActionButtons[i];
  var hax = (function(i) {
    var _tmp = function (data, type, full, meta) {
      var _label = scope.datatableAdditionalActionButtons[i].label;
      return '<button class="btn btn-default" ng-click="_abstractDynamicClick('+full.id+', '+i+')">'+_label+'</button>';
    };
    return _tmp;
  })(i);
  dtColumns.unshift(DTColumnBuilder.newColumn(null).notSortable().renderWith(hax).withClass(_localReference.classNames));
}
So, where is the trick? The trick is entirely in the hax function, and here is why it works: instead of passing the regular renderWith callback directly, we are using a "custom" renderer, which has the same arguments (hence the same parameters) as the default one. However, it is wrapped in a self-invoking anonymous function, which allows us to arbitrarily inject a parameter into it and thus to distinguish, at render time, which "i" it effectively is, since the isolated scope of the function is never lost in this case.
Basically, the output (shown as screenshots in the original post) confirms that the elements are rendered differently, i.e. each "i" is rendered properly, which would not have been the case had the function not been wrapped in a self-invoking anonymous function.
So, basically, in your case, you would do something like this:
var _myValidator = (function(myAbcParam) {
  var _validate = function (data, type, full, meta) {
    console.log("additional param is: ", myAbcParam); // logs "abc"
    return '<button id="'+myAbcParam+'">Hello!</button>'; // <-- renders id="abc"
  };
  return _validate;
})('abc');

DTColumnBuilder.newColumn(null).withTitle('Validation').renderWith(_myValidator);
// <-- note that _myValidator is passed instead of "_myValidator()", since it is already executed and already returns a function.
I know this is not exactly the answer someone may be expecting, but if you need to accomplish something this complex in DataTables, it really looks like the only possible way is to use a self-invoking anonymous function.
Hope this helps someone who is still having issues with this.

How do I handle errors from libc functions in an idiomatic Rust manner?

libc's error handling is usually to return something < 0 in case of an error. I find myself doing this over and over:
let pid = fork();
if pid < 0 {
    // Please disregard the fact that `Err(pid)`
    // should be a `&str` or an enum
    return Err(pid);
}
I find it ugly that this needs 3 lines of error handling, especially considering that these tests are quite frequent in this kind of code.
Is there a way to return an Err in case fork() returns < 0?
I found two things which are close:
assert_eq!. This needs another line and it panics so the caller cannot handle the error.
Using traits like these:
pub trait LibcResult<T> {
    fn to_option(&self) -> Option<T>;
}

impl LibcResult<i64> for i32 {
    fn to_option(&self) -> Option<i64> {
        if *self < 0 { None } else { Some(*self) }
    }
}
I could write fork().to_option().expect("could not fork"). This is now only one line, but it panics instead of returning an Err. I guess this could be solved using ok_or.
Some functions of libc have < 0 as sentinel (e.g. fork), while others use > 0 (e.g. pthread_attr_init), so this would need another argument.
Is there something out there which solves this?
As indicated in the other answer, use pre-made wrappers whenever possible. Where such wrappers do not exist, the following guidelines might help.
Return Result to indicate errors
The idiomatic Rust return type that includes error information is Result (std::result::Result). For most functions from POSIX libc, the specialized type std::io::Result is a perfect fit because it uses std::io::Error to encode errors, and it includes all standard system errors represented by errno values. A good way to avoid repetition is using a utility function such as:
use std::io::{Result, Error};

fn check_err<T: Ord + Default>(num: T) -> Result<T> {
    if num < T::default() {
        return Err(Error::last_os_error());
    }
    Ok(num)
}
Wrapping fork() would look like this:
pub fn fork() -> Result<u32> {
    check_err(unsafe { libc::fork() }).map(|pid| pid as u32)
}
The use of Result allows idiomatic usage such as:
let pid = fork()?; // ? means return if Err, unwrap if Ok
if pid == 0 {
    // child
    ...
}
Restrict the return type
The function will be easier to use if the return type is modified so that only "possible" values are included. For example, if a function logically has no return value, but returns an int only to communicate the presence of error, the Rust wrapper should return nothing:
pub fn dup2(oldfd: i32, newfd: i32) -> Result<()> {
check_err(unsafe { libc::dup2(oldfd, newfd) })?;
Ok(())
}
Another example are functions that logically return an unsigned integer, such as a PID or a file descriptor, but still declare their result as signed to include the -1 error return value. In that case, consider returning an unsigned value in Rust, as in the fork() example above. nix takes this one step further by having fork() return Result<ForkResult>, where ForkResult is a real enum with methods such as is_child(), and from which the PID is extracted using pattern matching.
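As an illustration (this sketch is not part of the original answer, and exact signatures vary between nix versions; in recent versions fork() is an unsafe fn), using nix looks roughly like this:

use nix::unistd::{fork, ForkResult};

fn main() -> nix::Result<()> {
    // Pattern match on the enum instead of comparing a raw pid against 0 or -1.
    match unsafe { fork() }? {
        ForkResult::Parent { child } => println!("in the parent; child pid = {}", child),
        ForkResult::Child => println!("in the child"),
    }
    Ok(())
}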
Use options and other enums
Rust has a rich type system that allows expressing things that have to be encoded as magic values in C. To return to the fork() example, that function returns 0 to indicate that it is returning in the child process. This is naturally expressed with an Option and can be combined with the Result shown above:
pub fn fork() -> Result<Option<u32>> {
    let pid = check_err(unsafe { libc::fork() })? as u32;
    if pid != 0 {
        Ok(Some(pid))
    } else {
        Ok(None)
    }
}
The user of this API would no longer need to compare with the magic value, but would use pattern matching, for example:
if let Some(child_pid) = fork()? {
    // execute parent code
} else {
    // execute child code
}
Return values instead of using output parameters
C often returns values using output parameters, pointer parameters into which the results are stored. This is either because the actual return value is reserved for the error indicator, or because more than one value needs to be returned, and returning structs was badly supported by historical C compilers.
In contrast, Rust's Result supports return value independent of error information, and has no problem whatsoever with returning multiple values. Multiple values returned as a tuple are much more ergonomic than output parameters because they can be used in expressions or captured using pattern matching.
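As a hedged sketch (not part of the original answer), wrapping gettimeofday() shows the pattern: the C function fills an output parameter, while the Rust wrapper reuses check_err from above and returns both values as a tuple:

// gettimeofday() fills a caller-provided struct in C; the wrapper returns
// (seconds, microseconds) directly instead.
pub fn gettimeofday() -> Result<(i64, i64)> {
    let mut tv = libc::timeval { tv_sec: 0, tv_usec: 0 };
    check_err(unsafe { libc::gettimeofday(&mut tv, std::ptr::null_mut()) })?;
    Ok((tv.tv_sec as i64, tv.tv_usec as i64))
}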
Wrap system resources in owned objects
When returning handles to system resources, such as file descriptors or Windows handles, it is good practice to return them wrapped in an object that implements Drop to release them. This makes it less likely that a user of the wrapper will make a mistake, and it makes the use of return values more idiomatic, removing the need for awkward invocations of close() and preventing the resource leaks that come from failing to call it.
Taking pipe() as an example:
use std::fs::File;
use std::os::unix::io::FromRawFd;

pub fn pipe() -> Result<(File, File)> {
    let mut fds = [0 as libc::c_int; 2];
    check_err(unsafe { libc::pipe(fds.as_mut_ptr()) })?;
    Ok(unsafe { (File::from_raw_fd(fds[0]), File::from_raw_fd(fds[1])) })
}

// Usage:
// let (r, w) = pipe()?;
// ... use r and w as normal File objects
This pipe() wrapper returns multiple values and uses a wrapper object to refer to a system resource. Also, it returns the File objects defined in the Rust standard library and accepted by Rust's IO layer.
The best option is to not reimplement the universe. Instead, use nix, which wraps everything for you and has done the hard work of converting all the error types and handling the sentinel values:
pub fn fork() -> Result<ForkResult>
Then just use normal error handling like try! or ?.
Of course, you could rewrite all of nix by converting your trait to returning Results and including the specific error codes and then use try! or ?, but why would you?
There's nothing magical in Rust that converts negative or positive numbers into a domain specific error type for you. The code you already have is the correct approach, once you've enhanced it to use a Result either by creating it directly or via something like ok_or.
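For instance, a minimal sketch (assuming the LibcResult trait from the question is in scope, and that pid_t is i32 on the target platform):

fn spawn_child() -> Result<i64, &'static str> {
    // ok_or turns the Option from the question's trait into a Result,
    // so `?` can propagate the error to the caller.
    let pid = unsafe { libc::fork() }.to_option().ok_or("could not fork")?;
    Ok(pid)
}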
An intermediate solution would be to reuse nix's Errno struct, perhaps with your own trait sugar on top.
so this would need another argument
I'd say it would be better to have different methods: one for negative sentinel values and one for positive sentinel values.

Components, Isolate function, and 'referential transparency'

I have a (rather philosophical) question which refers to cyclejs components: is isolate() referentially transparent?
Looking at the simplified code, reproduced hereafter, I could not identify any source of 'impurity'. Is that because the non-simplified code introduces it, or because the function would return two different objects with two different references?
In that case, would not those two objects have the same behaviour (i.e. listening and reacting to the same events on the same targets, and producing different vTree$ which nonetheless encapsulate exactly the same sequence)? And if that is so, aren't those two objects essentially the same, i.e. replacing one with the other anywhere in the program should not change anything? Which would mean isolate is referentially transparent? Where did I go wrong?
Actually, if both calls return different objects which cannot be substituted for one another, how do those objects differ?
function isolate(Component, scope) {
  return function IsolatedComponent(sources) {
    const {isolateSource, isolateSink} = sources.DOM;
    const isolatedDOMSource = isolateSource(sources.DOM, scope);
    const sinks = Component({DOM: isolatedDOMSource});
    const isolatedDOMSink = isolateSink(sinks.DOM, scope);
    return {
      DOM: isolatedDOMSink
    };
  };
}
I could not identify any source of 'impurity'. Is that because the non-simplified code introduces it, or because the function would return two different objects with two different references?
The simplified code does not introduce impurity. The impurity comes from the fact that the parameter scope defaults to newScope() if it is not specified. The actual implementation of isolate() has:
function isolate(dataflowComponent, scope = newScope()) {
// ...
}
Where newScope() is:
let counter = 0

function newScope() {
  return `cycle${++counter}`
}
Meaning, if the scope is not given as an argument, it defaults to the next value of a hidden global counter which is incremented every time isolate() is called.
In conclusion, isolate(component, scope) is referentially transparent because we give the scope, but isolate(component) is not.
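A small illustration (not from the answer) of the difference:

// With no scope argument, each call consumes the hidden counter, so the two
// components are isolated under different scopes ("cycle1", "cycle2"):
const first = isolate(Component);
const second = isolate(Component);

// With an explicit scope, the result depends only on the arguments, so the
// two calls are interchangeable:
const a = isolate(Component, 'myScope');
const b = isolate(Component, 'myScope');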

Are there Rust variable naming conventions for things like Option<T>?

Is there a convention for variable naming in cases like the following? I find myself having to use two names, one for the optional value and one for the unwrapped value.
let user = match optional_user {
    Some(u) => {
        u
    }
    None => {
        new("guest", "guest").unwrap()
    }
};
I'm unsure if there is a convention per se, but I often see (and use) maybe for Options, i.e.
let maybe_thing: Option<Thing> = ...
let thing: Thing = ...
Also, with regard to your use of u and user in this situation, it is fine to use user in both places, i.e.
let user = match maybe_user {
    Some(user) => user,
    ...
This is because the match expression will be evaluated prior to the let assignment.
However (slightly off topic), @Manishearth is correct: in this case it would be nicer to use or_else, i.e.
let user = maybe_user.or_else(|| new("guest", "guest")).unwrap();
I'd recommend becoming familiar with the rest of Option's methods too as they are excellent for reducing match boilerplate.
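For instance, unwrap_or_else is a one-line equivalent of the original match (illustrative, using the same new constructor from the question):

let user = maybe_user.unwrap_or_else(|| new("guest", "guest").unwrap());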
If you're going to use a variable to initialize another and you don't need to use the first variable anymore, you can use the same name for both variables.
let user = /* something that returns Option<?> */;
let user = match user {
    Some(u) => {
        u
    }
    None => {
        new("guest", "guest").unwrap()
    }
};
In the initializer for the second let binding, the identifier user resolves to the first user variable, rather than the one being defined, because that one is not initialized yet. Variables defined in a let statement only enter the scope after the whole let statement. The second user variable shadows the first user variable for the rest of the block, though.
You can also use this trick to turn a mutable variable into an immutable variable:
let mut m = HashMap::new();
/* fill m */
let m = m; // freeze m
Here, the second let doesn't have the mut keyword, so m is no longer mutable. Since it's also shadowing m, you no longer have mutable access to m (though you can still add a let mut later on to make it mutable again).
Firstly, your match block can be replaced by optional_user.or_else(|| new("guest", "guest")).unwrap()
Usually for destructures where the destructured variable isn't used in a large block, a short name like u is common. However it would be better to call it user if the block was larger with many statements.