Passing custom parameters in render function - datatables

I have below code to create column:
DTColumnBuilder.newColumn(null).withTitle('Validation').renderWith(validationRenderer)
and render function:
function validationRenderer(data, type, full, meta) {
.......
}
Now, I want to pass custom parameters to validationRenderer so that I can access it inside the function, like below:
DTColumnBuilder.newColumn(null).withTitle('Validation').renderWith(validationRenderer('abc'))
function validationRenderer(data, type, full, meta, additionalParam) {
// do something with additionalParam
}
I could not find it in the documentation but there must be something to pass additional parameters in meta as per the reference from here

Yes and no. Or, better: technically you can't do it directly, but you can use a clever workaround to handle your issue.
I had this issue today, and found a pretty sad (but working) solution.
Basically, the big problem is that the render function is a parameter passed to the datatable handler, which is (of course) isolated.
In my case, to give a practical example, I had to add several dynamic buttons, each with a different action, to a dynamic datatable.
Apparently there was no solution, until I thought of the following: the problem seems to be that the renderer function's scope is somewhat isolated and inaccessible. However, since the function's return value is only used when the datatable effectively renders the field, you can wrap the render function in a custom self-invoking anonymous function, providing arguments there to use them once the cell is being rendered.
Here is what I did with my practical example, considering the following points:
The goal was to pass the ID field of each row to several different custom functions, so the problem was passing the ID of the button to call when the button is effectively clicked (since you can't get any external reference of it when it is rendered).
I'm using a custom class, which is the following:
hxDatatableDynamicButton = function(label, onClick, classNames) {
    this.label = label;
    this.onClick = onClick;
    this.classNames = classNames || 'col5p text-center';
}
Basically, it just creates an instance that I'm later using.
In this case, consider having an array of 2 different instances of these, one having a "test" label, and the other one having a "test2" label.
I'm injecting these instances through a for loop, hence I need to pass the "i" to my datatable to know which of the buttons is being pressed.
Since the code is actually quite big (the codebase is huge), here is the relevant snippet that you need to accomplish the trick:
scope.datatableAdditionalActionButtons.reverse();

scope._abstractDynamicClick = function(id, localReferenceID) {
    scope.datatableAdditionalActionButtons[localReferenceID].onClick.call(null, id);
};

for (var i = 0; i < scope.datatableAdditionalActionButtons.length; i++) {
    var _localReference = scope.datatableAdditionalActionButtons[i];
    var hax = (function(i) {
        var _tmp = function(data, type, full, meta) {
            var _label = scope.datatableAdditionalActionButtons[i].label;
            return '<button class="btn btn-default" ng-click="_abstractDynamicClick(' + full.id + ', ' + i + ')">' + _label + '</button>';
        };
        return _tmp;
    })(i);
    dtColumns.unshift(DTColumnBuilder.newColumn(null).notSortable().renderWith(hax).withClass(_localReference.classNames));
}
So, where is the trick? The trick is entirely in the hax function, and here is why it works: instead of passing the regular renderWith function directly, we pass a "custom" renderer which has the same arguments (hence the same parameters) as the default one. However, it is wrapped in a self-invoking anonymous function, which lets us arbitrarily inject a parameter into it and therefore distinguish, when rendering, which "i" it effectively is, since the isolated scope of the function is never lost.
Basically, the output is as follows: inspecting the rendered elements shows that they are effectively rendered differently, hence each "i" is rendered properly, which would not have been the case had the function not been wrapped in a self-invoking anonymous function.
So, basically, in your case, you would do something like this:
var _myValidator = (function(myAbcParam) {
    var _validate = function(data, type, full, meta) {
        console.log("additional param is: ", myAbcParam); // logs "abc"
        return '<button id="' + myAbcParam + '">Hello!</button>'; // <-- renders id="abc"
    };
    return _validate;
})('abc');

DTColumnBuilder.newColumn(null).withTitle('Validation').renderWith(_myValidator);
// <-- note that _myValidator is passed rather than "_myValidator()", since the wrapper has already executed and already returned a function.
I know this is not exactly the answer someone may be expecting, but if you need to accomplish something this complex in DataTables, it really looks like the only possible way to do it is with a self-invoking anonymous function.
Hope this helps someone who is still having issues with this.

Blockly How to create a Variable to the workspace (developer variable)

I want to add a Developer Variable to the workspace in Blockly, but I cannot find the necessary function/method.
I do not want to create the variable via a button. The variable should be present even if there is no block in the workspace.
With these two functions I can get the already created variables:
var variables = workspace.getAllVariables();
var dev_var = Blockly.Variables.allDeveloperVariables(workspace);
But what is the setting function?
Developer variables are variables that will never be visible to the user, but will exist in the generated code. If that's what you're looking for: there's no API for it, but here are some things you can do.
If you want to reserve the name so that users can't accidentally override your variable, call yourGenerator.addReservedWords('var1,var2,...'). You can initialize the variable in your wrapper code.
If you really want Blockly to both reserve and declare the variable for you, you could override the init function on your generator.
On the other hand, if what you want is a user-visible variable that always shows up in the toolbox, without the user creating it, you should call yourWorkspace.createVariable('variable_name').
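For illustration, here is a minimal sketch of the first and last suggestions; the variable names are mine, not from the answer, and it assumes the JavaScript generator plus a workspace instance called workspace:
// 1. Reserve the name so user-created variables can't collide with it, then
//    declare it yourself in the wrapper around the generated code:
Blockly.JavaScript.addReservedWords('myHiddenCounter');
var code = 'var myHiddenCounter = 0;\n' + Blockly.JavaScript.workspaceToCode(workspace);

// 2. Create a user-visible variable programmatically so it always exists:
workspace.createVariable('myVisibleVariable');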
The unit test blocks all assume that the variable unittestResults exists and can be written to. To indicate this, the block definition includes the function getDeveloperVars, which returns an array of strings; each string is a variable name. Follow this issue on GitHub:
Blockly.Blocks['unittest_fail'] = {
    // Always assert an error.
    init: function() {
        this.setColour(65);
        this.setPreviousStatement(true);
        this.setNextStatement(true);
        this.appendDummyInput()
            .appendField(new Blockly.FieldTextInput('test name'), 'MESSAGE')
            .appendField('fail');
        this.setTooltip('Records an error.');
    },
    getDeveloperVars: function() {
        return ['unittestResults'];
    }
};
Link: https://github.com/google/blockly/issues/1535

Components, Isolate function, and 'referential transparency'

I have a (rather philosophical) question which refers to cyclejs components: is isolate() referentially transparent?
Looking at the simplified code, reproduced hereafter, I could not discern any source of 'impurity'. Is that because the non-simplified code introduces it, or because the function would return two different objects with two different references?
In that case, would those two objects not have the same behaviour (i.e. listening and reacting to the same events on the same targets, and producing different vTree$ streams which encapsulate exactly the same sequence)? And if that is so, aren't those two objects essentially the same, i.e. replacing one with the other anywhere in the program should not change anything? Which would mean isolate is referentially transparent. Where did I go wrong?
Actually, if both calls return different objects which cannot be substituted for one another, how do those objects differ?
function isolate(Component, scope) {
    return function IsolatedComponent(sources) {
        const {isolateSource, isolateSink} = sources.DOM;
        const isolatedDOMSource = isolateSource(sources.DOM, scope);
        const sinks = Component({DOM: isolatedDOMSource});
        const isolatedDOMSink = isolateSink(sinks.DOM, scope);
        return {
            DOM: isolatedDOMSink
        };
    };
}
I could not discern any source of 'impurity'. Is that because the non-simplified code introduces it, or because the function would return two different objects with two different references?
The simplified code does not introduce impurity. The impurity comes from the fact that the parameter scope defaults to newScope() if it is not specified. The actual implementation of isolate() has:
function isolate(dataflowComponent, scope = newScope()) {
    // ...
}
Where newScope() is:
let counter = 0

function newScope() {
    return `cycle${++counter}`
}
Meaning, if the scope is not given as argument, it defaults to the next value of a hidden global counter which is incremented every time isolate() is called.
In conclusion, isolate(component, scope) is referentially transparent because we give the scope, but isolate(component) is not.
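To make that concrete, here is a small illustrative sketch (not from the original answer) of the two cases:
// With an explicit scope, two calls are interchangeable: same component, same scope, same wiring.
const a = isolate(Component, 'heavy');
const b = isolate(Component, 'heavy'); // behaves identically to `a`

// Without a scope, each call consumes the next value of the hidden counter,
// so the components are isolated under different scopes ('cycle1', 'cycle2', ...).
const c = isolate(Component);
const d = isolate(Component); // NOT interchangeable with `c`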

Knockout components using OOP and inheritance

I was hoping I could get some input on how to use Knockout components in an object-oriented fashion using Object.create (or equivalent). I'm also using Postbox and Lodash, in case some of my code seems confusing. I've currently built a bunch of components and would like to refactor them to reduce code redundancy. My components, so far, are just UI elements. I have custom input boxes and such. My initial approach was as follows, with some discretion taken to simplify the code and not get me fired :)
// Component.js
function Component() {
    var self = this
    self.value = ko.observable()
    self.initial = ko.observable()
    ...
    self.value.subscribeTo('revert', function() {
        console.log('value reverted')
        self.value(self.initial())
    })
}
module.exports = Component
// InputBox.js
var Component = require('./Component')
var _ = require('lodash')

function InputBox(params) {
    var self = this
    _.merge(self, params) // quick way to attach passed in params to 'self'
    ...
}

InputBox.prototype = Object.create(new Component)
ko.components.register('input-box', InputBox)
Now this kind of works, but here is the issue I'm having: when I use the InputBox in my HTML, I pass in the current value as a parameter (it's also an observable, because the value is retrieved from the server and passed down through several parent components before getting to the InputBox component). Lodash then merges the params object with self, which already has a value observable, so that gets overwritten, as expected.

The interesting part for me is that when I use Postbox to broadcast the 'revert' event, the console.log fires, so the event subscription is still there, but the value doesn't revert. When I do console.log(self.value(), self.initial()) in the revert callback, I get undefined. So somehow, passing the value observable as a parameter to the InputBox viewmodel causes something to go haywire. When the page initially loads, the input box has the value retrieved from the server, so the value observable isn't completely broken, but changing the input field and then hitting cancel doesn't revert it.
I don't know if this makes much sense, but if it does and someone can help, I'd really appreciate it! And if I can provide more information, please let me know. Thanks!
JavaScript does not do classical inheritance like C++ and such. Prototypes are not superclasses. In particular, properties of prototypes are more like static class properties than instance properties: they are shared by all instances. It is usual in JS to have prototypes that only contain methods.
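For example (a minimal sketch, not from the original answer), a property placed on the prototype is shared by every instance:
function Component() {}
Component.prototype.items = []; // lives on the prototype, not on any one instance

var a = new Component();
var b = new Component();
a.items.push('x');
console.log(b.items); // ['x'] -- both instances see the same array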
There are some libraries that overlay a classical-inheritance structure onto JavaScript. They usually use "extends" to create subclasses. I don't use them, so I can't recommend any in particular, but you might look at CoffeeScript if you like the classical-inheritance pattern.
I often hear "favor composition over inheritance," but I generally see a lot of emphasis on inheritance. As an alternative, consider Douglas Crockford's "class-free object-oriented programming", which does away with inheritance entirely.
For what you're trying to do here, you probably want to have InputBox initialize itself with Component, something like:
function InputBox(params) {
    var self = this
    Component.bind(self)(); // super()
    _.merge(self, params) // quick way to attach passed in params to 'self'
    ...
}
The new, merged, value will not have the subscription from Component, because the subscription is particular to Component's instance of the observable, which will have been overwritten.
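As an illustrative sketch (using plain ko.subscribe instead of Postbox's subscribeTo, and hypothetical values), this is what goes wrong when the observable is replaced:
var self = { value: ko.observable('initial') };
self.value.subscribe(function (v) { console.log('changed to', v); });

self.value = ko.observable('from params'); // effectively what _.merge(self, params) does here
self.value('new value');                   // logs nothing -- the new observable has no subscribers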
To everyone who responded, thank you very much! I've found a solution that works better for me and will share it here in case anyone is interested.
// Component.js (only relevant parts shown)
function Component(params) {
    var self = this
    _.merge(self, params)
    self.value.subscribeTo('some event', function() {
        // do some processing
        return <new value for self.value>
    })
}
module.exports = Component
// InputBox.js
var Component = require('./component')

function InputBox(params) {
    var self = this
    Component.call(self, params)
}
By taking this approach, I avoid the headache of using prototypes and worrying about the prototype chain since everything Component does is done directly to the "inheriting" class. Hope this helps someone else!

How to rewrite this in terms of R.compose

var take = R.curry(function take(count, o) {
    return R.pick(R.take(count, R.keys(o)), o);
});
This function takes count keys from an object, in the order in which they appear. I use it to limit a dataset which was grouped.
I understand that there are placeholder arguments, like R.__, but I can't wrap my head around this particular case.
This is possible thanks to R.converge, but I don't recommend going point-free in this case.
// take :: Number -> Object -> Object
var take = R.curryN(2,
    R.converge(R.pick,
        R.converge(R.take,
            R.nthArg(0),
            R.pipe(R.nthArg(1), R.keys)),
        R.nthArg(1)));
One thing to note is that the behaviour of this function is undefined since the order of the list returned by R.keys is undefined.
I agree with @davidchambers that it is probably better not to do this points-free. This solution is a bit cleaner than that one, but it's still not, to my mind, as nice as your original:
// take :: Number -> Object -> Object
var take = R.converge(
    R.pick,
    R.useWith(R.take, R.identity, R.keys),
    R.nthArg(1)
);
useWith and converge are similar in that they accept a number of function parameters and pass the result of calling all but the first one into that first one. The difference is that converge passes all the parameters it receives to each one, and useWith splits them up, passing one to each function. This is the first time I've seen a use for combining them, but it seems to make sense here.
That property ordering issue is supposed to be resolved in ES6 (final draft now out!) but it's still controversial.
Update
You mention that it will take some time to figure this out. This should help at least show how it's equivalent to your original function, if not how to derive it:
var take = R.converge(
    R.pick,
    R.useWith(R.take, R.identity, R.keys),
    R.nthArg(1)
);

// definition of `converge`
(count, obj) => R.pick(R.useWith(R.take, R.identity, R.keys)(count, obj),
                       R.nthArg(1)(count, obj));

// definition of `nthArg`
(count, obj) => R.pick(R.useWith(R.take, R.identity, R.keys)(count, obj), obj);

// definition of `useWith`
(count, obj) => R.pick(R.take(R.identity(count), R.keys(obj)), obj);

// definition of `identity`
(count, obj) => R.pick(R.take(count, R.keys(obj)), obj);
Update 2
As of version 18, both converge and useWith have changed to become binary. Each takes a target function and a list of helper functions. That would change the above slightly to this:
// take :: Number -> Object -> Object
var take = R.converge(R.pick, [
    R.useWith(R.take, [R.identity, R.keys]),
    R.nthArg(1)
]);
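As a small aside (my own illustration, not part of the original answer), here is how the two combinators distribute their arguments, using the version 18+ list-style signatures:
// converge passes ALL of the arguments to every branch function:
R.converge(R.add, [R.multiply, R.subtract])(6, 2);
// => R.add(R.multiply(6, 2), R.subtract(6, 2)) => R.add(12, 4) => 16

// useWith splits the arguments up, one per branch function:
R.useWith(R.add, [R.inc, R.dec])(6, 2);
// => R.add(R.inc(6), R.dec(2)) => R.add(7, 1) => 8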

Optimizing a method with boolean flag

I have a method whose purpose is to retrieve collection items.
A collection can contain a mix of items, let's say: pens, pencils, and papers.
The 1st parameter allows me to tell the method to retrieve only the itemTypes I pass (e.g., just pens and pencils).
The 2nd parameter flags the function to use the collection's default item types, instead.
getCollectionItems($itemTypes, $useCollectionDefaultItemTypes) {
    foreach() {
        foreach() {
            foreach() {
                // lots of code...
                if ($useCollectionDefaultItemTypes) {
                    // get collection's items using collection->itemTypes
                } else {
                    // get collection's items using $itemTypes
                }
                // lots of code...
            }
        }
    }
}
What feels odd is that if I set the $useCollectionDefaultItemTypes to true, there is no need for the function to use the first parameter. I was considering refactoring this method into two such as:
getCollectionItems($itemTypes); // get the items using $itemTypes
getCollectionItems(); // get the items using default settings
The problem is that the methods will have lots of duplicate code except for the if-statement area.
Is there a better way to optimize this?
Pass in $itemTypes as null when you're not using it. Have your if statement check if $itemTypes === null; if it is, use default settings.
If this is PHP, which I assume it is, you can make your method signature function getCollectionItems($itemTypes = null); then you can call getCollectionItems() and it will behave as if you had typed getCollectionItems(null).
It's generally a bad idea to write methods that use flags like that. I've seen that written in several places (here at #16, Uncle Bob here and elsewhere). It makes the method hard to understand, read, and refactor.
An alternative design would be to use closures. Your code could look something like this:
$specificWayOfProcessing = function($a) {
    // do something with each $a
};

getCollectionItems($processor) {
    foreach() {
        foreach() {
            foreach() {
                // lots of code...
                $processor(...)
                // lots of code...
            }
        }
    }
}

getCollectionItems($specificWayOfProcessing);
This design is better because
It's more flexible. What happens when you need to decide between three different things?
You can now test the code inside the loop much easier
It is now easier to read, because the last line tells you that you are "getting collection items using a specific way of processing" - it reads like an English sentence.
Yes, there is a better way of doing this -- though this question is not an optimization question, but a style question. (Duplicated code has little effect on performance!)
The simplest way to implement this along the lines of your original idea is to make the no-argument form of getCollectionItems() define the default arguments, and then call the version of it that requires an argument:
getCollectionItems($itemTypes) {
    foreach() {
        foreach() {
            foreach() {
                // lots of code...
                // get collection's items using $itemTypes
            }
            // lots of code...
        }
    }
}

getCollectionItems() {
    getCollectionItems(collection->itemTypes)
}
Depending on what language you are using, you may even be able to collapse these into a single function definition with a default argument:
getCollectionItems($itemTypes = collection->itemTypes) {
    foreach() {
        foreach() {
            foreach() {
                // lots of code...
                // get collection's items using $itemTypes
            }
            // lots of code...
        }
    }
}
That has the advantage of clearly expressing your original idea, which is that you use $itemTypes if provided, and collection->itemTypes if not.
(This does, of course, assume that you're talking about a single "collection", rather than having one of those foreach iterations be iterating over collections. If you are, the idea to use a null value is a good one.)