How do I execute Dynamically (like Eval) in Dart? - dynamic

Since getting started with Dart I've been watching for a way to execute Dart source text (which the same program may well be generating dynamically) as code, like the infamous "eval()" function.
Recently I have caught a few hints that the communication port between isolates supports some sort of "spawn" that seems like it could allow this trick. Ruby can also load a module dynamically as a language feature; perhaps there is some way to do this in Dart?
Any clues or a simple example will be greatly appreciated.
Thanks in advance!

Ladislav Thon provided this answer on the Dart forum:
I believe it's very safe to say that Dart will never have eval. But it will have other, more structured ways of dynamically generating code (code name: mirror builders). There is nothing like that right now, though.
There are two ways of spawning an isolate: spawnFunction, which runs an existing function from the existing code in a new isolate, so not what you are looking for, and spawnUri, which downloads code from a given URI and runs it in a new isolate. That is essentially dynamic code loading -- but the dynamically loaded code is isolated from the existing code. It runs in a new isolate, so the only means of communicating with it is via message passing (through ports).

You can run a string as Dart code by first constructing a data URI from it and then passing it into Isolate.spawnUri.
import 'dart:isolate';

void main() async {
  final uri = Uri.dataFromString(
    '''
    void main() {
      print("Hellooooooo from the other side!");
    }
    ''',
    mimeType: 'application/dart',
  );

  await Isolate.spawnUri(uri, [], null);
}
Note that you can only do this in JIT mode, which means that the only place you might benefit from it is Dart VM command line apps / package:build scripts. It will not work in Flutter release builds.
To get a result back from it, you can use ports:
import 'dart:isolate';

void main() async {
  final name = 'Eval Knievel';
  final uri = Uri.dataFromString(
    '''
    import "dart:isolate";

    void main(_, SendPort port) {
      port.send("Nice to meet you, $name!");
    }
    ''',
    mimeType: 'application/dart',
  );

  final port = ReceivePort();
  await Isolate.spawnUri(uri, [], port.sendPort);

  final String response = await port.first;
  print(response);
}
I wrote about it on my blog.

eval(), in Ruby at least, can execute anything from a single statement (like an assignment) to complete, involved programs. Executing many small snippets this way carries a substantial time penalty compared with almost any other form of execution.
Looking at the problem more closely, there are at least three different functions at the base of the various schemes where eval might be used. Dart handles at least two of these in at least minimal ways.
Dart does not support "general" script execution, nor does it look like there is any plan to.
However, the noSuchMethod method can be used to effectively implement the dynamic "injection" of variables into your local class environment. It replaces an eval() with a string that would look like this: eval( "String text = 'your first name here';" );
The second function that Dart readily supports now is the invocation of a method, which would look like this: eval( "Map map = SomeClass.some_method()" );
After messing about with this it finally dawned on me that a single simple class can be used to store the information needed to invoke a method on a class, keyed by a string, which seems to have general utility. It can replace a big, maintenance-prone switch statement that might otherwise be necessary to invoke a series of methods. In Ruby this was almost trivial; in Dart there are some fairly unintuitive calls, so I wanted to get this "trick" in one place, which fits well with doing ordering and filtering on the strings as you may need.
Here's the code to "accumulate" as many classes (a whole library?) into a map using reflection such that the class.methodName() can be called with nothing more than a key (as a string).
Note: I used a few "helper methods" for the Map and List functions; you will probably want to replace them with straight Dart. However, this code is used and tested only with those helpers.
Here's the code:
// Note: this uses the pre-1.0 dart:mirrors API.
import 'dart:mirrors';

// The "helpers" used here..
MAP_add(var map, var key, var value) {
  if (key != null) {
    map[key] = value;
  }
  return (map);
}

Object MAP_fetch(var map, var key, [var dflt = null]) {
  var value = map[key];
  if (value == null) {
    value = dflt;
  }
  return (value);
}

class ClassMethodMapper {
  Map _helperMirrorsMap, _methodMap;

  void accum_class_map(Object myClass) {
    InstanceMirror helperMirror = reflect(myClass);
    List methodsAr = helperMirror.type.methods.values;
    String classNm = myClass.toString().split("'")[1]; ///#FRAGILE
    MAP_add(_helperMirrorsMap, classNm, helperMirror);
    methodsAr.forEach((method) {
      String key = method.simpleName;
      if (key.charCodeAt(0) != 95) { // Ignore private methods
        MAP_add(_methodMap, "${classNm}.${key}()", method);
      }
    });
  }

  void invoker(String methodNm) {
    var method = MAP_fetch(_methodMap, methodNm, null);
    if (method != null) {
      String classNm = methodNm.split('.')[0];
      InstanceMirror helperMirror = MAP_fetch(_helperMirrorsMap, classNm);
      helperMirror.invoke(method.simpleName, []);
    }
  }

  ClassMethodMapper() {
    _methodMap = {};
    _helperMirrorsMap = {};
  }
} //END_OF_CLASS( ClassMethodMapper );
============
main() {
  ClassMethodMapper cMM = new ClassMethodMapper();
  cMM.accum_class_map(new MyFirstExampleClass());
  cMM.accum_class_map(new MySecondExampleClass());
  // Now you're ready to execute any method (not private, as per a special line of code above)
  // by simply doing this:
  cMM.invoker("MyFirstExampleClass.my_example_method()");
}

There are actually some libraries on pub.dev/packages, but they have limitations because they are still young, so I can recommend the expressions library for Dart and Flutter.
A library to parse and evaluate simple expressions.
This library can handle simple expressions, but not blocks of code, control flow statements and so on. It supports a syntax that is common to most programming languages.
Here is an example that evaluates arithmetic operations and comparisons of data.
import 'package:expressions/expressions.dart';
import 'dart:math';

@override
Widget build(BuildContext context) {
  // Expression example
  String condition = "(cos(x)*cos(x)+sin(x)*sin(x)==1) && respuesta_texto == 'si'";
  Expression expression = Expression.parse(condition);

  var evalContext = {
    "x": pi / 5,
    "cos": cos,
    "sin": sin,
    "respuesta_texto": 'si'
  };

  // Evaluate expression
  final evaluator = const ExpressionEvaluator();
  var r = evaluator.eval(expression, evalContext);
  print(r);

  return Scaffold(
    body: Container(
      margin: EdgeInsets.only(top: 50.0),
      child: Column(
        children: [
          Text(condition),
          Text(r.toString())
        ],
      ),
    ),
  );
}
I/flutter (27188): true

Related

How can I use Kotlin to handle asynchronous speech recognition?

Code A is from the article https://cloud.google.com/speech-to-text/docs/async-recognize
It is written in Java. I don't think the following code is good; it makes the app hang.
while (!response.isDone()) {
System.out.println("Waiting for response...");
Thread.sleep(10000);
}
...
I'm a beginner in Kotlin. How can I use Kotlin to write better code, maybe using coroutines?
Code A
public static void asyncRecognizeGcs(String gcsUri) throws Exception {
  // Instantiates a client with GOOGLE_APPLICATION_CREDENTIALS
  try (SpeechClient speech = SpeechClient.create()) {
    // Configure remote file request for FLAC
    RecognitionConfig config =
        RecognitionConfig.newBuilder()
            .setEncoding(AudioEncoding.FLAC)
            .setLanguageCode("en-US")
            .setSampleRateHertz(16000)
            .build();
    RecognitionAudio audio = RecognitionAudio.newBuilder().setUri(gcsUri).build();

    // Use non-blocking call for getting file transcription
    OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> response =
        speech.longRunningRecognizeAsync(config, audio);
    while (!response.isDone()) {
      System.out.println("Waiting for response...");
      Thread.sleep(10000);
    }

    List<SpeechRecognitionResult> results = response.get().getResultsList();
    for (SpeechRecognitionResult result : results) {
      // There can be several alternative transcripts for a given chunk of speech. Just use the
      // first (most likely) one here.
      SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
      System.out.printf("Transcription: %s\n", alternative.getTranscript());
    }
  }
}
You will have to provide some context to understand what you are trying to achieve, but it looks like a coroutine is not really necessary here, as longRunningRecognizeAsync is already non-blocking and returns an OperationFuture response object. You just need to decide what to do with that response, or just store the Future and check it later. There is nothing implicitly wrong with while (!response.isDone()) {}; that's how Java Futures are supposed to work. Also check OperationFuture: if it's a normal Java Future, it implements a get() method that will let you wait for the result if necessary, without having to do an explicit Thread.sleep().

What does {...this.data} mean in vue js? [duplicate]

Let’s say I have an options variable and I want to set some default value.
What’s is the benefit / drawback of these two alternatives?
Using object spread
options = {...optionsDefault, ...options};
Or using Object.assign
options = Object.assign({}, optionsDefault, options);
This is the commit that made me wonder.
This isn't necessarily exhaustive.
Spread syntax
options = {...optionsDefault, ...options};
Advantages:
If authoring code for execution in environments without native support, you may be able to just compile this syntax (as opposed to using a polyfill). (With Babel, for example.)
Less verbose.
Disadvantages:
When this answer was originally written, this was a proposal, not standardized. When using proposals consider what you'd do if you write code with it now and it doesn't get standardized or changes as it moves toward standardization. This has since been standardized in ES2018.
Literal, not dynamic.
Object.assign()
options = Object.assign({}, optionsDefault, options);
Advantages:
Standardized.
Dynamic. Example:
var sources = [{a: "A"}, {b: "B"}, {c: "C"}];
options = Object.assign.apply(Object, [{}].concat(sources));
// or
options = Object.assign({}, ...sources);
Disadvantages:
More verbose.
If authoring code for execution in environments without native support you need to polyfill.
This is the commit that made me wonder.
That's not directly related to what you're asking. That code wasn't using Object.assign(), it was using user code (object-assign) that does the same thing. They appear to be compiling that code with Babel (and bundling it with Webpack), which is what I was talking about: the syntax you can just compile. They apparently preferred that to having to include object-assign as a dependency that would go into their build.
For reference object rest/spread is finalised in ECMAScript 2018 as a stage 4. The proposal can be found here.
For the most part object assign and spread work the same way, the key difference is that spread defines properties, whilst Object.assign() sets them. This means Object.assign() triggers setters.
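A minimal sketch of that difference (the logging setter below is purely illustrative): spreading into a literal defines plain data properties on the new object, while Object.assign() assigns through any setter already defined on the target.
const target = {
  set foo(value) {
    console.log('setter called with', value);
  }
};
Object.assign(target, { foo: 1 }); // logs "setter called with 1"; target.foo stays undefined (the setter has no backing field)
const copy = { ...target, foo: 1 }; // no log; foo becomes an ordinary data property
console.log(copy.foo); // 1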
It's worth remembering that, other than this, object rest/spread maps 1:1 to Object.assign() and acts differently to array (iterable) spread. For example, when spreading an array, null values are spread. However, using object spread, null values are silently spread to nothing.
Array (Iterable) Spread Example
const x = [1, 2, null , 3];
const y = [...x, 4, 5];
const z = null;
console.log(y); // [1, 2, null, 3, 4, 5];
console.log([...z]); // TypeError
Object Spread Example
const x = null;
const y = {a: 1, b: 2};
const z = {...x, ...y};
console.log(z); //{a: 1, b: 2}
This is consistent with how Object.assign() would work, both silently exclude the null value with no error.
const x = null;
const y = {a: 1, b: 2};
const z = Object.assign({}, x, y);
console.log(z); //{a: 1, b: 2}
I think one big difference between the spread operator and Object.assign that doesn't seem to be mentioned in the current answers is that the spread operator will not copy the source object's prototype to the target object. If you want to add properties to an object and you don't want to change what instance it is of, then you will have to use Object.assign.
Edit: I've actually realised that my example is misleading. The spread operator desugars to Object.assign with the first parameter set to an empty object. In my code example below, I put error as the first parameter of the Object.assign call so the two are not equivalent. The first parameter of Object.assign is actually modified and then returned which is why it retains its prototype. I have added another example below:
const error = new Error();
error instanceof Error // true
const errorExtendedUsingSpread = {
...error,
...{
someValue: true
}
};
errorExtendedUsingSpread instanceof Error; // false
// What the spread operator desugars into
const errorExtendedUsingImmutableObjectAssign = Object.assign({}, error, {
someValue: true
});
errorExtendedUsingImmutableObjectAssign instanceof Error; // false
// The error object is modified and returned here so it keeps its prototypes
const errorExtendedUsingAssign = Object.assign(error, {
someValue: true
});
errorExtendedUsingAssign instanceof Error; // true
See also: https://github.com/tc39/proposal-object-rest-spread/blob/master/Spread.md
There's a huge difference between the two, with very serious consequences. The most upvoted answers do not even touch on this, and the information about object spread being a proposal is not relevant in 2022 anymore.
The difference is that Object.assign changes the object in-place, while the spread operator (...) creates a brand new object, and this will break object reference equality.
First, let's see the effect, and then I'll give a real-world example of how important it is to understand this fundamental difference.
First, let's use Object.assign:
// Let's create a new object, that contains a child object;
const parentObject = { childObject: { hello: 'world '} };
// Let's get a reference to the child object;
const childObject = parentObject.childObject;
// Let's change the child object using Object.assign, adding a new `foo` key with `bar` value;
Object.assign(parentObject.childObject, { foo: 'bar' });
// childObject is still the same object in memory, it was changed IN PLACE.
parentObject.childObject === childObject
// true
Now the same exercise with the spread operator:
// Let's create a new object, that contains a child object;
const parentObject = { childObject: { hello: 'world '} };
// Let's get a reference to the child object;
const childObject = parentObject.childObject;
// Let's change the child object using the spread operator;
parentObject.childObject = {
...parentObject.childObject,
foo: 'bar',
}
// They are not the same object in memory anymore!
parentObject.childObject === childObject;
// false
It's easy to see what is going on, because in parentObject.childObject = {...} we are clearly assigning the value of the childObject key in parentObject to a brand new object literal, and the fact that it's composed from the old childObject's contents is irrelevant. It's a new object.
And if you assume this is irrelevant in practice, let me show a real world scenario of how important it is to understand this.
In a very large Vue.js application, we started noticing a lot of sluggishness when typing the name of the customer in an input field.
After a lot of debugging, we found out that each character typed in that input triggered a whole bunch of computed properties to re-evaluate.
This wasn't anticipated, since the customer's name wasn't used at all in those computed functions. Only other customer data (like age and sex) was being used. What was going on? Why was Vue re-evaluating all those computed functions when the customer's name changed?
Well, we had a Vuex store that did this:
mutations: {
  setCustomer(state, payload) {
    // payload being { name: 'Bob' }
    state.customer = { ...state.customer, ...payload };
  }
}
And our computed properties were like this:
veryExpensiveComputed() {
  const customerAge = this.$store.state.customer.age;
}
So, voilà! When the customer name changed, the Vuex mutation was actually changing it to an entirely new object; and since the computed property relied on that object to get the customer's age, Vue counted on that very specific object instance as a dependency, and when it was changed to a new object (failing the === object equality test), Vue decided it was time to re-run the computed function.
The fix? Use Object.assign to not discard the previous object, but to change it in place ...
mutations: {
  setCustomer(state, payload) {
    // payload being same as above: { name: 'Bob' }
    Object.assign(state.customer, payload);
  }
}
BTW, if you are in Vue2, you shouldn't use Object.assign because Vue 2 can't track those object changes directly, but the same logic applies, just using Vue.set instead of Object.assign:
mutations: {
  setCustomer(state, payload) {
    Object.keys(payload).forEach(key => {
      Vue.set(state.customer, key, payload[key])
    })
  }
}
NOTE: Spread is NOT just syntactic sugar around Object.assign. They operate much differently behind the scenes.
Object.assign applies setters to a new object, Spread does not. In addition, the object must be iterable.
Copy
Use this if you need the value of the object as it is at this moment, and you don't want that value to reflect any changes made by other owners of the object.
Use it for creating a shallow copy of the object
It is good practice to always set immutable properties to copy; because mutable versions can be passed into immutable properties, copy ensures that you'll always be dealing with an immutable object.
Assign
Assign is somewhat the opposite of copy.
Assign will generate a setter which assigns the value to the instance variable directly, rather than copying or retaining it.
When calling the getter of an assign property, it returns a reference to the actual data.
I'd like to summarize the status of the "spread object merge" ES feature, in browsers, and in the ecosystem via tools.
Spec
https://github.com/tc39/proposal-object-rest-spread
This language feature is a stage 4 proposal, which means it's been merged into the ES language spec, but not yet widely implemented.
Browsers: in Chrome, in Safari, Firefox soon (ver 60, IIUC)
Browser support for "spread properties" shipped in Chrome 60, including this scenario.
Support for this scenario does NOT work in current Firefox (59), but DOES work in my Firefox Developer Edition. So I believe it will ship in Firefox 60.
Safari: not tested, but Kangax says it works in Desktop Safari 11.1, but not Safari 11
iOS Safari: not tested, but Kangax says it works in iOS 11.3, but not in iOS 11
not in Edge yet
Tools: Node 8.7, TS 2.1
Node.js has supported it since 8.7 (via Kangax). Confirmed on 9.8 when I tested locally.
TypeScript has supported it since 2.1; current is 2.8.
Links
Kangax "property spread"
https://davidwalsh.name/merge-objects
Code Sample (doubles as compatibility test)
var x = { a: 1, b: 2 };
var y = { c: 3, d: 4, a: 5 };
var z = {...x, ...y};
console.log(z); // { a: 5, b: 2, c: 3, d: 4 }
Again: At time of writing this sample works without transpilation in Chrome (60+), Firefox Developer Edition (preview of Firefox 60), and Node (8.7+).
Why Answer?
I'm writing this 2.5 years after the original question. But I had the very same question, and this is where Google sent me. I am a slave to SO's mission to improve the long tail.
Since this is an expansion of "array spread" syntax I found it very hard to google, and difficult to find in compatibility tables. The closest I could find is Kangax "property spread", but that test doesn't have two spreads in the same expression (not a merge). Also, the names in the proposals/drafts/browser status pages all use "property spread", but it looks to me like that was a "first principle" the community arrived at after the proposals to use spread syntax for "object merge". (Which might explain why it is so hard to google.) So I document my findings here so others can view, update, and compile links about this specific feature. I hope it catches on. Please help spread the news of it landing in the spec and in browsers.
Lastly, I would have added this info as a comment, but I couldn't edit them without breaking the authors' original intent. Specifically, I can't edit @ChillyPenguin's comment without it losing his intent to correct @RichardSchulte. But years later Richard turned out to be right (in my opinion). So I write this answer instead, hoping it will gain traction on the old answers eventually (might take years, but that's what the long tail effect is all about, after all).
As others have mentioned, at this moment of writing, Object.assign() requires a polyfill and object spread ... requires some transpiling (and perhaps a polyfill too) in order to work.
Consider this code:
// Babel won't touch this really; it will simply fail if Object.assign() is not supported in the browser.
const objAss = { message: 'Hello you!' };
const newObjAss = Object.assign(objAss, { dev: true });
console.log(newObjAss);
// Babel will transpile this to use a helper function that first attempts to use Object.assign() and then falls back.
const objSpread = { message: 'Hello you!' };
const newObjSpread = {...objSpread, dev: true };
console.log(newObjSpread);
These both produce the same output.
Here is the output from Babel, to ES5:
var objAss = { message: 'Hello you!' };
var newObjAss = Object.assign(objAss, { dev: true });
console.log(newObjAss);
var _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; };
var objSpread = { message: 'Hello you!' };
var newObjSpread = _extends({}, objSpread, { dev: true });
console.log(newObjSpread);
This is my understanding so far. Object.assign() is actually standardised, whereas object spread ... is not yet. The only problem is browser support for the former and, in the future, the latter too.
Play with the code here
Hope this helps.
The object spread operator (...) doesn't work in browsers, because it isn't part of any ES specification yet, just a proposal. The only option is to compile it with Babel (or something similar).
As you can see, it's just syntactic sugar over Object.assign({}).
As far as I can see, these are the important differences.
Object.assign works in most browsers (without compiling)
... for objects isn't standardized
... protects you from accidentally mutating the object (see the sketch below)
... will polyfill Object.assign in browsers without it
... needs less code to express the same idea
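A small sketch of that mutation point, using hypothetical defaults/options objects: Object.assign() writes into its first argument unless you pass a fresh {} as the target, whereas spread always builds a new object.
const defaultOptions = { retries: 3, verbose: false };
const options = { verbose: true };
const merged1 = Object.assign(defaultOptions, options); // defaultOptions itself is now { retries: 3, verbose: true }
const merged2 = { ...defaultOptions, ...options };      // defaultOptions is left untouched; merged2 is a new object
const merged3 = Object.assign({}, defaultOptions, options); // the non-mutating Object.assign form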
Too many wrong answers...
I did a search and found a lot of misinformation on this.
Summary
Neither ...spread nor Object.assign is faster. It depends.
Object.assign is almost always faster if side-effects/object mutation is not a consideration.
Performance aside, there is generally no upside or downside to using either, until you reach edge cases, such as copying objects containing getters/setters or read-only properties. Read more here.
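One such edge case, as a rough sketch: both spread and Object.assign() read a source getter once and copy the resulting plain value, so the accessor itself is lost unless you copy property descriptors instead.
const source = {
  get now() { return Date.now(); } // recomputed on every read
};
const copied = { ...source };               // `now` is frozen to whatever Date.now() returned during the copy
const assigned = Object.assign({}, source); // same: a plain data property, not a getter
// Keeping the getter requires descriptors:
const withGetter = Object.defineProperties({}, Object.getOwnPropertyDescriptors(source));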
Performance
Whether Object.assign or ...spread is faster, it depends on what you are trying to combine, and the runtime you are using (implementations and optimisations happen from time to time). For small objects, it does not matter.
For bigger objects, Object.assign is usually better. Especially if you do not need to care about side-effects, the speed gains come from saving time by just adding properties to the first object, rather than copying from two and creating a brand new one. See:
async function retrieveAndCombine() {
  const bigPayload = await retrieveData()
  const smallPayload = await retrieveData2()

  // only the properties of smallPayload are iterated through,
  // whereas bigPayload is mutated.
  return Object.assign(bigPayload, smallPayload)
}
If side effects are a concern
Side effects matter in cases such as when an object to be combined with another is passed in as an argument.
In the example below, bigPayload will be mutated, which is a bad idea if other functions/objects outside of retrieveAndCombine depend on bigPayload. In this case, you can swap the arguments passed to Object.assign, or just use {} as the first argument to create a new object:
async function retrieveAndCombine(bigPayload) {
  const smallPayload = await retrieveData2()

  // this is slightly more efficient
  return Object.assign(smallPayload, bigPayload)

  // this is still okay, assuming `smallPayload` only has a few properties
  // return Object.assign({}, smallPayload, bigPayload)
}
Test
See for yourself: try the code snippet below. Note: it takes a while to run.
const rand = () => (Math.random() + 1).toString(36).substring(7)

function combineBigObjects() {
  console.log('Please wait...creating the test objects...')
  const obj = {}
  const obj2 = {}
  for (let i = 0; i < 100000; i++) {
    const key = rand()
    obj[rand()] = {
      [rand()]: rand(),
      [rand()]: rand(),
    }
    obj2[rand()] = {
      [rand()]: rand(),
      [rand()]: rand(),
    }
  }

  console.log('Combine big objects using spread:')
  console.time()
  const result1 = {
    ...obj,
    ...obj2,
  }
  console.timeEnd()

  console.log('Combine big objects using assign:')
  console.time()
  Object.assign({}, obj, obj2)
  console.timeEnd()

  console.log('Combine big objects using assign, but mutates first obj:')
  console.time()
  Object.assign(obj, obj2)
  console.timeEnd()
}
combineBigObjects()

function combineSmallObjects() {
  const firstObject = () => ({ [rand()]: rand() })
  const secondObject = () => ({ [rand()]: rand() })

  console.log('Combine small objects using spread:')
  console.time()
  for (let i = 0; i < 500000; i++) {
    const finalObject = {
      ...firstObject(),
      ...secondObject(),
    }
  }
  console.timeEnd()

  console.log('Combine small objects using assign:')
  console.time()
  for (let i = 0; i < 500000; i++) {
    const finalObject = Object.assign({}, firstObject(), secondObject())
  }
  console.timeEnd()

  console.log('Combine small objects using assign, but mutates first obj:')
  console.time()
  for (let i = 0; i < 500000; i++) {
    const finalObject = Object.assign(firstObject(), secondObject())
  }
  console.timeEnd()
}
combineSmallObjects()
Other answers are old; I could not get a good answer from them.
The example below is for object literals; it shows how the two can complement each other, and how they cannot (and therefore how they differ):
var obj1 = { a: 1, b: { b1: 1, b2: 'b2value', b3: 'b3value' } };

// overwrite parts of the b key
var obj2 = {
  b: {
    ...obj1.b,
    b1: 2
  }
};
var res2 = Object.assign({}, obj1, obj2); // b2, b3 keys still exist
document.write('res2: ', JSON.stringify(res2), '<br>');
// Output:
// res2: {"a":1,"b":{"b1":2,"b2":"b2value","b3":"b3value"}} // NOTE: b2, b3 still exist

// overwrite the whole of the b key
var obj3 = {
  b: {
    b1: 2
  }
};
var res3 = Object.assign({}, obj1, obj3); // b2, b3 keys are lost
document.write('res3: ', JSON.stringify(res3), '<br>');
// Output:
// res3: {"a":1,"b":{"b1":2}} // NOTE: b2, b3 values are lost
Several more small examples here, also for array & object:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_syntax
The ways to (1) create shallow copies of objects and (2) merge multiple objects into a single object have evolved a lot between 2014 and 2018.
The approaches outlined below became available and widely used at different times. This answer provides some historical perspective and is not exhaustive.
Without any help from libraries or modern syntax, you would use for-in loops, e.g.
var mergedOptions = {}
for (var key in defaultOptions) {
  mergedOptions[key] = defaultOptions[key]
}
for (var key in options) {
  mergedOptions[key] = options[key]
}
options = mergedOptions
2006
jQuery 1.0 has jQuery.extend():
options = $.extend({}, defaultOptions, options)
⋮
2010
Underscore.js 1.0 has _.extend()
options = _.extend({}, defaultOptions, options)
⋮
2014
2ality published an article about Object.assign() coming to ES2015
object-assign published to npm.
var objectAssign = require('object-assign')
options = objectAssign({}, defaultOptions, options)
The Object Rest/Spread Properties syntax proposed for ES2016.
2015
Object.assign is supported by Chrome (45), Firefox (34) and Node.js (4). Polyfill is required for older runtimes though.
options = Object.assign({}, defaultOptions, options)
The Object Rest/Spread Properties proposal reaches stage 2.
2016
The Object Rest/Spread Properties syntax did not get included in ES2016, but the proposal reached stage 3.
2017
The Object Rest/Spread Properties syntax did not get included in ES2017, but is usable in Chrome (60), Firefox (55), and Node.js (8.3). Some transpilation is needed for older runtimes though.
options = { ...defaultOptions, ...options }
2018
The Object Rest/Spread Properties proposal reaches stage 4 and the syntax is included in ES2018 standard.
Object.assign is necessary when the target object is a constant and you want to set multiple properties at once.
For example:
const target = { data: "Test", loading: true }
Now, suppose you need to mutate the target with all properties from a source:
const source = { data: null, loading: false /* , ...etc */ }
Object.assign(target, source) // Now target is updated
target = { ...target, ...source } // Error: can't assign to a constant
Keep in mind that you are mutating the target object, so whenever possible use Object.assign with an empty target, or use spread to assign to a new object.
This is now part of ES6, thus is standardized, and is also documented on MDN:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Spread_operator
It's very convenient to use and makes a lot of sense alongside object destructuring.
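For example, rest in a destructuring pattern is the mirror image of spread in a literal (the property names here are made up):
const { verbose, ...rest } = { verbose: true, retries: 3, timeout: 100 };
console.log(verbose); // true
console.log(rest);    // { retries: 3, timeout: 100 }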
The one remaining advantage listed above is the dynamic capability of Object.assign(); however, this is as easy as spreading the array inside of a literal object. In the compiled Babel output it uses exactly what is demonstrated with Object.assign().
So the correct answer would be to use object spread since it is now standardized, widely used (see react, redux, etc), is easy to use, and has all the features of Object.assign()
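For the dynamic case shown in the earlier answer, a rough equivalent that stays within spread syntax (assuming an array of source objects) is to reduce over the sources:
const sources = [{ a: 'A' }, { b: 'B' }, { c: 'C' }];
// Dynamic merge with Object.assign, as shown earlier:
const merged1 = Object.assign({}, ...sources);
// Dynamic merge using only object spread:
const merged2 = sources.reduce((acc, src) => ({ ...acc, ...src }), {});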
I'd like to add this simple example when you have to use Object.assign.
class SomeClass {
  constructor() {
    this.someValue = 'some value';
  }
  someMethod() {
    console.log('some action');
  }
}

const objectAssign = Object.assign(new SomeClass(), {});
objectAssign.someValue;   // ok
objectAssign.someMethod(); // ok

const spread = { ...new SomeClass() };
spread.someValue;   // ok
spread.someMethod(); // there are no methods of SomeClass here!
This may not be obvious when you use plain JavaScript, but with TypeScript it is easier to see if you want to create an instance of some class:
const spread: SomeClass = {...new SomeClass()} // Error
The spread operator spreads the array into separate arguments of a function call.
let iterableObjB = [1, 2, 3, 4]
someFunction(...iterableObjB) // is effectively turned into
someFunction(1, 2, 3, 4)
We’ll create a function called identity that just returns whatever parameter we give it.
identity = (arg) => arg
And a simple array.
arr = [1, 2, 3]
If you call identity with arr, we know what’ll happen
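Passing arr hands the function a single argument (the whole array), while spreading it hands over three separate arguments, so only the first one comes back:
identity(arr)    // [1, 2, 3]
identity(...arr) // 1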

Is there any way to iterate all fields of a data class without using reflection?

I know an alternative to reflection, which is using Javassist, but using Javassist is a little bit complex. And because of lambdas and some other features in Kotlin, Javassist doesn't work well sometimes. So is there any other way to iterate all fields of a data class without using reflection?
There are two ways. The first is relatively easy, and is essentially what's mentioned in the comments: assuming you know how many fields there are, you can unpack it and throw that into a list, and iterate over those. Or alternatively use them directly:
data class Test(val x: String, val y: String) {
    fun getData(): List<Any> = listOf(x, y)
}
data class Test(val x: String, val y: String)
...
val (x, y) = Test("x", "y")
// And optionally throw those in a list
Although iterating like this is a slight extra step, this is at least one way you can relatively easily unpack a data class.
If you don't know how many fields there are (or you don't want to refactor), you have two options:
The first is using reflection. But as you mentioned, you don't want this.
That leaves a second, somewhat more complicated preprocessing option: annotations. Note that this only works with data classes you control - beyond that, you're stuck with reflection or implementations from the library/framework coder.
Annotations can be used for several things. One of which is metadata, but also code generation. This is a somewhat complicated alternative, and requires an additional module in order to get compile order right. If it isn't compiled in the right order, you'll end up with unprocessed annotations, which kinda defeats the purpose.
I've also created a version you can use with Gradle, but that's at the end of the post and it's a shortcut to implementing it yourself.
Note that I have only tested this with a pure Kotlin project - I've personally had problems with annotations between Java and Kotlin (although that was with Lombok), so I do not guarantee this will work at compile time if called from Java. Also note that this is complex, but avoids runtime reflection.
Explanation
The main issue here is a certain memory concern. This will create a new list every time you call the method, which makes it very similar to the method used by enums.
Local testing over 10000 iterations also shows a general consistency of ~200 milliseconds to execute my approach, versus roughly 600 for reflection. However, for one iteration, mine uses ~20 milliseconds, whereas reflection uses between 400 and 500 milliseconds. On one run, reflection took 1500 (!) milliseconds, while my approach took 18 milliseconds.
See also Java Reflection: Why is it so slow?. This appears to affect Kotlin as well.
The memory impact of creating a new list every time it's called can be noticeable though, but it'll also be collected so it shouldn't be that big a problem.
For reference, the code used for benchmarking (this will make sense after the rest of the post):
import kotlin.reflect.full.memberProperties

@AutoUnpack data class ExampleDataClass(val x: String, val y: Int, var m: Boolean)

fun main(a: Array<String>) {
    var mine = 0L
    var reflect = 0L
    // for (i in 0 until 10000) {
    var start = System.currentTimeMillis()
    val cls = ExampleDataClass("example", 42, false)
    for (field in cls) {
        println(field)
    }
    mine += System.currentTimeMillis() - start

    start = System.currentTimeMillis()
    for (prop in ExampleDataClass::class.memberProperties) {
        println("${prop.name} = ${prop.get(cls)}")
    }
    reflect += System.currentTimeMillis() - start
    // }
    println(mine)
    println(reflect)
}
Setting up from scratch
This bases itself around two modules: a consumer module, and a processor module. The processor HAS to be in a separate module. It needs to be compiled separately from the consumer for the annotations to work properly.
First of all, your consumer project needs the annotation processor:
apply plugin: 'kotlin-kapt'
Additionally, you need to add stub generation. It complains it's unused while compiling, but without it, the generator seems to break for me:
kapt {
    generateStubs = true
}
Now that that's in order, create a new module for the unpacker. Add the Kotlin plugin if you didn't already. You do not need the annotation processor Gradle plugin in this project. That's only needed by the consumer. You do, however, need kotlinpoet:
implementation "com.squareup:kotlinpoet:1.2.0"
This is to simplify aspects of the code generation itself, which is the important part here.
Now, create the annotation:
@Retention(AnnotationRetention.SOURCE)
@Target(AnnotationTarget.CLASS)
annotation class AutoUnpack
This is pretty much all you need. The retention is set to source because it has no value at runtime, and it only targets compile time.
Next, there's the processor itself. This is somewhat complicated, so bear with me. For reference, this uses the javax.* packages for annotation processing. Android note: this might work assuming you can plug in a Java module on a compileOnly scope without getting the Android SDK restrictions. As I mentioned earlier, this is mainly for pure Kotlin; Android might work, but I haven't tested that.
Anyways, the generator:
Because I couldn't find a way to generate the method into the class without touching the rest (and because according to this, that isn't possible), I'm going with an extension function generation approach.
You'll need a class UnpackCodeGenerator : AbstractProcessor(). In there, you'll first need two lines of boilerplate:
override fun getSupportedAnnotationTypes(): MutableSet<String> = mutableSetOf(AutoUnpack::class.java.name)
override fun getSupportedSourceVersion(): SourceVersion = SourceVersion.latest()
Moving on, there's the processing. Override the process function:
override fun process(annotations: MutableSet<out TypeElement>, roundEnv: RoundEnvironment): Boolean {
    // Find elements with the annotation
    val annotatedElements = roundEnv.getElementsAnnotatedWith(AutoUnpack::class.java)
    if (annotatedElements.isEmpty()) {
        // Self-explanatory
        return false
    }

    // Iterate the elements
    annotatedElements.forEach { element ->
        // Grab the name and package
        val name = element.simpleName.toString()
        val pkg = processingEnv.elementUtils.getPackageOf(element).toString()
        // Then generate the class
        generateClass(name,
                if (pkg == "unnamed package") "" else pkg, // This is a patch for an issue where classes in the root
                // package return the package as "unnamed package" rather than empty,
                // which breaks syntax because "package unnamed package" isn't legal.
                element)
    }

    // Return true for success
    return true
}
This just sets up some of the later framework. The real magic happens in the generateClass function:
private fun generateClass(className: String, pkg: String, element: Element) {
    val elements = element.enclosedElements
    val classVariables = elements
            .filter {
                val name = if (it.simpleName.contains("\$delegate"))
                    it.simpleName.toString().substring(0, it.simpleName.indexOf("$"))
                else it.simpleName.toString()

                it.kind == ElementKind.FIELD // Find fields
                        && Modifier.STATIC !in it.modifiers // that aren't static (thanks to sebaslogen for issue #1: https://github.com/LunarWatcher/KClassUnpacker/issues/1)
                        // Additionally, we have to ignore private fields. Extension functions can't access these, and accessing
                        // them is a bad idea anyway. Kotlin lets you expose get without exposing set. If you, by default, don't
                        // allow access to the getter, there's a high chance exposing it is a bad idea.
                        && elements.any { getter ->
                            getter.kind == ElementKind.METHOD // find methods
                                    && getter.simpleName.toString() ==
                                    "get${name[0].toUpperCase().toString() + (if (name.length > 1) name.substring(1) else "")}" // that match the getter name (by the standard convention)
                                    && Modifier.PUBLIC in getter.modifiers // and that are marked public
                        }
            } // Grab the variables
            .map {
                // Map the name now. Also supports later filtering
                if (it.simpleName.endsWith("\$delegate")) {
                    // Support by lazy
                    it.simpleName.subSequence(0, it.simpleName.indexOf("$"))
                } else it.simpleName
            }

    if (classVariables.isEmpty()) return // Self-explanatory

    val file = FileSpec.builder(pkg, className)
            .addFunction(FunSpec.builder("iterator") // For automatic unpacking in a for loop
                    .receiver(element.asType().asTypeName().copy()) // Add it as an extension function of the class
                    .addStatement("return listOf(${classVariables.joinToString(", ")}).iterator()") // Add the return statement. Create a list, push an iterator.
                    .addModifiers(KModifier.PUBLIC, KModifier.OPERATOR) // This needs to be public. Because it's an iterator, the function also needs the `operator` keyword
                    .build()
            ).build()

    // Grab the generated sources directory.
    val genDir = processingEnv.options["kapt.kotlin.generated"]!!
    // Then write the file.
    file.writeTo(File(genDir, "$pkg/${element.simpleName.replace("\\.kt".toRegex(), "")}Generated.kt"))
}
All of the relevant lines have comments explaining use, in case you're not familiar with what this does.
Finally, in order to get the processor to process, you need to register it. In the module for the generator, add a file called javax.annotation.processing.Processor under main/resources/META-INF/services. In there you write:
com.package.of.UnpackCodeGenerator
From here, you need to link it using compileOnly and kapt. If you added it as a module to your project, you can do:
kapt project(":ClassUnpacker")
compileOnly project(":ClassUnpacker")
Alternative source setup:
Like I mentioned earlier, I bundled this into a jar for convenience. It's under the same license as SO uses (CC-BY-SA 3.0), and it contains the exact same code as in the answer (although compiled into a single project).
If you want to use this one, just add the Jitpack repo:
repositories {
    // Other repos here
    maven { url 'https://jitpack.io' }
}
And hook it up with:
kapt 'com.github.LunarWatcher:KClassUnpacker:v1.0.1'
compileOnly "com.github.LunarWatcher:KClassUnpacker:v1.0.1"
Note that the version here may not be up to date: the up to date list of versions is available here. The code in the post still aims to reflect the repo, but versions aren't really important enough to edit every time.
Usage
Regardless of which way you ended up using to get the annotations, the usage is relatively easy:
@AutoUnpack data class ExampleDataClass(val x: String, val y: Int, var m: Boolean)

fun main(a: Array<String>) {
    val cls = ExampleDataClass("example", 42, false)
    for (field in cls) {
        println(field)
    }
}
This prints:
example
42
false
Now you have a reflection-less way of iterating fields.
Note that local testing has been done partially with IntelliJ, but IntelliJ doesn't seem to like me - I've had various failed builds where gradlew clean && gradlew build from a command line oddly works fine. I'm not sure whether this is a local problem, or if this is a general problem, but you might have some issues like this if you build from IntelliJ.
Also, you might get errors if the build fails. The IntelliJ linter builds on top of the build directory for some sources, so if the build fails and the file with the extension function isn't generated, that'll cause it to appear as an error. Building usually fixes this when I tested (with both modules and from Jitpack).
You'll also likely have to enable the annotation processor setting if you use Android Studio or IntelliJ.
Here is another idea that I came up with but am not satisfied with; still, it has some pros and cons:
pros:
adding/removing fields to/from the data class causes compiler errors at field-iteration sites
no boiler-plate code needed
cons:
won't work if default values are defined for arguments
declaration:
data class Memento(
    val testType: TestTypeData,
    val notes: String,
    val examinationTime: MillisSinceEpoch?,
    val administeredBy: String,
    val signature: SignatureViewHolder.SignatureData,
    val signerName: String,
    val signerRole: SignerRole
) : Serializable
Iterating through all fields (you can use this directly at call sites, or apply the Visitor pattern and use this in the accept method to call all the visit methods):
val iterateThroughAllMyFields: Memento = someValue
Memento(
    testType = iterateThroughAllMyFields.testType.also { testType ->
        // do something with testType
    },
    notes = iterateThroughAllMyFields.notes.also { notes ->
        // do something with notes
    },
    examinationTime = iterateThroughAllMyFields.examinationTime.also { examinationTime ->
        // do something with examinationTime
    },
    administeredBy = iterateThroughAllMyFields.administeredBy.also { administeredBy ->
        // do something with administeredBy
    },
    signature = iterateThroughAllMyFields.signature.also { signature ->
        // do something with signature
    },
    signerName = iterateThroughAllMyFields.signerName.also { signerName ->
        // do something with signerName
    },
    signerRole = iterateThroughAllMyFields.signerRole.also { signerRole ->
        // do something with signerRole
    }
)

What is this anti-pattern called (using parent scopes to pass state)?

I'm trying to describe to a colleague the issues I have with how their code is structured, and I'm looking for the name of the anti-pattern he's implemented (bonus points for the software principles it violates). I'm using JS to demonstrate, but this isn't JS specific.
function x() {
  var a, b, c;

  var doWork = function () {
    a = 1;
    b = 2;
    addAB();
    return c;
  };

  var addAB = function () {
    c = a + b;
  };

  var result = doWork();
}
He's passing information into and out of functions/methods using the parent scope. It makes understanding the code very difficult.
I don't know that there is an official name for it, but the issue you are describing is creating functions with side effects.
You don't want to have any function that modifies anything outside of its own scope. Having a shared member (in this case a, b, & c) that can be modified by any other function can lead to unknown and/or inconsistent states and/or behaviors.
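A rough rework of the snippet from the question along those lines (just one possible fix): each function takes what it needs as parameters and returns its result instead of reaching into the parent scope.
function x() {
  var addAB = function (a, b) {
    return a + b;
  };
  var doWork = function () {
    var a = 1;
    var b = 2;
    return addAB(a, b);
  };
  var result = doWork(); // 3
}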
I believe that your concerns aren't applicable to JavaScript (and many other programming languages). Your code and your team mates are using closures:
Closures are functions that refer to independent (free) variables
(variables that are used locally, but defined in an enclosing scope).
In other words, these functions 'remember' the environment in which
they were created.
In JavaScript, and many other languages that can create closures, it is very common to access the parent scope's references, and it provides more power to the code rather than pain. Obviously, a wrongly used tool produces pain, but I would have to analyze your mates' code to be sure that it isn't just that you're against closures.
In summary, closures aren't an anti-pattern. They're a language feature.
For example, your code could be an actual use case like DOM event handling:
var text = "";
document.getElementById("someButton").addEventListener(function() {
text = document.getElementById("someInput").value;
});
And some developers implement something like private functions defining them inside a constructor function:
function A() {
  this.text = "";
  var that = this;
  function fillText() {
    that.text = "hello world";
  }
  fillText();
}
var a = new A();
console.log(a.text); // "hello world"

How to register component interface in wxwebconnect?

I'm doing an experiment with the wxWebConnect test application, incorporating the XPCOM tutorial at "http://nerdlife.net/building-a-c-xpcom-component-in-windows/"
I adapted the MyComponent class as necessary to compile together with testapp.exe (not as a separate DLL), and in MyApp::OnInit I have the following lines:
ns_smartptr<nsIComponentRegistrar> comp_reg;
res = NS_GetComponentRegistrar(&comp_reg.p);
if (NS_FAILED(res))
    return false;

ns_smartptr<nsIFactory> prompt_factory;
CreateMyComponentFactory(&prompt_factory.p);

nsCID prompt_cid = MYCOMPONENT_CID;
res = comp_reg->RegisterFactory(prompt_cid,
                                "MyComponent",
                                "@mozilla.org/mycomp;1",
                                prompt_factory);
Those lines are copied from GeckoEngine::Init(), using the same mechanism to register PromptService, etc. The code compiles well and testapp.exe is running as expected.
I put in a JavaScript test as below:
try {
  netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
  const cid = "@mozilla.org/mycomp;1";
  obj = Components.classes[cid].createInstance();
  alert(typeof obj);
  // bind the instance we just created to our interface
  alert(Components.interfaces.nsIMyComponent);
  obj = obj.QueryInterface(Components.interfaces.nsIMyComponent);
} catch (err) {
  alert(err);
  return;
}
and get the following exception:
Could not convert JavaScript argument arg 0 [nsISupport.QueryInterface]
The first alert says "object", so the line
Components.classes[cid].createInstance()
is returning the created instance.
The second alert says "undefined", so the interface nsIMyComponent is not recognized by XULRunner.
How do I dynamically register the nsIMyComponent interface in the wxWebConnect environment?
Thx
I'm not sure what is happening here. The first thing I would check is that your component is scriptable (I assume it is, since the demo you copied from is). The next thing I would check is whether you can instantiate other, standard XULRunner components and get their interface (try something like "alert('Components.interfaces.nsIFile');" - at least in my version of wxWebConnect this shows an alert box with the string "nsIFile").
Also, I think it would be worth checking the Error Console to make sure there are no errors or warnings reported. A magic string to do that (in Javascript) is:
window.open('chrome://global/content/console.xul', '', 'chrome,dialog=no,toolbar,resizable');