How do I find an object by name, as with findChild() in C++, but from JavaScript?
As the QML side usually gives you no access to the Qt/C++ parent/child relationships, only to the visual parent/child relationships, you will need to resort to C++.
You could, for example, create an object that exposes a method which takes a QObject and a name, and calls that object's findChild() function.
If you only want to find a visual child, you might just implement a recursive breadth-first search over the visual tree in JS and call that.
But as I said in my comment: if you need this, you probably messed up somewhere else and should think about a way to do it without findChild(). Using it is not recommended in C++, and it is certainly not recommended in QML.
As it is a recursive search, it will do its best to kill your performance. A UI easily contains a thousand elements, and you would need to compare strings all the time. Further, you would be lying about your dependencies by sneakily accessing things you never declared you depend on.
The objectName and the object that calls findChild might not live in the same logical part of your code, so the search is easily broken: if someone changes that object or its objectName, you end up searching for a name that no longer exists.
Additionally, even if you find an object with the right name - and possibly the right type - there is still no guarantee it is the right object, as objectNames are not necessarily unique.
All in all, it is not the best design to access objects like that - though it might be possible.
Disclaimer: I have not tried out my proposed solution, as I don't want to waste my time on it.
What are the differences between these two? Why would you pick one over the other? Is it just personal preference, or is there an actual reason to use either a built-in function or a property like .length?
I think using *.length over *.length() or len(*) is something of a historical artifact, probably done to make getting the length of an array as fast as possible. Arrays, after all, are a very basic data structure in many languages, and getting the length of one is an extremely common operation. And accessing a property is much faster than calling a method.
Nowadays a compiler could probably optimize that kind of thing away, but back then I think there was a pull toward ease of implementation, which guided many languages to simply expose *.length as a property.
However, in any OOP language at least, it's more consistent to have *.length(), because while arrays have immutable lengths and can afford to expose *.length as a constant value, other data structures, where you can add or remove values, would not be able to do this.
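For what it's worth, Go went the built-in-function route the question alludes to: len() is a compiler built-in that works uniformly on arrays, slices, strings, maps and channels, and typically compiles down to reading a stored count. A quick sketch:

package main

import "fmt"

func main() {
    arr := [3]int{1, 2, 3}      // array: length fixed at compile time
    sl := []int{1, 2, 3, 4}     // slice: length stored in the slice header
    s := "hello"                // string: length stored alongside the data
    m := map[string]int{"a": 1} // map: count maintained by the runtime

    fmt.Println(len(arr), len(sl), len(s), len(m)) // 3 4 5 1
}

So the property-versus-method trade-off can also be sidestepped entirely by making length a uniform language-level operation.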
I know that Go doesn't have classes in the traditional OOP sense, but Go does provide a notion of interfaces that allows you to do most of the OOP things you'd want to do.
But does Go allow for something like creating a templated class? For instance, I'm reading through the source code of the container/list package. It defines a list and the list's associated methods, but in all of the methods the values contained in the list are of type interface{} - so, of any type. Is there any way to create a list that is constrained to hold only values of a particular type? int, string, Fruit... whatever.
Newer than gotgo, there's a code-generation-based package called "gen".
http://clipperhouse.github.io/gen/
gen is an attempt to bring some generics-like functionality to Go, with inspiration from C#'s LINQ, JavaScript's Array methods and the underscore library. Operations include filtering, grouping, sorting and more.
The pattern is to pass funcs as you would pass lambdas in LINQ or functions in JavaScript.
Basically what @FUZxxl said.
Generics? Not at this time.
Templates? Not quite. There are third-party projects like gotgo which aim to add template support by pre-processing files. However, gotgo is quite dead as far as I know.
Your best option is to use interfaces or reflection for the time being.
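To make the interface option concrete, here is a minimal sketch (the wrapper type and its method names are my own) of constraining container/list to ints, so the type assertion lives in exactly one place:

package main

import (
    "container/list"
    "fmt"
)

// IntList wraps container/list so that only ints can go in.
type IntList struct {
    l list.List
}

func (il *IntList) PushBack(v int) { il.l.PushBack(v) }

// Front asserts back to int; safe because PushBack only accepts ints.
func (il *IntList) Front() int {
    return il.l.Front().Value.(int)
}

func main() {
    var il IntList
    il.PushBack(42)
    fmt.Println(il.Front()) // 42
}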
If you really want to use reflection, note that the reflect package offers a way to fill a typed function variable with generic (reflected) content. You can use this to combine a reflection-based implementation with types the compiler can still check.
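Here is a minimal sketch of that technique, adapted from the reflect.MakeFunc documentation: one reflection-based implementation is poured into a statically typed function variable, so every call site stays compiler-checked:

package main

import (
    "fmt"
    "reflect"
)

// fill expects a pointer to a function variable and sets it to a
// reflection-built implementation that swaps its two arguments.
func fill(fptr interface{}) {
    fn := reflect.ValueOf(fptr).Elem()
    swap := func(in []reflect.Value) []reflect.Value {
        return []reflect.Value{in[1], in[0]}
    }
    fn.Set(reflect.MakeFunc(fn.Type(), swap))
}

func main() {
    var intSwap func(int, int) (int, int)
    fill(&intSwap)
    fmt.Println(intSwap(1, 2)) // 2 1, with the signature checked by the compiler
}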
The browser-based software StudyTRAX ( http://wiki.studytrax.com ), used for research data management, allows for custom form and form variable management via JavaScript. However, a StudyTRAX "variable" (essentially, a representation of both an element of a form [HTML properties included] and its corresponding parameter, with some data typing/etc.) must be referred to with #<varname>, while regular JavaScript variables will just be <varname>.
Is this sort of thing done to make parsing easier, or is it just to distinguish between the two so that researchers who aren't so technologically-inclined won't have as much trouble figuring out what they're doing? Given the nature of JavaScript, I would think the StudyTRAX "variables" are just regular JavaScript objects defined in such a way to make form design and customization simpler, and thus the latter would make more sense, but am I wrong?
Also, I know that there are other programming languages that require specific variable prefixes (though I can't think of any off the top of my head at the moment); what is, or was, the usual reasoning for that choice in language design?
Two-part answer. First, StudyTRAX is almost certainly using a preprocessor to do some magic. JavaScript makes this relatively easy, but not as easy as a Lisp would: you still need to parse the code. By prefixing, the parser can ignore a lot of JavaScript's complicated syntax and get to the good part without needing a "picture perfect" compiler. Actually, a lot of templating systems do this. It is an implementation of Lisp's quasi-quote (see Greenspun's Tenth Rule).
As for prefixes in general, the best way to understand them is to try to write a parser for a language without them. For very dynamic and pure languages like Lisp and JavaScript, where everything is a list / object, it is not too bad. In languages where methods are distinct from objects, or functions are not first class, the parser has to ask itself what kind of thing "foo" refers to. An annoying example from Ruby: an unprefixed identifier is either a local variable or a method implicitly called on self. In Rails there are a few functions implemented with method_missing. Person.find_first_by_rank works fine, but
class Person < ActiveRecord::Base
  def promotion(name)
    p = find_first_by_rank
    [...]
  end
end
gives an error because find_first_by_rank looks like it might be a local variable and Ruby is scared to call method_missing on something that might just be a misspelled local variable.
Now imagine trying to distinguish between instance variables (prefix @), class variables (prefix @@), global variables (prefix $), constants (first letter capitalized), method names and local variables (no prefix, lowercase) by context alone.
(From a compiler & language design hobbyist.)
Your question is mostly specific to the "StudyTRAX" software.
In the early days of programming, BASIC marked string variables with a $ sigil ("a$") to distinguish them from numeric values. Today, some programming languages such as PHP prefix variables with "$". FORTRAN typed variables by their first letter: names starting with I through N were integers, and the rest were floats.
Transforming, and later executing, code is a complex task; that is why many language designers use shortcuts like adding prefixes or suffixes to variables.
Many colleges and universities offer specialized courses ("Compilers", "Automata", "Language Design") on transforming code from a programming language into something the computer can execute, because it is not an easy task.
Perl requires different variable prefixes, depending on the type of data:
$scalar = 4.2;
@array = (1, 4, 9, 16);
%map = ("foo" => 42, "bar" => 17, "baz" => 137);
As I understand it, this is so the reader can immediately identify what kind of object they're dealing with. It's not a matter of whether the reader is technologically inclined or not: if you reduce the programmer's cognitive load, he can use his brainpower for more important things than figuring out fiddly syntactic details.
Whether Perl's design is successful in this respect is another question, but I believe that's the reasoning behind the feature.
What features do you wish were in common languages? More precisely, I mean features which generally don't exist at all but would be nice to see, rather than, "I wish dynamic typing was popular."
I've often thought that "observable" would make a great field modifier (like public, private, static, etc.)
GameState {
    observable int CurrentScore;
}
Then, other classes could declare an observer of that property:
ScoreDisplay {
    observe GameState.CurrentScore(int oldValue, int newValue) {
        ...do stuff...
    }
}
The compiler would wrap all access to the CurrentScore property with notification code, and observers would be notified immediately upon the value's modification.
Sure, you can do the same thing in most programming languages with event listeners and property-change handlers, but it's a huge pain in the ass and requires a lot of piecemeal plumbing, especially if you're not the author of the class whose values you want to observe. In that case you usually have to write a wrapper subclass, delegating all operations to the original object and sending change events from the mutator methods. Why can't the compiler generate all that dumb boilerplate code?
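For a sense of the plumbing being complained about, here is roughly what the manual version looks like as a Go sketch (all names are hypothetical):

package main

import "fmt"

// GameState must hand-roll everything the proposed keyword would generate:
// a listener list, a registration method, and a mutator that fires callbacks.
type GameState struct {
    currentScore int
    observers    []func(oldValue, newValue int)
}

func (g *GameState) Observe(f func(oldValue, newValue int)) {
    g.observers = append(g.observers, f)
}

func (g *GameState) SetCurrentScore(v int) {
    old := g.currentScore
    g.currentScore = v
    for _, f := range g.observers {
        f(old, v)
    }
}

func main() {
    var gs GameState
    gs.Observe(func(oldValue, newValue int) {
        fmt.Printf("score: %d -> %d\n", oldValue, newValue)
    })
    gs.SetCurrentScore(10) // prints "score: 0 -> 10"
}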
I guess the most obvious answer is Lisp-like macros. Being able to process your code with your code is wonderfully "meta" and allows some pretty impressive features to be developed from (almost) scratch.
A close second is double or multiple-dispatch in languages like C++. I would love it if polymorphism could extend to the parameters of a virtual function.
I'd love for more languages to have a type system like Haskell. Haskell utilizes a really awesome type inference system, so you almost never have to declare types, yet it's still a strongly typed language.
I also really like the way you declare new types in Haskell. I think it's a lot nicer than, e.g., object-oriented systems. For example, to declare a binary tree in Haskell, I could do something like:
data Tree a = Node a (Tree a) (Tree a) | Empty
So the composite data types are more like algebraic types than objects. I think it makes reasoning about the program a lot easier.
Plus, mixing in type classes is a lot nicer. A type class is just a set of operations that a type implements; it's sort of like an interface in a language like Java, but more like a mixin in a language like Ruby, I guess. It's kind of cool.
Ideally, I'd like to see a language like Python, but with data types and type classes like Haskell instead of objects.
I'm a big fan of closures / anonymous functions.
my $y = "world";
my $x = sub { print @_, $y };
&$x( 'hello' );    # helloworld
and
my $adder = sub {
    my $reg = $_[0];
    return sub { return $reg + $_[0]; };
};
print $adder->(4)->(3);    # 7
I just wish they were more commonplace.
Things from Lisp I miss in other languages:
Multiple return values
required, keyword, optional, and rest parameters (freely mixable) for functions
functions as first class objects (becoming more common nowadays; see the Go sketch after this list)
tail call optimization
macros that operate on the language, not on the text
consistent syntax
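Two of those have since landed in at least one mainstream language; for instance, Go has both multiple return values and functions as first-class objects:

package main

import "fmt"

// divmod returns two values at once; no wrapper object needed.
func divmod(a, b int) (int, int) {
    return a / b, a % b
}

func main() {
    q, r := divmod(17, 5)
    fmt.Println(q, r) // 3 2

    // functions are ordinary values
    twice := func(f func(int) int, x int) int { return f(f(x)) }
    inc3 := func(x int) int { return x + 3 }
    fmt.Println(twice(inc3, 1)) // 7
}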
To start things off, I wish the standard for strings was to use a prefix if you wanted escape codes, rather than having them on by default. E.g. in C# you can prefix with @ for a raw string. Similarly, Python has the r prefix. I'd rather use @/r when I don't want a raw string and do need escape codes.
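Go, for comparison, avoids the prefix entirely by switching on the quote character: double quotes interpret escapes, backquotes give a raw string:

package main

import "fmt"

func main() {
    escaped := "line1\nline2" // interpreted: \n becomes a newline
    raw := `line1\nline2`     // raw: backslash and n stay verbatim

    fmt.Println(escaped)
    fmt.Println(raw)
}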
More powerful templates that are actually designed for metaprogramming, rather than C++ templates, which are really designed for relatively simple generics and are Turing-complete almost by accident. The D programming language has these, but it's not very mainstream yet.
An immutable keyword. Yes, you can make immutable objects, but that's a lot of pain in most languages.
class JustAClass
{
    private readonly int id;
    private readonly MyClass obj;

    public MyClass Obj
    {
        get
        {
            return obj;
        }
    }
}
At first glance, JustAClass looks like an immutable class, but that's not the case: any other object holding the same reference can still modify the obj object.
So it would be better to introduce a new immutable keyword. When immutable is used, that object will be treated as immutable.
I like some of the array manipulation capabilities found in the Ruby language. I wish we had some of that built into .NET and Java. Of course, you can always create such a library, but it would be nice not to have to do that!
Also, static indexers are awesome when you need them.
Type inference. It's slowly making its way into mainstream languages, but it's still not good enough. F# is the gold standard here.
I wish there was a self-reversing assignment operator, which rolled back when out of scope. This would be to replace:
type datafoobak = item.datafoobak
item.datafoobak = 'tootle'
item.handledata()
item.datafoobak = datafoobak
with this
item.datafoobak #=# 'tootle'
item.handledata()
One could explicitly roll back such changes, but they'd also roll back once out of scope. This kind of feature might be a bit error prone, but it would also make for much cleaner code in some cases. Some sort of shallow clone might be a more effective way to do this:
itemclone = item.shallowclone
itemclone.datafoobak='tootle'
itemclone.handledata()
However, shallow clones might have issues if their functions modified their internal data...though so would reversible assignments.
I'd like to see single-method and single-operator interfaces:
interface Addable<T> --> HasOperator( T = T + T)
interface Splittable<T> --> HasMethod( T[] = T.Split(T) )
...or something like that...
I envision it as being a typesafe implementation of duck-typing. The interfaces wouldn't be guarantees provided by the original class author. They'd be assertions made by a consumer of a third-party API, to provide limited type-safety in cases where the original authors hadn't anticipated.
(A good example of this in practice would be the INumeric interface that people have been clamoring for in C# since the dawn of time.)
In a duck-typed language like Ruby, you can call any method you want, and you won't know until runtime whether the operation is supported, because the method might not exist.
I'd like to be able to make small guarantees about type safety, so that I can polymorphically call methods on heterogeneous objects, as long as all of those objects have the method or operator that I want to invoke.
And I should be able to verify the existence of the methods/operators I want to call at compile time. Waiting until runtime is for suckers :o)
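Go's implicitly satisfied interfaces come close to this: the consumer declares just the method it needs, and any type with a matching method satisfies the interface at compile time, with no cooperation from the original author. A sketch:

package main

import (
    "fmt"
    "strings"
)

// Splitter is declared by the consumer of the API; the type below
// never names it, yet satisfies it simply by having the method.
type Splitter interface {
    Split(sep string) []string
}

// CSVRecord stands in for a third-party type we don't control.
type CSVRecord string

func (r CSVRecord) Split(sep string) []string {
    return strings.Split(string(r), sep)
}

// splitAll accepts anything with the right method, checked at compile time.
func splitAll(items []Splitter, sep string) {
    for _, it := range items {
        fmt.Println(it.Split(sep))
    }
}

func main() {
    splitAll([]Splitter{CSVRecord("a,b,c")}, ",") // [a b c]
}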
Lisp style macros.
Multiple dispatch.
Tail call optimization.
First class continuations.
Call me silly, but I don't think every feature belongs in every language. It's the "jack of all trades, master of none" syndrome. I like having a variety of tools available, each one of which is the best it can be for a particular task.
Higher-order functions, like map, flatMap, foldLeft, foldRight, and so on. A type system like Scala's (builder safety). Having the compiler remove high-level libraries at compile time, while still having them if you run in "interpreted" or "less-compiled" mode (speed... sometimes you need it).
There are several good answers here, but I will add some:
1 - The ability to get a string representation of the current and calling code, so that I could easily output a variable name and its value, or print the name of the current class or function, or a stack trace, at any time.
2 - Pipes would be nice too. This feature is common in shells, but uncommon in other types of languages.
3 - The ability to easily delegate any number of methods to another class. This looks like inheritance, but even in the presence of inheritance, once in a while we need some kind of wrapper or stub that cannot be implemented as a child class, and forwarding all the methods requires a lot of boilerplate code.
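Point 3 exists today as Go's struct embedding: the inner type's methods are promoted onto the wrapper automatically, and you write forwarding code only for the methods you want to intercept. A sketch with made-up types:

package main

import "fmt"

type Logger struct{}

func (Logger) Info(msg string)  { fmt.Println("INFO:", msg) }
func (Logger) Debug(msg string) { fmt.Println("DEBUG:", msg) }

// PrefixedLogger delegates every Logger method via embedding and
// overrides only the one it cares about.
type PrefixedLogger struct {
    Logger
    prefix string
}

func (p PrefixedLogger) Info(msg string) {
    p.Logger.Info(p.prefix + msg)
}

func main() {
    l := PrefixedLogger{prefix: "[app] "}
    l.Info("started")  // INFO: [app] started
    l.Debug("details") // DEBUG: details (delegated automatically)
}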
I'd like a language that was much more restrictive and was designed around producing good, maintainable code without any trickiness. Also, it should be designed to give the compiler the ability to check as much as possible at compile time.
Start with a newish, VM-based, heavily OO language.
Remove complexities like Operator Overloading and multiple inheritance if they exist.
Force all non-final variables to Private.
Members should default to "Final" but should have a "Variable" tag to override it. (This may require built-in support for the builder pattern to be fully effective).
Variables should not allow a "Null" value by default, but variables and parameters should have a "nullable" tag that indicates that null is acceptable for that variable.
It would also be nice to be able to avoid some common questionable patterns:
Some built-in way to simplify IOC/DI to eliminate singletons,
From Java: eliminate checked exceptions so people stop putting in empty catch blocks.
Finally focus on code readability:
Named Parameters
Remove the ability to create methods more than, say, 100 lines long.
Add some complexity analysis to help detect complicated methods and classes.
I'm sure I haven't named 1/10 of the items possible, but basically I'm talking about something that compiles to the same bytecode as C# or Java, but is so restrictive that a programmer can hardly help but write good code.
And yes, I know there are lint-type tools that will do some of this, but I've never seen them used on any project I've worked on (and they wouldn't even run on the code I'm working on now), so they aren't being very helpful. I would love to see a compile actually fail when you type in a 101-line method...