Where is the PyQt5 documentation for classes, methods and modules?

I'm trying to learn PyQt5 and I am finding it very difficult, since I can't just guess what methods are available. I've just spent an entire week trying to find a method to simulate a button push. I eventually found the solution (QPushButton.animateClick()) only after stumbling across an example someone left out there (how did this person know this?). It's very difficult to develop without some reference to which tools are available!
Riverbank has a version of what I'm looking for, but it is not complete, making it virtually useless.

PyQt5, being a Qt5 binding, has almost all of Qt's functionality (the known incompatibilities are minimal), so the Qt5 documentation at https://doc.qt.io/ is valid for PyQt5, apart from a few exceptions.
Although the documentation targets C++, the descriptions of the classes and methods are generic, so they apply equally to PyQt5. Many of the examples are written in C++, but in most cases the translation to Python is trivial.
So, to avoid doing the work twice, Riverbank Computing Limited apparently documents only those exceptions, which are indicated in the PyQt5 docs: https://www.riverbankcomputing.com/static/Docs/PyQt5/
The rest of my answer proposes some tips for handling the Qt documentation.
The Qt documentation also has an easy-to-understand structure. For example, let's analyze the QPushButton class (https://doc.qt.io/qt-5/qpushbutton.html):
At the top of that page there is a table.
This table indicates how to include the class in a C++ project, how to add it to qmake, which class it inherits from, and which classes inherit from it. Relevant information for PyQt5 can be extracted from it, such as which sub-module the class belongs to: in this case, QT += widgets tells us that it belongs to the QtWidgets sub-module. In general, QT += submodulefoo means the class belongs to QtSubModuleFoo (camel case).
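As a minimal sketch of how that information maps to a PyQt5 import (the button text and the lambda are just placeholders; animateClick() is the method mentioned in the question):
import sys
from PyQt5 import QtWidgets  # "QT += widgets" in the C++ table -> the PyQt5.QtWidgets sub-module

app = QtWidgets.QApplication(sys.argv)
button = QtWidgets.QPushButton("Press me")
button.clicked.connect(lambda: print("clicked"))
button.show()
button.animateClick()  # simulates a user click after a short delay
sys.exit(app.exec_())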
If you want to know all the methods of the QPushButton class, use the "List of all members, including inherited members" link just below the table; in this case the link is https://doc.qt.io/qt-5/qpushbutton-members.html, where you will find the complete list of all the class's methods, enumerations, etc.
Other tips to understand the conversion between Qt/C++ and PyQt5/Python are:
Some methods use pointers as output parameters to return information, for example:
void QLayout::getContentsMargins(int *left, int *top, int *right, int *bottom) const
bool QProcess::startDetached(qint64 *pid = nullptr), etc
These are transformed in PyQt5 into:
from PyQt5 import QtCore, QtWidgets

lay = QtWidgets.QVBoxLayout()  # any concrete QLayout subclass
left, top, right, bottom = lay.getContentsMargins()  # the four int* outputs become a tuple
process = QtCore.QProcess()
# ... configure the program and arguments ...
ok, pid = process.startDetached()  # the bool return value plus the qint64* output
Some method names collide with Python reserved words such as exec, raise, print, etc., so to avoid incompatibilities an underscore is appended: exec_, raise_, print_, etc.
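For example, a minimal sketch (the QTimer.singleShot call is only there so the event loop returns immediately and the sketch terminates on its own):
import sys
from PyQt5 import QtCore, QtWidgets

app = QtWidgets.QApplication(sys.argv)
QtCore.QTimer.singleShot(0, app.quit)  # schedule an immediate quit
app.exec_()  # C++: app.exec(); PyQt5 appends an underscore because exec was a reserved word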
In Qt, slots and signals are marked with the Q_SLOT and Q_SIGNAL macros; in PyQt5 these are translated into the @pyqtSlot decorator and pyqtSignal class attributes.
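A minimal sketch of those Python equivalents (Counter is a made-up class used only for illustration):
from PyQt5 import QtCore

class Counter(QtCore.QObject):
    # Q_SIGNAL in C++ becomes a pyqtSignal class attribute in PyQt5
    valueChanged = QtCore.pyqtSignal(int)

    def __init__(self, parent=None):
        super().__init__(parent)
        self._value = 0

    # Q_SLOT in C++ becomes the @pyqtSlot decorator in PyQt5
    @QtCore.pyqtSlot()
    def increment(self):
        self._value += 1
        self.valueChanged.emit(self._value)

counter = Counter()
counter.valueChanged.connect(print)
counter.increment()  # prints 1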
In conclusion, my recommendation is to use the Qt and PyQt5 documentation side by side to discover all the functionality. In addition, there are many Q&As on SO about translating from one language to the other, so you can learn from those as well.
The Qt documentation can also be consulted using the Qt Assistant tool.

The main PyQt5 documentation is on the official website:
https://www.riverbankcomputing.com/static/Docs/PyQt5/
But it's still incomplete, and most parts refer to the official Qt documentation:
https://doc.qt.io/qt-5/
While that's C++-oriented, consider that almost every module, class and function behaves in exactly the same way as it does in Python, so it's usually better to use that.
Consider that:
in the function lists you'll always see the returned type on the left of each function;
"void" means that the function returns None;
when overriding some existing method (especially private and virtual ones), you always have to return the expected types listed for that function;
function arguments are usually written in the form [const] Type argName=default: you can usually ignore the "const" part (it's a C++ term), while the argName for keyword arguments might be different in PyQt;
some functions have different names, since the original names are reserved words in Python (print, raise, etc.); in those cases, an underscore is always appended;
some positional or keyword arguments might be different, or the return signature might differ; that's because in C++ you can pass a pointer to a variable as an argument, and the function will change that variable through the pointer (this is an oversimplification);
all "properties" are not Python properties; they are only accessible through their accessor functions, such as self.width() and self.setWidth() (see the sketch after this list);
some methods have different overloads; in some cases Python has special signatures with different arguments that are not available in C++, and vice versa; also, some methods don't exist at all on one side or the other;
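As a minimal sketch of the point about properties above (reusing QPushButton; the button text is just a placeholder):
import sys
from PyQt5 import QtWidgets

app = QtWidgets.QApplication(sys.argv)
button = QtWidgets.QPushButton()
button.setText("Click me")  # "text" is listed under Properties in the C++ docs
print(button.text())        # accessed through accessor functions, not as a Python attribute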
My suggestion is to always use the official documentation; it's only a matter of habit to get used to the C++ references (and you'll find it educational, too). Whenever a doubt arises, check the PyQt documentation to see what's different, and use the help command in the Python shell.
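For instance, a minimal sketch of that last suggestion from a Python shell:
from PyQt5 import QtWidgets

# dir() lists every attribute of the class, including inherited ones,
# much like the "List of all members" page in the Qt docs
print([name for name in dir(QtWidgets.QPushButton) if not name.startswith("_")])
# help() shows the PyQt5 signatures and docstring of a single method
help(QtWidgets.QPushButton.animateClick)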

Related

Enforcing API boundaries at the Module (Distribution?) level

How do I structure Raku code so that certain symbols are public within the library I am writing, but not public to users of the library? (I'm saying "library" to avoid the terms "distribution" and "module", which the docs sometimes use in overlapping ways. But if there's a more precise term that I should be using, please let me know.)
I understand how to control privacy within a single file. For example, I might have a file Foo.rakumod with the following contents:
unit module Foo;
sub private($priv) { #`[do internal stuff] }
our sub public($input) is export { #`[ code that calls &private ] }
With this setup, &public is part of my library's public API, but &private isn't – I can call it within Foo, but my users cannot.
How do I maintain this separation if &private gets large enough that I want to split it off into its own file? If I move &private into Bar.rakumod, then I will need to give it our (i.e., package) scope and export it from the Bar module in order to be able to use it from Foo. But doing so in the same way I exported &public from Foo would result in users of my library being able to use Foo and call &private – exactly the outcome I am trying to avoid. How do I maintain &private's privacy?
(I looked into enforcing privacy by listing Foo as a module that my distribution provides in my META6.json file. But from the documentation, my understanding is that provides controls which modules package managers like zef install by default but does not actually control the privacy of the code. Is that correct?)
[EDIT: The first few responses I've gotten make me wonder whether I am running into something of an XY problem. I thought I was asking about something in the "easy things should be easy" category. I'm coming at the issue of enforcing API boundaries from a Rust background, where the common practice is to make modules public within a crate (or just to their parent module) – so that was the X I asked about. But if there's a better/different way to enforce API boundaries in Raku, I'd also be interested in that solution (since that's the Y I really care about)]
I will need to give it our (i.e., package) scope and export it from the Bar module
The first step is not necessary. The export mechanism works just as well on lexically scoped subs too, and means they are only available to modules that import them. Since there is no implicit re-export, the module user would have to explicitly use the module containing the implementation details to have them in reach. (As an aside, personally, I pretty much never use our scope for subs in my modules, and rely entirely on exporting. However, I see why one might decide to make them available under a fully qualified name too.)
It's also possible to use export tags for the internal things (is export(:INTERNAL), and then use My::Module::Internals :INTERNAL) to provide an even stronger hint to the module user that they're voiding the warranty. At the end of the day, no matter what the language offers, somebody sufficiently determined to re-use internals will find a way (even if it's copy-paste from your module). Raku is, generally, designed with more of a focus on making it easy for folks to do the right thing than on making it impossible to do "wrong" things if they really want to, because sometimes that wrong thing is still less wrong than the alternatives.
Off the bat, there's very little you can't do, as long as you're in control of the meta-object protocol. Anything that's syntactically possible, you could in principle do using a specific kind of method or class declared through it. For instance, you could have a private-class which would be visible only to members of the same namespace (to the level that you would design). There's Metamodel::Trusting, which defines, for a particular entity, whom it trusts (please bear in mind that this is part of the implementation, not the spec, and thus subject to change).
A less scalable way would be to use trusts. The new, private modules would need to be classes and issue a trusts X for every class that should access them. That could include classes belonging to the same distribution... or not; that's up to you to decide. It's the Metamodel class above that supplies this trait, so using it directly might give you a greater level of control (with a lower level of programming).
There is no way to enforce this 100%, as others have said. Raku simply provides the user with too much flexibility for you to be able to perfectly hide implementation details externally while still sharing them between files internally.
However, you can get pretty close with a structure like the following:
# in Foo.rakumod
use Bar;
unit module Foo;
sub public($input) is export { #`[ code that calls &private ] }
# In Bar.rakumod
unit module Bar;
sub private($priv) is export is implementation-detail {
    unless callframe(1).code.?package.^name eq 'Foo' {
        die '&private is a private function. Please use the public API in Foo.'
    }
    #`[do internal stuff]
}
This function will work normally when called from a function declared in the mainline of Foo, but will throw an exception if called from elsewhere. (Of course, the user can catch the exception; if you want to prevent that, you could exit instead – but then a determined user could overwrite the &*EXIT handler! As I said, Raku gives users a lot of flexibility).
Unfortunately, the code above has a runtime cost and is fairly verbose. And, if you want to call &private from more locations, it would get even more verbose. So it is likely better to keep private functions in the same file the majority of the time – but this option exists for when the need arises.

How can I have a "private" Erlang module?

I prefer working with files that are less than 1000 lines long, so am thinking of breaking up some Erlang modules into more bite-sized pieces.
Is there a way of doing this without expanding the public API of my library?
What I mean is, any time there is a module, any user can do module:func_exported_from_the_module. The only way to really have something be private that I know of is to not export it from any module (and even then holes can be poked).
So if there is technically no way to accomplish what I'm looking for, is there a convention?
For example, there are no private methods in Python classes, but the convention is to use a leading _ in _my_private_method to mark it as private.
I accept that the answer may be, "no, you must have 4K LOC files."
The closest thing to a convention is to use EDoc tags, like @private and @hidden.
From the docs:
@hidden
Marks the function so that it will not appear in the documentation (even if "private" documentation is generated). Useful for debug/test functions, etc. The content can be used as a comment; it is ignored by EDoc.
@private
Marks the function as private (i.e., not part of the public interface), so that it will not appear in the normal documentation. (If "private" documentation is generated, the function will be included.) Only useful for exported functions, e.g. entry points for spawn. (Non-exported functions are always "private".) The content can be used as a comment; it is ignored by EDoc.
Please note that this answer started as a comment to @legoscia's answer.
Different visibilities for different methods is not currently supported.
The current convention, if you want to call it that way, is to have one (or several) 'facade' my_lib.erl module(s) that export the public API of your library/application. Calling any internal module of the library is playing with fire and should be avoided (call them at your own risk).
There are some very nice features in the BEAM VM that rely on being able to call exported functions from any module, such as
Callbacks (funs/anonymous funs), MFA, erlang:apply/3: The calling code does not need to know anything about the library, just that it's something that needs to be called
Behaviours such as gen_server need the previous point to work
Hot reloading: You can upgrade the bytecode of any module without stopping the VM. The code server inside the VM maintains at most two versions of the bytecode for any module, redirecting external calls (those with the Module:) to the most recent version and the internal calls to the current version. That's why you may see some ?MODULE: calls in long-running servers, to be able to upgrade the code
You could argue that these points would still be achievable with more fine-grained, BEAM-oriented visibility levels, true. But I don't think that would solve anything that isn't already solved with facade modules, and it would complicate other parts of the VM/code a great deal.
Bonus
Something similar applies to records and opaque types: records only exist at compile time, and opaque types only at Dialyzer time. Nothing stops you from accessing their internals anywhere, but you'll only find problems if you go that way:
You insert a new field in the record and suddenly all your {record_name, ...} pattern matches break
You use a library that returns an opaque_adt(); you know it's a list and use it as such. The library is upgraded to store the size of the list as well, so now opaque_adt() is a tuple() and chaos ensues
Only the functions specified in the -export attribute are visible to other modules, i.e. the "public" functions. All other functions are private. Only if you specify -compile(export_all) are all the functions in the module visible from outside, and using -compile(export_all) is not recommended.
I don't know of any existing convention for Erlang, but why not adopt the Python convention? Let's say that "library-private" functions are prefixed with an underscore. You'll need to quote function names with single quotes for that to work:
-module(bar).
-export(['_my_private_function'/0]).
'_my_private_function'() ->
    foo.
Then you can call it as:
> bar:'_my_private_function'().
foo
To me, that communicates clearly that I shouldn't be calling that function unless I know what I'm doing. (and probably not even then)

Finding the Pharo documentation for the compile & evaluate methods, etc. in the compiler class

I've got an embarrassingly simple question here. I'm a smalltalk newbie (I attempt to dabble with it every 5 years or so), and I've got Pharo 6.1 running. How do I go about finding the official standard library documentation? Especially for the compiler class? Things like the compile and evaluate methods? I don't see how to perform a search with the Help Browser, and the method comments in the compiler class are fairly terse and cryptic. I also don't see an obvious link to the standard library API documentation at: http://pharo.org/documentation. The books "Pharo by Example" and "Deep into Pharo" don't appear to cover that class either. I imagine the class is probably similar for Squeak and other smalltalks, so a link to their documentation for the compiler class could be helpful as well.
Thanks!
There are several classes that collaborate in the compilation of a method (or expression) and, given your interest in the subject, I'm tempted to stimulate you even further in their study and understanding.
Generally speaking, the main classes are the Scanner, the Parser, the Compiler and the Encoder. Depending on the dialect these may have slightly different names and implementations but the central idea remains the same.
The Scanner parses the stream of characters of the source code and produces a stream of tokens. These tokens are then parsed by the Parser, which transforms them into the nodes of the AST (Abstract Syntax Tree). Then the Compiler visits the nodes of the AST to analyze them semantically. Here all variable nodes are classified: method arguments, method temporaries, shared, block arguments, block temporaries, etc. It is during this analysis where all variables get bound in their corresponding scope. At this point the AST is no longer "abstract" as it has been annotated with binding information. Finally, the nodes are revisited to generate the literal frame and bytecodes of the compiled method.
Of course, there are lots of things I'm omitting from this summary (pragmas, block closures, etc.) but with these basic ideas in mind you should now be ready to debug a very simple example. For instance, start with
Object compile: 'm ^3'
to internalize the process.
After some stepping into and over, you will reach the first interesting piece of code, which is the method OpalCompiler >> #compile. If we remove the error-handling blocks, this method speaks for itself:
compile
    | cm |
    ast := self parse.
    self doSemanticAnalysis.
    self callPlugins.
    cm := ast generate: self compilationContext compiledMethodTrailer.
    ^cm
First we have the #parse message, where the parse nodes are created. Then we have the semantic analysis I mentioned above, and finally #generate: produces the encoding. You should debug each of these methods to understand the compilation process in depth. Given that you are dealing with a tree, be prepared to navigate through a lot of visitors.
Once you become familiar with the main ideas, you may want to try more elaborate (yet still simple) examples to see other objects entering the scene.
Here are some simple facts:
Evaluation in Smalltalk is available everywhere: in workspaces, in the Transcript, in Browsers, in inspectors, in the debugger, etc. Basically, if you are allowed to edit text, most likely you will also be allowed to evaluate it.
There are 4 evaluation commands
Do it (evaluates without showing the answer)
Print it (evaluates and prints the answer next to the expression)
Inspect it (evaluates and opens an inspector on the result)
Debug it (opens a debugger so you can evaluate your expression step by step).
Your expression can contain any literal (numbers, arrays, strings, characters, etc.)
17 "valid expression"
Your expression can contain any message.
3 + 4.
'Hello world' size.
1 bitShift: 28
Your expression can use any Global variable
Object new.
Smalltalk compiler
Your expression can reference self, super, true, nil, false.
SharedRandom globalGenerator next < 0.2 ifTrue: [nil] ifFalse: [self]
Your expression can use any variables declared in the context of the pane where you are writing. For example:
If you are writing in a class browser, self will be bound to the current class
If you are writing in an inspector, self is bound to the object under inspection. You can also use its instance variables in the expression.
If you are in the debugger, your expression can reference self, the instance variables, message arguments, temporaries, etc.
Finally, if you are in a workspace (a.k.a. Playground), you can use any temporaries there, which will be automatically created and remembered, without you having to declare them.
As near as I can tell, there is no API documentation for the Pharo standard library, like you find with other programming languages. This seems to be confirmed on the Pharo User's mailing list: http://forum.world.st/Essential-Documentation-td4916861.html
...there is a draft version of the ANSI standard available: http://wiki.squeak.org/squeak/uploads/172/standard_v1_9-indexed.pdf
...but that doesn't seem to cover the compiler class.

ClassImp preprocessor macro in ROOT - Is it really needed?

Do I really have to use the ClassImp macro to benefit from the automatic dictionary and streamer generation in ROOT? Some online tutorials and examples mention it, but I noticed that simply adding the ClassDef(MyClass, <ver>) macro to MyClass.h and processing it with rootcint/rootcling already generates most of that code.
I did look at Rtypes.h, where these macros are defined, but following preprocessor macros that call each other is not easy, so it would be nice if experts could confirm the role of ClassImp. I am specifically interested in recent versions of ROOT (>= 5.34).
Here is the answer I got on roottalk mailing list confirming that the usage of ClassImp is essentially outdated.
ClassImp is used to register in the TClass the name of the source file for the class. This was used in particular by THtml (which has now been deprecated in favor of Doxygen). So unless your code/framework needs to know the name of the source files, it is no longer necessary to have ClassImp.
ClassDef is necessary for classes inheriting from TObject (or from any class that has a ClassDef). In the other cases, it provides an accelerator that makes the I/O slightly faster (and thus is technically not compulsory in this case). It also assigns a version number to the schema layout, which simplifies writing schema evolution rules (on the other hand, there are other alternatives for assigning a version number to the schema layout).
What exactly are you trying to do? The ClassImp and ClassDef macros add members to the class that provide Run-Time Type Information and allow the class to be written to root files. If you are not interested in that, then don't bother with these macros.
I never use them.

Why do some Java methods in core libraries end with numbers?

It's common in a lot of classes in the JDK; here are just a few examples:
java.util.Properties
load0
store0
java.lang.Thread
start0
stop0
setPriority0
Usually they are private native methods (as in the Thread class), but sometimes they are just private (as in the Properties class).
I'm just curious if anybody knows if there is any history behind that.
I believe they are named like that because equivalent functions with the same names exist in the code, and the 0 suffix was chosen simply to distinguish the native/private helper functions from the public functions.
In java.util.Properties, both load/store and load0/store0 exist.
The 0 after the method name is there to distinguish between public and private methods that have the same name.
The public start method calls the private start0 method.
The functions whose names end with 0 are private, and those without the number are public.
You can check this in any of the libraries.
The use of zero suffixes on method names is just a convention to deal with cases where you have a public API method and a corresponding private method. In the Java SE libraries, this is commonly used for the native methods that provide the underlying functionality implemented by the classes. (You can see what is going on by looking at the OpenJDK source code.)
But your questions are:
Why do some Java methods in core libraries end with numbers?
Because someone thought it would be a good idea. It is not strictly necessary, since they typically could have overloaded the public methods instead. And since the zero-suffix methods are private, their naming should not be relevant beyond the class and its native implementation.
I'm just curious if anybody knows if there is any history behind that.
There is no mention of this convention in the original Java Style Guide. In fact, I think it predates Java. I vaguely recall seeing it in C libraries in 4.x BSD Unix. That was the mid-1980s. And I wouldn't be surprised if they adopted it from somewhere else.