Using MicroPython to initialize a UART bus, and I'm getting the error "missing 1 required positional argument" - uart

I have the following code I am trying to run on an ESP-WROOM-32:
from machine import UART

def do_uart_things():
    uart = UART.init(baudrate=9600, bits=8, parity=None, stop=1, rx=34, tx=35)

do_uart_things()
I am attempting to initialize a UART bus according to the documentation: https://docs.micropython.org/en/latest/library/machine.UART.html. The documentation suggests that only baudrate, bits, parity, and stop are required, yet I get the "missing 1 required positional argument" error. I cannot figure out why it is giving this error.
I am also assuming that the rx and tx parameters are automatically converted to the correct type of pin, as needed by the UART class, rather than me having to manually manage it.
I have managed to get slightly similar code working:
from machine import UART

def do_uart_things():
    uart = UART(1, 9600)
    uart.init(baudrate=9600, bits=8, parity=None, stop=1, rx=34, tx=35)
    # Pin numbers taken from ESP data sheet--they might not be correctly formatted

do_uart_things()
This has me thinking the documentation is unintentionally misleading: the leading example is not intended as "initialize it this way OR this way," but rather requires that both things be done.
Am I correct in thinking the latter code example is the correct way to use the micropython UART functionalities? I am also open to referrals to any good examples of UART and I2C usage in micropython, since I've found the documentation to be a little shy of great...

"UART objects can be created and initialised using:..." can be a little misleading. They meant that the object can only be created by using the constructor, however it can be initialised either with the constructor, or later, after the object has been created, but using the init method on it.
As you can see, the class constructor needs a first parameter id, whereas the init() method does not. So you can use the constructor
uart = UART(1, baudrate=9600, bits=8, parity=None, stop=1, rx=34, tx=35)
but you cannot use UART.init() as this is not a constructor but a method, so it needs to operate on an instance, not a class.
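
To make the two valid patterns explicit, here is a minimal sketch; the UART id, pin numbers, and settings are copied from the question and may need adjusting for your board:

from machine import UART

# Option 1: create and initialise in one step via the constructor
uart = UART(1, baudrate=9600, bits=8, parity=None, stop=1, rx=34, tx=35)

# Option 2: create first, then (re)initialise later with init()
uart = UART(1)
uart.init(baudrate=9600, bits=8, parity=None, stop=1, rx=34, tx=35)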


How does the JVM know the current method through the PC?

In the 8086, we know the next instruction to execute through CS:IP, where IP is the offset within the current code segment (CS).
However, I'm not sure how the JVM knows which instruction to execute.
The PC register in the JVM only indicates the offset within the current method, but how does it know which method it's in?
Thanks!
I notice that the bytecode for each method starts from offset 0.
So, if there are many methods in a class, how can I know which method the current frame is in?
I'm new to Java, so my question may be silly and my explanation may be wrong. Thanks for bearing with me!
OK, so I assume that you are asking about the JVM in relation to the Java Virtual Machine Specification (JVMS). The most directly relevant part of the spec says this:
2.5.1. The pc Register
The Java Virtual Machine can support many threads of execution at once
(JLS §17). Each Java Virtual Machine thread has its own pc (program
counter) register. At any point, each Java Virtual Machine thread is
executing the code of a single method, namely the current method
(§2.6) for that thread. If that method is not native, the pc register
contains the address of the Java Virtual Machine instruction currently
being executed. If the method currently being executed by the thread
is native, the value of the Java Virtual Machine's pc register is
undefined. The Java Virtual Machine's pc register is wide enough to
hold a returnAddress or a native pointer on the specific platform.
Note the sentence stating that the pc register "contains the address of the Java Virtual Machine instruction currently being executed". It says the address of the instruction being executed. It does not say the instruction's offset from the start of the method's code, as you seem to be assuming.
Furthermore, there is no obvious reference to a register holding a pointer to the current method. And the section describing the call stack doesn't mention any pointer to the current method in the stack frame.
Having said all of that, the JVM specification is really a behavioral specification that JVM implementations need to conform to. It doesn't directly mandate that the specified behavior must be implemented in any particular way.
So while it seems to state that the abstract JVM has a register called a PC that contains an "address", it doesn't state categorically what an address means in this context. For instance, it does not preclude the possibility that the interpreter represents the "address" in the PC as a tuple consisting of a method address and a bytecode offset within the method. Or something else. All that really matters is that the JVM implementation can somehow use the PC to get the bytecode instruction to be executed.
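
For illustration, an interpreter built this way might model the pc as a (method, offset) pair. This is purely a sketch with invented names, not how any particular JVM is actually implemented:

// Hypothetical interpreter state: the "pc" is a (method, offset) pair
// rather than a raw machine address.
final class MethodInfo {
    final byte[] code;               // the method's bytecode array
    MethodInfo(byte[] code) { this.code = code; }
}

final class InterpreterPc {
    MethodInfo currentMethod;        // which method is being executed
    int offset;                      // offset into that method's code array

    int fetchOpcode() {
        // Together, the pair is enough to locate the next instruction.
        return currentMethod.code[offset] & 0xFF;
    }
}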

How to use dexlib2 to instrument certain methods, especially allocating registers to add new instructions?

I'm using dexlib2 to programmatically instrument some methods in a dex file, for example, if I find some instructions like this:
invoke-virtual {v8, v9, v10}, Ljava/lang/Class;->getMethod(Ljava/lang/String;[Ljava/lang/Class;)Ljava/lang/reflect/Method;
I'd like to insert an instruction before it, so that at runtime I can learn the exact arguments of Class.getMethod().
However, I have run into the question of how to allocate registers for my inserted monitoring instructions.
I know of two ways, but either way has its problems:
I can use DexRewriter to increase the registerCount of the method (e.g. from .registers 6 to .registers 9), so that I have three extra registers to use. But first, this is restricted by the 16-register limit; second, when I increase the registerCount, the parameters shift to the last registers, so I have to rewrite every instruction in the method that uses a parameter, which is tedious.
Or I can reuse registers. This way I have to analyse the liveness of every register, but dexlib2 does not seem to have an existing API for constructing a CFG and def-use chains, which means I would have to write that myself.
Besides, I doubt whether this approach would yield enough available registers.
So am I understanding this problem correctly? Are there any existing tools/algorithms for this? Or any advice on a better way to do it?
Thanks.
A few points:
You're not limited to 16 registers in the method. Most instructions can only address the first 16 registers, but there are move instructions that you can use to swap values out with higher registers.
If you can get away with not having to allocate any new registers, your life will be much easier. One approach is to create a new static method containing your instrumentation logic, and then add a call to that static method with the appropriate values from the target method.
One approach I've seen used is to increase the register count, and then add a series of move instructions at the beginning of the method to move all the parameter registers back down to the same registers they were in before you incremented the register count (see the sketch below). This way you don't have to rewrite all the existing instructions, and it guarantees that the new registers at the end of the range are unused. The main annoyance with this approach is when the new registers are v16 or higher: you'll have to do some swaps before and after the point where they're used, to get the value down into a low register and then restore whatever was in that register afterward.
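
As an illustration of that prologue trick, in smali-style notation: suppose a method with two parameter words originally declared .registers 6, so the parameters lived in v4 and v5. The register numbers and types here are invented for the sketch:

.registers 8                 # was 6; two scratch registers gained at the end

# Prologue: the parameters now arrive in v6/v7 (p0/p1), so move them
# back down to v4/v5 where the original instructions expect them.
move-object/from16 v4, p0    # assuming p0 is an object reference
move/from16        v5, p1    # assuming p1 is a single-word primitive

# ... original method body, unchanged; v6 and v7 are now free
# for the injected monitoring code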
Your code may look like this:
if (it.opcode == Opcode.INVOKE_VIRTUAL || it.opcode == Opcode.INVOKE_STATIC) {
    logger.warn("${it.opcode.name} ${(it as DexBackedInstruction35c).reference}")
}
The format of Opcode.INVOKE_VIRTUAL is Format35c, so the instruction's type is DexBackedInstruction35c.

Why do we need a "receiver" class in the Command design pattern

I am learning command design pattern. As far as I know, four terms always associated with the command pattern are command, receiver, invoker and client.
A concrete command class has an execute() method and the invoker has a couple of commands. The invoker decides when to call the execute() method of a command.
When the execute() method is called, it calls a method of the receiver. Then, the receiver does the work.
I don't understand why we need the receiver class. We could do the work inside the execute() method; the receiver class seems redundant.
Thanks in advance.
Design patterns are used to solve software problems.
You have to understand the problem before trying to understand the solution (in this case, the Command pattern).
The problems to which the Command pattern applies arise in the context of an object A (client) invoking a method on an object B (receiver), so the receiver is part of the problem, not part of the solution.
The solution the Command pattern offers is to encapsulate the method invocation from A to B in an object (the command); in fact, this is close to the formal definition of the pattern. When you manage a request as an object, you are able to solve certain problems and implement certain features. (You will also need other pieces, such as the one called the invoker.)
This list can give you some good examples of what kinds of problems or features are suitable for the Command pattern.
Note: the Command pattern is not necessarily about decoupling; in fact, in the most common example implementations the client needs to create a new instance of the receiver, so we cannot talk about decoupling there.
Imagine a class that can do a couple of things, like a Duck: it can eat and quack. Duck is the receiver in this example. To apply the Command pattern here, you need to be able to wrap eating and quacking into commands. They should be separate classes deriving from a Command base type with an execute() method, because Duck can have only a single execute() method. So EatCommand.execute() calls Duck.eat() and QuackCommand.execute() calls Duck.quack().
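
A minimal sketch of that Duck example in Java (using an interface for Command; the names follow the description above):

// Receiver: knows how to do the actual work.
class Duck {
    void eat()   { System.out.println("Duck is eating"); }
    void quack() { System.out.println("Quack!"); }
}

// Command: wraps a single action on the receiver behind execute().
interface Command {
    void execute();
}

class EatCommand implements Command {
    private final Duck duck;
    EatCommand(Duck duck) { this.duck = duck; }
    public void execute() { duck.eat(); }    // delegate to the receiver
}

class QuackCommand implements Command {
    private final Duck duck;
    QuackCommand(Duck duck) { this.duck = duck; }
    public void execute() { duck.quack(); }
}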
The goal of the command pattern is to decouple the invoker from the receiver.
The receiver must do the work, not the command itself; the command just knows which receiver method to call, or the command can execute other commands. With the Command pattern, the invoker doesn't know what is being called except for the command.
So a command can be reused by many invokers to execute the same action on the receiver.
The short answer is: it depends. This is not based on my opinion alone. From GoF, Command pattern, Implementation (page 238):
"How intelligent should a command be? A command can have a wide range of abilities. At one extreme it merely defines a binding between a receiver and the actions that carry out the request. At the other extreme it implements everything itself without delegating to a receiver at all. The latter extreme is useful when you want to define commands that are independent of existing classes, when no suitable receiver exists, or when a command knows its receiver implicitly. For example, a command that creates another application window may be just as capable of creating the window as any other object."
So I do not think one should create a receiver class just for the sake of it, or because most examples say so. Create it only if there is a real need. One such case is when a class that acts as a receiver already exists separately. If you have to write the code that is going to be invoked/executed and see no reason to create a separate class for it, then I see no fault in putting that code in the command itself.
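
As a sketch of the "no receiver" extreme from the quote, a command that creates a window can simply do the work itself (the names here are invented):

interface Command { void execute(); }

// A "smart" command: it performs the work itself instead of
// delegating to a separate receiver object.
class CreateWindowCommand implements Command {
    public void execute() {
        javax.swing.JFrame frame = new javax.swing.JFrame("New window");
        frame.setSize(400, 300);
        frame.setVisible(true);
    }
}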

Why does the JVM have both `invokespecial` and `invokestatic` opcodes?

Both instructions use static rather than dynamic dispatch. It seems like the only substantial difference is that invokespecial will always have, as its first argument, an object that is an instance of the class that the dispatched method belongs to. However, invokespecial does not actually put the object there; the compiler is the one responsible for making that happen by emitting the appropriate sequence of stack operations before emitting invokespecial. So replacing invokespecial with invokestatic should not affect the way the runtime stack / heap gets manipulated -- though I expect that it will cause a VerifyError for violating the spec.
I'm curious about the possible reasons behind making two distinct instructions that do essentially the same thing. I took a look at the source of the OpenJDK interpreter, and it seems like invokespecial and invokestatic are handled almost identically. Does having two separate instructions help the JIT compiler better optimize code, or does it help the classfile verifier prove some safety properties more efficiently? Or is this just a quirk in the JVM's design?
Disclaimer: It is hard to tell for sure since I never read an explicit Oracle statement about this, but I pretty much think this is the reason:
When you look at Java byte code, you could ask the same question about other instructions. Why would the verifier stop you from pushing two ints on the stack and treating them as a single long right after? (Try it, it will stop you.) You could argue that by allowing this, you could express the same logic with a smaller instruction set. (To take this argument further: a single byte cannot express very many opcodes, so the Java byte code set should cut down wherever possible.)
Of course, in theory you would not need separate byte code instructions for pushing ints and longs to the stack, and you are right that you would not need two instructions, INVOKESPECIAL and INVOKESTATIC, in order to express method invocations. A method is uniquely identified by its method descriptor (name and raw argument types), and you cannot define both a static and a non-static method with an identical descriptor within the same class. And in order to validate the byte code, the Java runtime must check whether the target method is static anyway.
Remark: This contradicts the answer of v6ak. However, a non-static method's descriptor is not altered to include a reference to this.getClass(). The Java runtime could therefore always infer the appropriate method binding from the method descriptor for a hypothetical INVOKESMART instruction. See JVMS §4.3.3.
So much for the theory. However, the intentions expressed by the two invocation types are quite different. And remember that Java byte code is meant to be used by tools other than javac to create JVM applications as well. With byte code, these tools produce something that is closer to machine code than your Java source code, but it is still rather high-level: for example, byte code is verified, and it is automatically optimized when compiled to machine code. The byte code is an abstraction that intentionally contains some redundancy in order to make its meaning more explicit. And just as the Java language uses different names for similar things to make the language more readable, the byte code instruction set contains some redundancy as well. As another benefit, verification and byte code interpretation/compilation can be faster, since a method's invocation type does not need to be inferred but is explicitly stated in the byte code. This is desirable because verification, interpretation and compilation are done at runtime.
As a final anecdote, I should mention that a class's static initializer <clinit> was not flagged static before Java 5. In this context, the static invocation could also be inferred by the method's name but this would cause even more run time overhead.
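
To see the two opcodes side by side, compile a small class and disassemble it with javap -c. A sketch (the comments show the opcodes javac emits; the constant-pool details in real output will differ):

class Base {
    void greet() {}
}

class Example extends Base {
    static void helper() {}

    Example() {
        super();       // invokespecial Base.<init>:()V
        helper();      // invokestatic  Example.helper:()V
    }

    @Override
    void greet() {
        super.greet(); // invokespecial Base.greet:()V
    }
}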
Here are the definitions:
http://docs.oracle.com/javase/specs/jvms/se5.0/html/Instructions2.doc6.html#invokestatic
http://docs.oracle.com/javase/specs/jvms/se5.0/html/Instructions2.doc6.html#invokespecial
There are significant differences. Say we want to design an invokesmart instruction, which would choose smartly between invokestatic and invokespecial:
Note first that it would not be a problem to distinguish between static and virtual calls, since we can't have two methods with the same name, the same parameter types and the same return type, even if one is static and the other is virtual. The JVM does not allow that (for a strange reason). Thanks raphw for noticing that.
Now, what would invokesmart foo/Bar.baz(I)I mean? It may mean:
A static method call foo.Bar.baz that consumes an int from the operand stack and pushes another int. // (int) -> (int)
An instance method call foo.Bar.baz that consumes a foo.Bar and an int from the operand stack and pushes an int. // (foo.Bar, int) -> (int)
How would you choose between them? Both methods may exist.
We may try to solve it by requiring foo/Bar.baz(Lfoo/Bar;I) for the static call. However, we may have both public static int baz(Bar, int) and public int baz(int).
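In Java source, that ambiguous pair is perfectly legal, which is what breaks the folded-descriptor idea (class and method names taken from the answer):

package foo;

public class Bar {
    // descriptor (Lfoo/Bar;I)I, static
    public static int baz(Bar b, int i) { return i; }

    // descriptor (I)I, but with the receiver it also consumes (Bar, int)
    public int baz(int i) { return i; }
}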
We may say that it does not matter and simply disallow such situations. (I don't think that would be a good idea, but just imagine it.) What would it mean?
If the method is static, there are probably no additional restrictions. On the other hand, if the method is not static, there are some restrictions: "Finally, if the resolved method is protected (§4.6), and it is either a member of the current class or a member of a superclass of the current class, then the class of objectref must be either the current class or a subclass of the current class."
There are some further differences; see the note about ACC_SUPER.
It would mean that all the referenced classes must be loaded before bytecode verification. I hope this is not necessary now, but I am not 100% sure.
So, it would mean very inconsistent behavior.

STM32 programming tips and questions

I could not find any good document on the internet about STM32 programming. STM's own documents do not explain anything beyond register functions. I would greatly appreciate it if anyone could answer the following questions:
I noticed that in all the example programs STM provides, local variables for main() are always defined outside of the main() function (with occasional use of the static keyword). Is there any reason for that? Should I follow a similar practice? Should I avoid using local variables inside main()?
I have a global variable which is updated within the clock interrupt handler. I am using the same variable inside another function as a loop condition. Don't I need to access this variable using some form of atomic read operation? How can I know that a clock interrupt does not change its value in the middle of the function's execution? Do I need to disable the clock interrupt every time I use this variable inside a function? (That seems extremely inefficient to me, as I use it as a loop condition. I believe there should be better ways of doing it.)
Keil automatically inserts startup code written in assembly (i.e. startup_stm32f4xx.s). This startup code has the following import statements:
IMPORT SystemInit
IMPORT __main
.In "C", it makes sense. However, in C++ both main and system_init have different names (e.g. _int_main__void). How can this startup code can still work in C++ even without using "extern "C" " (I tried and it worked). How can the c++ linker (armcc --cpp) can associate these statements with the correct functions?
You can use local or global variables. Using locals in embedded systems carries a risk of your stack colliding with your data; with globals you don't have that problem. But this is true no matter where you are: embedded microcontroller, desktop, etc.
I would make a copy of the global in the foreground task that uses it.
volatile unsigned int myglobal;  /* shared with the interrupt handler */

void fun(void)
{
    unsigned int myg;

    myg = myglobal;  /* take one snapshot of the shared variable */
}
and then only use myg for the rest of the function. Basically you are taking a snapshot and using the snapshot. You would want to do the same thing if you were reading a register: if you want to do multiple things based on a sample of something, take one sample and make all your decisions on that one sample; otherwise the item can change between samples. If you are using one global to communicate back and forth with the interrupt handler, I would use two variables: one for foreground-to-interrupt, the other for interrupt-to-foreground. Yes, there are times when you need to carefully manage a shared resource like that; normally it has to do with needing to do more than one thing, for example if several items all need to change as a group before the handler may see them change, then you need to disable the interrupt until all the items have changed (see the sketch below). Here again there is nothing special about embedded microcontrollers; this is all basic stuff you would see on a desktop system with a full-blown operating system.
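
A minimal sketch of that grouped update, using the CMSIS __disable_irq()/__enable_irq() intrinsics; the variable names are invented:

#include "stm32f4xx.h"   /* CMSIS device header: provides __disable_irq()/__enable_irq() */

volatile unsigned int shared_a;  /* the handler must only ever see      */
volatile unsigned int shared_b;  /* these two change together as a pair */

void update_pair(unsigned int a, unsigned int b)
{
    __disable_irq();             /* keep the handler out mid-update */
    shared_a = a;
    shared_b = b;
    __enable_irq();
}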
Keil knows what they are doing; if they support C++ then at a system level they have this worked out. I don't use Keil; I use gcc and llvm for microcontrollers like this one.
Edit:
Here is an example of what I am talking about
https://github.com/dwelch67/stm32vld/tree/master/stm32f4d/blinker05
It is an stm32 example using timer-based interrupts; the interrupt handler modifies a variable shared with the foreground task. The foreground task takes a single snapshot of the shared variable (per loop) and, if need be, uses the snapshot more than once in the loop rather than the shared variable, which can change. This is C, not C++, I understand that, and I am using gcc and llvm, not Keil. (Note: llvm has a known, very old problem optimizing tight while loops; I don't know why they have no interest in fixing it, but llvm works for this example.)
Question 1: Local variables
The sample code provided by ST is not particularly efficient or elegant. It gets the job done, but sometimes there are no good reasons for the things they do.
In general, you always want your variables to have the smallest scope possible. If you only use a variable in one function, define it inside that function. Add the static keyword to local variables if and only if you need them to retain their value after the function returns.
In some embedded environments, like the PIC18 architecture with the C18 compiler, local variables are much more expensive (more program space, slower execution time) than globals. On the Cortex M3, that is not true, so you should feel free to use local variables. Check the assembly listing and see for yourself.
Question 2: Sharing variables between interrupts and the main loop
People have written entire chapters answering this group of questions. Whenever you share a variable between the main loop and an interrupt, you should definitely use the volatile keyword on it. Variables of 32 or fewer bits can be accessed atomically (unless they are misaligned).
If you need to access a larger variable, or two variables at the same time from the main loop, then you will have to disable the clock interrupt while you are accessing the variables. If your interrupt does not require precise timing, this will not be a problem. When you re-enable the interrupt, it will automatically fire if it needs to.
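
For the simple single-writer case from above, a sketch (the handler and variable names are invented; a 32-bit aligned read is atomic on the Cortex-M3):

#include <stdint.h>

volatile uint32_t tick_count;    /* written only by the interrupt */

void SysTick_Handler(void)       /* hypothetical ISR name for the sketch */
{
    tick_count++;                /* single writer: no atomicity problem */
}

void wait_ticks(uint32_t n)
{
    uint32_t start = tick_count; /* one atomic 32-bit snapshot */
    while ((tick_count - start) < n) {
        /* spin; volatile forces a fresh read each iteration */
    }
}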
Question 3: main function in C++
I'm not sure. You can use arm-none-eabi-nm (or whatever nm is called in your toolchain) on your object file to see what symbol name the C++ compiler assigns to main(). I would bet that C++ compilers refrain from mangling the main function for this exact reason.
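
For example (a hypothetical session; your file, addresses and symbol names will differ):

$ arm-none-eabi-nm main.o
00000000 T main          <- main is left unmangled
00000010 T _Z6helperv    <- an ordinary function, void helper(), gets mangled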
STM's sample code is not an exemplar of good coding practice; it is merely intended to demonstrate the use of their standard peripheral library (assuming those are the examples you are talking about). In some cases variables may be declared external to main() because they are accessed from an interrupt context (shared memory). It is also possible that it was done merely to allow the variables to be watched in the debugger from any context; but that is not a reason to copy the technique. My opinion of STM's example code is that it is generally pretty poor even as example code, let alone from a software engineering point of view.
In this case your clock-interrupt variable is atomic so long as it is 32 bits or less and you are not using read-modify-write semantics with multiple writers. You can safely have one writer and multiple readers regardless. This is true for this particular platform, but not necessarily universally; the answer may be different for 8- or 16-bit systems, or for multi-core systems, for example. The variable should be declared volatile in any case.
I am using C++ on STM32 with Keil, and there is no problem. I am not sure why you think that the C++ entry points are different, they are not here (Keil ARM-MDK v4.22a). The start-up code calls SystemInit() which initialises the PLL and memory timing for example, then calls __main() which performs global static initialisation then calls C++ constructors for global static objects before calling main(). If in doubt, step through the code in the debugger. It is important to note that __main() is not the main() function you write for your application, it is a wrapper with different behaviour for C and C++, but which ultimately calls your main() function.