I am learning the STM32F4.
Why do we have GPIO port output data register (GPIOx_ODR) when GPIO port bit set/reset register (GPIOx_BSRR) still exists?
The main reason is to have atomic access to GPIOs.
With the ODR register, if you want to change only one bit you have to use a read-modify-write sequence, which is not atomic and is slower. It is also unsafe if you control some GPIOs from different threads or from an interrupt handler, because a race condition can occur.
Writing to the BSRR register is atomic, and that has a clear advantage: with a single write you can set or clear certain output(s) without reading and modifying anything beforehand. It is faster and thread-safe.
The only disadvantage of BSRR appears if you want to toggle a bit without knowing its current state (to keep atomicity, you have to remember the current value yourself).
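For illustration, here is a minimal sketch of the difference in C, using CMSIS-style register access (PA5 is just an example pin, and in some older STM32F4 headers the 32-bit BSRR is split into BSRRL/BSRRH):

#include "stm32f4xx.h"   /* CMSIS device header, assumed available */

/* Non-atomic: read-modify-write on ODR. If an interrupt that also
   writes GPIOA fires between the read and the write-back, its change
   can be lost. */
void set_pa5_via_odr(void)
{
    GPIOA->ODR |= (1U << 5);          /* read, modify, write back */
}

/* Atomic: a single write to BSRR does the whole job, no read needed.
   Bits 0-15 set pins, bits 16-31 reset them. */
void set_pa5_via_bsrr(void)
{
    GPIOA->BSRR = (1U << 5);          /* set PA5   */
    GPIOA->BSRR = (1U << (5 + 16));   /* reset PA5 */
}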
Some Vulkan objects (eg vkPipelines, vkCommandBuffers) are able to be created/allocated in arrays (using size + pointer parameters). At a glance, this appears to be done to make it easier to code using common usage patterns. But in some cases (eg: when creating a C++ RAII wrapper), it's nicer to create them one at a time. It is, of course, simple to achieve this.
However, I'm wondering whether there are any significant downsides to doing this?
(I guess this may vary depending on the actual object type being created - but I didn't think it'd be a good idea to ask the same question for each object)
Assume that, in both cases, objects are likely to be created in a first-created-last-destroyed manner, and that - while the objects are individually created and destroyed - this will likely happen in a loop.
Also note:
vkCommandBuffers are also deallocated in arrays.
vkPipelines are destroyed individually.
Are there any reasons I should modify my RAII wrapper to allow for array-based creation/destruction? For example, will it save memory (significantly)? Will single-creation reduce performance?
Remember that vkPipeline creation does not require external synchronization. That means the implementation (driver) is going to handle its own mutexes and so forth internally. As such, it makes sense to avoid locking those internal mutexes whenever possible.
Also, pipeline creation is slow, so being able to batch it up and execute it on another thread is very useful.
Command buffer creation doesn't have either of these concerns. So there, you should feel free to allocate whatever CBs you need. However, multiple creation will never harm performance, and it may help it. So there's no reason to avoid it.
Vulkan is an API designed around modern graphics hardware. If you know you want to create a certain number of objects up front, you should use the batch functions if they exist, as the driver may be able to optimize creation/allocation, resulting in potentially better performance.
There may or may not be better performance in practice (depending on the driver and the type of your workload), but there is clearly the potential for better performance.
If you create one or ten command buffers in your application, then it does not matter.
In most cases the difference will be something like less than 5 %, so if you do not care about that (e.g. your application already runs at 500 FPS), then it does not matter.
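As a rough sketch of what the batched calls look like in the C API (the device, pool, create-infos and counts here are assumed to come from elsewhere in your code; this is not the RAII wrapper itself):

#include <vulkan/vulkan.h>

/* Allocate several command buffers and create several pipelines with a
   single call each, so the driver only takes its internal locks once. */
void create_batched(VkDevice device, VkCommandPool pool,
                    VkCommandBuffer *cbs, uint32_t cbCount,
                    const VkGraphicsPipelineCreateInfo *pipeInfos,
                    uint32_t pipeCount, VkPipeline *pipelines)
{
    VkCommandBufferAllocateInfo cbInfo = {
        .sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO,
        .commandPool = pool,
        .level = VK_COMMAND_BUFFER_LEVEL_PRIMARY,
        .commandBufferCount = cbCount,      /* one call, cbCount buffers */
    };
    vkAllocateCommandBuffers(device, &cbInfo, cbs);

    /* One call hands the driver several create-infos; it may compile
       the pipelines in parallel internally. */
    vkCreateGraphicsPipelines(device, VK_NULL_HANDLE,
                              pipeCount, pipeInfos, NULL, pipelines);

    /* Later, command buffers can also be freed as a batch with
       vkFreeCommandBuffers(device, pool, cbCount, cbs), while pipelines
       are destroyed one at a time with vkDestroyPipeline. */
}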
Then again, C++ is a versatile language. I think this is a non-problem. You would simply have a static member function or a class that constructs/initializes N objects (there's probably a pattern name for that).
Destruction may be trickier. You can again have a static member function that destroys N objects, but it would not be called automatically, and it is annoying to have null/husk objects around; the destructor would still be called on VK_NULL_HANDLE. There is also the problem that a pool reset or destruction would invalidate all the command buffer C++ objects, so there is probably no way to do it cleanly/simply.
I am working with contiki and trying to understand the terminology used in it.
I keep seeing certain words such as yield and stackless here and there on the internet. Some examples:
PROCESS_EVENT_CONTINUE : This event is sent by the kernel to a process that is waiting in a PROCESS_YIELD() statement.
PROCESS_YIELD(); // Wait for any event, equivalent to PROCESS_WAIT_EVENT().
PROCESS_WAIT_UNTIL(); // Wait for a given condition; may not yield the process.
Does yielding a process mean executing a process in Contiki? Also, what does it mean that Contiki is stackless?
Contiki uses so-called protothreads (a Contiki-specific term) to support multiple application-level processes in this OS. A protothread is just a fancy name for a programming abstraction known in computer science as a coroutine.
"Yield" in this context is short for "yield execution" (i.e. give up execution). It means "let other protothreads execute until an event appears that is addressed to the current protothread". Such events can be generated both by other protothreads and by interrupt handler functions. The "wait" macros are similar, but let the process yield and wait for specific events or conditions.
Contiki protothreads are stackless in the sense that they all share the same global execution stack, as opposed to "real" threads, which typically get their own stack space. As a consequence, the values of local variables are not preserved across yields in Contiki protothreads. For example, doing this is undefined behavior:
int i = 1;
PROCESS_YIELD();
printf("i=%d\n", i); // <- prints garbage
The traditional Contiki way to deal with this limitation is to declare all protothread-local variables as static:
static int i = 1;
PROCESS_YIELD();
printf("i=%d\n", i);
Another option is to use global variables, of course, but having a lot of global variables is bad programming style. The benefit of static variables declared inside protothread functions is that they are hidden from other functions (including other protothreads), even though at the low level they are allocated in the global static memory region.
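Putting it together, a minimal (hypothetical) Contiki process might look like this; the process name and the printout are made up, but the macros are the standard ones:

#include "contiki.h"
#include <stdio.h>

PROCESS(example_process, "Example process");
AUTOSTART_PROCESSES(&example_process);

PROCESS_THREAD(example_process, ev, data)
{
  static int count = 0;        /* static: survives the yields below */

  PROCESS_BEGIN();

  while(1) {
    PROCESS_YIELD();           /* give up execution until an event arrives */
    count++;
    printf("events so far: %d\n", count);
  }

  PROCESS_END();
}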
In the general case, to "Yield" in any OS means to synchronously invoke the scheduler (i.e. on demand rather then through interrupt) in order to give the opportunity of control to some other thread. In an RTOS such a feature would only affect threads of the same priority, and may be used in addition or instead of pre-emptive round-robin scheduling is required. Most RTOS do not have an explicit yield function, or in some cases (such as VxWorks) the same effect can be achieved using a zero length delay.
In a cooperative scheduler such as Contiki's, such a function is necessary to allow other threads to run in an otherwise non-blocking thread. A thread always has control until it calls a blocking or yielding function.
The cooperative nature of Contiki's scheduler means that it cannot be classified as an RTOS. It may be possible to achieve real-time behaviour suitable for a specific application, but only through careful and appropriate application design, rather than through intrinsic scheduler behaviour.
Question
Is there a way to programmatically set what FPGA variables I am reading from or writing to so that I can generalize my main simulation loop for every object that I want to run? The simulation loops for each object are identical except for which FPGA variables they read and write. Details follow.
Background
I have a code that uses LabVIEW OOP to define a bunch of things that I want to simulate. Each thing then has an update method that runs inside of a Timed Loop on an RT controller, takes a cluster of inputs, and returns a cluster of outputs. Some of these inputs come from an FPGA, and some of the outputs are passed back to the FPGA for some processing before being sent out to hardware.
My problem is that I have a separate simulation VI for every thing in my code, since different values are read from and returned to the FPGA for each thing. This is a pain for maintainability and seems to cry out for a better method. The problem is illustrated below. The important parts are the FPGA input and output nodes (change for every thing), and the input and output clusters for the update method (always the same).
Is there some way to define a generic main simulation VI and then programmatically (maybe with properties stored in my things) tell it which specific inputs and outputs to use from the FPGA?
If so then I think the obvious next step would be to make the main simulation loop a public method for my objects and just call that method for each object that I need to simulate.
Thanks!
The short answer is no. Unfortunately, once you get down to the hardware level with LabVIEW FPGA, things begin to get very static and rely on hard-coded IO access. This is typically handled exactly how you have presented your current approach. However, you may be able to encapsulate the IO access with a bit of trickery here.
Consider this: define the IO nodes on your diagram as interfaces and abstract them away behind a function (or VI or method, whichever term you prefer). You can implement this with either a dynamic VI call or an object-oriented approach.
The data types defined by your interface are well known, because you are pushing and pulling them from clusters that do not change.
By abstracting away the hardware IO with a method call you can then maintain a library of function calls that represent unique hardware access for every "thing" in your system. This will encapsulate changes to the hardware IO access within a piece of code dedicated to that job.
Using dynamic VI calls is ugly but you can use the properties of your "things" to dictate the path to the exact function you need to call for that thing's IO.
An object-oriented approach might have you create a small class hierarchy, with a root object that represents generic IO access (probably doing nothing) and children overriding a core method for reading or writing. This method would take your FPGA reference in and spit out the variables every hardware call returns (or vice versa for a write). Under the hood it takes care of deciding exactly which IO on the FPGA to access. Example below:
Keep in mind that this is nowhere near functional; I just wanted you to see what the diagram might look like. The approach will help you further generalize your main loop and allow you to embed it within a public call, as you had suggested.
This looks like an object mapping problem, which LabVIEW doesn't have great support for, but it can be done.
My code maps one cluster to another, assuming the control types are the same, using a two-column array as a "lookup".
Good day all,
I'm having a hell of a time figuring out which multithreading approach to utilize in my current work project. Since I've never written a multithreaded app in my life, this is all confusing and very overwhelming. Without further ado, here's my background story:
I've been assigned to take over work on a control application for a piece of test equipment in my company's R&D lab. The program has to be able to send and receive serial communications with three different devices semi-concurrently. The original program was written in VB 6 (no multithreading), and I had planned on just modding it to work with the newer products that need to be tested, until it posed a safety hazard when the UI locked up due to excessive serial communications during a test. This resulted in part of the tester hardware blowing up, so I decided to try rewriting the app in VB.Net, as I'm more comfortable with it to begin with and because I thought multithreading might help solve this problem.
My plan was to send commands to the other pieces of equipment from the main app thread and spin the receiving ends off into their own threads so that the main thread wouldn't lock up when timing is critical. However, I've yet to come to terms with my options. To add to my problems, I need to display the received communications in separate rich text boxes as they're received while the data from one particular device needs to be parsed by the main program, but only the text that results from the most current test (I need the text box to contain all received data though).
So far I've investigated delegates, handling the threads myself, and have just begun looking into BackgroundWorkers. I tried to use delegates earlier today but couldn't figure out a way to update the text boxes. Would I need to use a callback function to do this, since I can't do it in the body of the delegate function itself? The problem I see with handling threads myself is figuring out how to pass data back and forth between the thread and the rest of the program. BackgroundWorkers, as I said, I've just started investigating, so I'm not sure what to think about them yet.
I should also note that the plan was for the spawned threads to run continuously until somehow triggered to stop. Is this possible with any of the above options? Are there other options I haven't discovered yet?
Sorry for the length and the fact that I seem to ramble disjointed bits of info, but I'm on a tight deadline and stressed out to the point I can't think straight! Any advice/info/links is more than appreciated. I just need help weighing the options so I can pick a direction and move forward. Thanks to everybody who took the time to read this mess!
OK, serial ports, inter-thread comms, display stuff in GUI components like RichTextBox, need to parse incoming data quickly to decode the protocol and fire into a state-machine.
Are all three serial ports going to fire into the same 'processControl' state-machine?
If so, then you should probably do this by assembling event/data objects and queueing them to the state-machine run by one thread (see BlockingCollection). This is hugely safer and easier to understand/debug than locking up the state-engine with a mutex.
Define a 'comms' class to hold data and carry it around the system. It should have: a 'command' enum, so that threads that receive an instance can do the right thing by switching on it; an 'Event' member that can be set to whatever the state-engine uses; a 'bool loadChar(char inChar)' method that can have char-by-char data thrown at it and returns 'true' only when a complete, validated protocol unit has been assembled, checked and parsed into data members; a 'string textify()' method that dumps info about the contained data in text form; a general 'status' string to hold text; and an 'errorMess' string and Exception member.
You probably get the idea - this comms class can transport anything around the system. It's encapsulated, so a thread can use its data and methods without reference to any other comms instance - it does not need any locking. It can be queued to worker threads on a BlockingCollection and BeginInvoked to the GUI thread for displaying stuff.
In the serialPort objects, create a comms at startup and load a member with the serialPort instance. When the DataReceived event fires, get the data from the args a char at a time and feed it into comms.loadChar(). If the loadChar call returns true, queue the comms instance to the state-machine's input BlockingCollection, then immediately create another comms and start loading the new one with data. Just keep doing that forever - loading up comms instances with chars until they hold a validated protocol unit and queueing them to the state-machine. It may be that each serial port has its own protocol - OK, so you may need three comms descendants that override loadChar to correctly decode their own protocol.
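To make the loadChar idea concrete, here is a language-neutral sketch written in C (the original advice targets VB.NET; the struct name, the newline-terminated frame format and the demo input are all invented for illustration, and the BlockingCollection queueing is not shown):

#include <stdbool.h>
#include <stdio.h>

#define FRAME_MAX 128

/* Bare-bones 'comms' carrier: a buffer that is filled one char at a
   time and reports when a complete protocol unit has arrived. */
typedef struct {
    char buf[FRAME_MAX];
    int  len;
} comms;

/* Returns true only once a complete frame (here: newline-terminated)
   has been assembled; a real version would also validate and parse it. */
bool comms_load_char(comms *c, char in)
{
    if (c->len < FRAME_MAX - 1)
        c->buf[c->len++] = in;
    if (in != '\n')
        return false;               /* frame not complete yet */
    c->buf[c->len] = '\0';
    c->len = 0;                     /* ready for the next frame */
    return true;
}

int main(void)
{
    comms c = { .len = 0 };
    const char *rx = "TEMP=23\nTEMP=24\n";        /* pretend serial input */
    for (int i = 0; rx[i]; i++)
        if (comms_load_char(&c, rx[i]))
            printf("complete frame: %s", c.buf);  /* would be queued to the SM */
    return 0;
}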
In the state-machine thread, just take() comms objects from the input and do the state-engine thing, using the current state and the Event from the comms object. If the SM action routine decides to display something, BeginInvoke the comms to the GUI thread with the command set to 'displaySomeStuff'. When the GUI thread gets the comms, it can case-switch on the command to decide what to display/whatever.
Anyway, that's how I build all my process-control-type apps. Data flows around the system in 'comms' object instances; no comms object is ever operated on by more than one thread at a time. It's all done by message-passing on BlockingCollection (or similar) queues, or BeginInvoke() if going to the GUI thread.
The only locks are in the queues and so are encapsulated. There are no explicit locks at all. This means there can be no explicit deadlocks at all. I do get headaches, but I don't get lockups.
Oh - don't go near 'Thread.Join()'.
I have a numeric control (not an indicator) and a For Loop (limit 5).
I need to display [the current loop index + the value in the numeric control] in the numeric control. I'm new to LabVIEW. Any ideas on how to do this?
To write a value to a control, you need to create a local variable from it (right-click on the control's terminal on the block diagram and choose Create > Local Variable). To have it update each iteration of your For loop, put the local variable terminal inside the For loop and wire whatever you want displayed to that terminal. I'm not sure if this is going to be a good user interface design, but it's the answer to your question.
You can also use local variables to write to indicators from more than one place in your block diagram, and to read from indicators or controls. You can have more than one local variable terminal for any given control or indicator. Each local variable terminal is either for reading or writing - right-click on the local variable and choose Change to Read or Change to Write.
You should be careful about using local variables to pass data around, because program flow will no longer be controlled by data flow as it is when you pass data along a wire, and this could give you unpredictable behaviour (race conditions). Writing in one place and reading in multiple places is OK if the readers only need to know the current value at the time they execute, and so is writing to an indicator from multiple places where the indicator is only being used to display information to the user.
Is there any specific reason you need to update a control that often?
If it needs to be updated that regularly, it might be better to change it into an indicator.
If you update a control that often the user will have the feeling he's not in 'control'.
As mentioned already, you can use local variables and property nodes to set the value of your control or indicator. If you are trying to persist data, there is a much better way.
Google "functional global" or "labview 2 style global". The basic pattern is to use a while loop hard coded to stop after one iteration. Add an unitialized shift register. Add a case structure inside the loop. Use a control (boolean, enum, or string) to select on the case structure. Drop a control/indicator pair of the same datatype on your VI. Wire the indicator to the outter-output of the right shifter on the outside of the loop. Place the control INSIDE the loop in the "set" (usually true, non-default) case and wire it out of the case into the input of the right shifter. Go to the other empty case(s) and wire the inner-output of the left shifter through the cases to the terminal that connects to the inner-input.
Becuase you did not wire the outter-input of the left shifter it is an "unitialized shift register". It will persist data from the last call to the VI. This is like declaring a variable on the heap in a c function and having the last assigned value available to you at the next function call.
The three main benefits are preservation of data flow, thread safety, and performance. You get data flow by adding error IO to your VI. Thread safety is ensured because the VI's execution is guaranteed to be atomic. Performance is improved because LV data wants to live on a wire. Every time you write data to a control's property node, the LV runtime passes that data to the UI thread. I think there is a similar threading-based performance hit for locals too, but I'm not sure.
Per the first comment...
Copied here from the link for your benefit (yes you Mr Reader).
Problem:
I am considering using local or global variables; in what thread do variables execute?
Solution:
A common misunderstanding is that local and global variable operations execute in the UI thread, or require a thread swap to the UI thread - this is not true. The following describes the behavior of local and global variable write and read operations:
Write:
When you write to a local or global variable, LabVIEW does not switch to the user interface thread immediately. LabVIEW instead writes the value to the transfer buffer, which is a protected area of memory. The user interface updates at the next scheduled update time. It is possible to update a variable multiple times before a single thread switch or user interface update occurs. This is possible because variables operate solely in the execution thread.
Read:
When you read from a local or global variable, the operation occurs in the thread in which the VI executes; thus, you can be sure it does not occur in the UI thread by setting the execution system in the VI properties to "standard". There is a thread protection mechanism to make sure that no writer of the global is changing the data while you are reading it, but this is done via a mutex, not by going to the UI thread. However, if the global variable's panel is open, then a message is posted to redraw the global control, and the redraw will happen in the UI thread.
nekomatic is correct. The thread swap does not occur when you write to locals.
I agree with Ton. If you are changing the value of a control programmatically, then you should consider whether it should be an indicator, or maybe have a pseudo-indicator for the control.
It would be a good idea to post an isolated version of your code so we can understand what exactly is going on.
If you wanted to maintain dataflow to control the program flow, you could instead use a property node of the control and set the "Value" property.
To create the property node, right click on the control's terminal on the block diagram, and select Create » Property Node » Value. Now you can adhere to dataflow programming by using error wires to control the flow of the program.
Again, to re-emphasize Ton's point - If you are going to change the value of a control frequently, it might be worth changing it into an indicator instead.