STM32 IAR no ITM trace output unless printf is included in the code - embedded

I have a small project to which I added my own custom debug functions for some extra functionality. They have been working great, and use the following method to send the data:
while(*bp)
ITM_SendChar(*bp++);
I finally got around to switching all the printf statements over to my own function, and all the output just stopped. After a little playing around, I figured out that as long as a single printf call is compiled in, no matter where, ITM_SendChar works right out of the gate.
It would seem that some functionality gets linked in when printf is used in the project, and that is what allows ITM_SendChar to work.
It is not a huge deal but I am fairly curious as to why this is. Is there perhaps another way to initialize the ITM (Instrumentation Trace Macrocell) system without having to stick in a dummy printf?

I came across the same problem and tried everything with what I believed was the correct configuration of the ITM registers, but I couldn't figure it out.
My solution, to avoid the memory-intensive printf, is to use putchar:
while(*bp)
putchar(*bp++);
It even works when I have just one putchar somewhere in the code that outputs a single character; everything else can then be output with ITM_SendChar().
I assume IAR automatically links in some configuration code as soon as putchar is included.
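For reference, here is a minimal sketch of initializing the ITM by hand using the standard CMSIS register names, so that ITM_SendChar works without pulling in printf or putchar. This is an assumption about what the library's start-up code effectively does; in many setups the debug probe must also enable the SWO clock and baud rate on its side.
#include "stm32f4xx.h"   /* CMSIS device header; adjust for your part (assumption) */
void itm_init(void)
{
    CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk; /* enable the trace subsystem */
    ITM->LAR = 0xC5ACCE55;                          /* unlock the ITM registers */
    ITM->TCR |= ITM_TCR_ITMENA_Msk;                 /* enable the ITM */
    ITM->TER |= 1UL;                                /* stimulus port 0, used by ITM_SendChar */
}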

How can I restart a GNU Radio flowgraph after the head block stops (hangs) it?

I'm working with gnuradio 3.10.4 and usrp B200mini.
My flowgraph is very simple:
usrp source -> head block -> file sink
I want to store a fixed amount of data to file sink, then reconfigure usrp and start it to store again.
My Python program looks like this:
tb.start()
tb.wait()
tb.lock()
...reconfigure usrp...
tb.unlock()
tb.start()
...
But the second time tb.start() is called, the file is created successfully but no data is written to it.
Can anyone tell me what's wrong with the program, or point me to relevant documentation? I have found very little about this.
Thanks for your support.
When you're not sure how to get a block to do what you want, or if it can, it can be useful to consult the source code of the block, because GNU Radio blocks are not always thoroughly documented.
Starting from this wiki page on Head we can see all the code. It's C++, but fairly simple, and you can ignore all the setup and just look at the lines that seem to be doing the work.
In head_impl::work in head_impl.cc, we can see that the way the block works is counting the number of items it has passed in d_ncopied_items and comparing that against d_nitems (the value you provided). There's nothing here that restarts the count.
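Paraphrased, the logic in head_impl::work is roughly the following (a sketch from memory, not the verbatim source):
// paraphrased sketch of head_impl::work (not the verbatim source)
int head_impl::work(int noutput_items,
                    gr_vector_const_void_star& input_items,
                    gr_vector_void_star& output_items)
{
    if (d_ncopied_items >= d_nitems)
        return WORK_DONE;                 // tells the scheduler this block is finished
    // copy at most the number of items still allowed through
    uint64_t n = std::min((uint64_t)noutput_items, d_nitems - d_ncopied_items);
    memcpy(output_items[0], input_items[0], n * d_itemsize);
    d_ncopied_items += n;                 // counts up, and is never reset here
    return (int)n;
}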
We have to also check the header file, head_impl.h, because code may be there too. And there we find what you need:
void reset() override { d_ncopied_items = 0; }
So, call reset() on the head block and it will forget about how many items it has already copied.
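In the flowgraph from the question, that might look like this (a sketch; it assumes the head block instance is stored in a variable named head, and that reset() is exposed through the Python bindings, as the override in the public header suggests):
tb.start()
tb.wait()                 # head has copied nitems items and the flowgraph has stopped
tb.lock()
# ...reconfigure usrp...
head.reset()              # clear the copied-item count so head passes data again
tb.unlock()
tb.start()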

Cannot use my gain block from the example. How to?

I am trying to make a custom block for my x310 and use it.
So far, I'm stuck at the example FPGA image compilation because I can't use the custom block gain.
I've followed step by step the "Building an FPGA Image with OOT Blocks" tutorial and successfully compiled and uploaded the image to my x310. A uhd_usrp_probe returned the expected "0/Block#0" linked back and forth to the SEP4 Block. But a warning from RFNOC:BLOCK_FACTORY states "could not find block with Noc-ID 0xb16, 0xffff"
I proceeded anyway and compiled a custom C++ program based on the rfnoc_radio_loopback example in order to make use of the gain block.
I added this line to the includes:
#include <rfnoc/example/gain_block_control.hpp>
And these two lines after the radio_block_control instancing:
uhd::rfnoc::block_id_t gain_id(0, "Block", 0);
rfnoc::example::gain_block_control::sptr gain_ctrl = graph->get_block<rfnoc::example::gain_block_control>(gain_id);
The program compiles fine but running it returns a LookupError stating "This device doesn't have a block of type rfnoc::example::gain_block_control with ID: 0/Block#0"
I tend to believe the lookup error is clear but I don't know what to do instead.
I first tried to use the block with gnuradio-companion but was not able to generate the block at all. I am sure I am missing something but I have no idea what (apart from actual brain cells).
What is wrong with my C++?
Is it possible to generate a gain block in gnuradio-companion and if yes how?
Do you know of some tutorial that explains the different procedures on how to use a custom block?
There is an example application (rfnoc-example/apps/init_gain_block.cpp) that will test the functionality of the block for you. You can compile/run that to see if your block is working.
If you are seeing uhd_usrp_probe return 0/Block#0 instead of 0/Gain#0, then the .so file is not being picked up properly. The easiest way to test this is to LD_PRELOAD the DLL like this:
LD_PRELOAD=/path/to/librfnoc-example.so uhd_usrp_probe
What this will do is force a preload of the DLL containing the block controller (which will make sure it is registered). You should be seeing 0/Gain#0 as the block ID now.
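Once the probe shows 0/Gain#0, the lookup in the C++ program from the question would presumably need to match that ID (a sketch based on the question's own code):
uhd::rfnoc::block_id_t gain_id(0, "Gain", 0);  // matches 0/Gain#0 instead of 0/Block#0
rfnoc::example::gain_block_control::sptr gain_ctrl =
    graph->get_block<rfnoc::example::gain_block_control>(gain_id);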

Can FsXaml be used in an F# interpreted script?

I just converted one of the FsXaml demo programs to an interpreted F# script so I could experiment with it and learn. It wouldn't run, and the interpreter gave me the following error message:
System.NotSupportedException: The invoked member is not supported in a dynamic assembly.
   at System.Reflection.Emit.InternalAssemblyBuilder.GetManifestResourceStream(String name)
   at FsXaml.InjectXaml.from(String file, Object root)
   at FsXaml.App.InitializeComponent()
   at FsXaml.App..ctor()
   at FSI_0002.main[a](a argv) in C:\Users\bobmc\OneDrive\FSharp\Learning\WPFExamples\FsXaml\demos\WpfSimpleMvvmApplication\WPFApp.fsx:line 104
   at .$FSI_0002.main#() in C:\Users\bobmc\OneDrive\FSharp\Learning\WPFExamples\FsXaml\demos\WpfSimpleMvvmApplication\WPFApp.fsx:line 109
Can I use the F# interpreter with FsXaml? Thanks to all for your help.
Unfortunately, WPF and scripts don't play well together.
The exception occurs within the WPF runtime itself - FsXaml.InjectXaml is using a XamlObjectWriter to populate the type with the contents from the XAML file. This type doesn't work if you're using a dynamic assembly (like FSI), which unfortunately means that FsXaml will likely never be able to work from FSI.
That being said, even if there were a way around this, it would be of very limited use. WPF also has restrictions that make it unsuitable for a scripting scenario, such as the "only one Application can ever be created within a given AppDomain" restriction. Because of that, once you close the "main" (first) window, you can never open another one. As such, I haven't prioritized trying to make this work in FSI.
I'd be happy to accept contributions if somebody has an idea of how to make FsXaml play more nicely within the context of FSI, but at this point, I don't see a good solution for that usage scenario.
Edit: FsXaml 3.1.6 now includes functionality to make this a lot easier. It works well, provided you don't close the main window, or you use dialogs. There is a demo application/script illustrating this.

How to find the size of a reg in verilog?

I was wondering if there is a way to compute the size of a reg in Verilog. I researched it quite a bit and found $size(a), but that is SystemVerilog-only and won't work in my Verilog program.
Does anyone know an alternative?
As a side note, I'm also having some trouble with my testbench: when I update a value in the file, the change is not taken into consideration when I simulate. I've been told I might be using an old testbench, but the one I keep simulating is the only one available in this project.
EDIT:
To give you an idea of the problem: my code has a "start" signal, and when it is set to 1 the operation starts; otherwise it stays idle. I began writing the testbench with start=0, tested and simulated it, then edited the testbench to set start to 1. But when I simulate, the start signal remains 0 in the waveform. I checked whether I was using another testbench, but this is the only one in the project.
Since I was on a deadline, I adapted the code to the "frozen" testbench, and I now get all the results I want. But I wanted to test some other features of my code, so I created a new project and copy-pasted the code into new files (including the same testbench). When I ran a simulation, the waveform displayed wrong results, even though I was using exactly the same code in all modules and the testbench. Any idea why?
Any help would be appreciated :)
There is a standardised way to do this, but it requires you to use the VPI, which I don't think you get in ModelSim's student edition. In short, you have to write C code and dynamically link it into the simulator. In the C code, you can get object properties using routines such as vpi_get. Useful properties might be vpiSize, which is what you want, vpiLeftRange, vpiRightRange, and so on.
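As an illustration, a minimal VPI system task might look like this (a sketch; $show_size is a made-up name, and how the compiled C library gets loaded varies by simulator):
#include "vpi_user.h"
/* calltf for a made-up $show_size task: prints the bit width of its argument */
static PLI_INT32 show_size_calltf(PLI_BYTE8 *user_data)
{
    (void)user_data;
    vpiHandle systf  = vpi_handle(vpiSysTfCall, NULL);
    vpiHandle args   = vpi_iterate(vpiArgument, systf);
    vpiHandle target = vpi_scan(args);              /* first task argument */
    vpi_printf("%s is %d bits wide\n",
               vpi_get_str(vpiName, target),
               vpi_get(vpiSize, target));           /* the vpiSize property */
    vpi_free_object(args);                          /* free the unfinished iterator */
    return 0;
}
static void show_size_register(void)
{
    s_vpi_systf_data tf = { vpiSysTask, 0, "$show_size", show_size_calltf, 0, 0, 0 };
    vpi_register_systf(&tf);
}
/* simulators pick up registration functions from this table */
void (*vlog_startup_routines[])(void) = { show_size_register, 0 };
In the Verilog source you would then call $show_size(my_reg);.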
Having said all that, Verilog is essentially a static language, and objects have to be declared with a static width using constant expressions. Having a run-time method to determine an object's size is therefore of pretty limited value (since you should already know it), and may not solve whatever problem you actually have. Your question would make more sense for VHDL (and SystemVerilog?), which are much more dynamic.
Note on Icarus: the developers have pushed lots of SystemVerilog features back into the main language. If you take advantage of this, you may find that your code is not portable.
Second part of your question: you need to be specific about what the problem actually is.

Scripting language with "edit and continue" or "hot swap" support? (Maybe possible in Lua?)

I am making my existing .NET application scriptable for non-programming users. I added Lua, and it works like a charm. Then I added debug functionality (pause/continue/step) via debug.sethook. That also works like a charm.
Now I realize that my application needs an edit-and-continue feature like Visual Studio has: you pause the execution, can edit the code, and then continue from the current state with the changes applied. This feature is very important to me. I thought this would be easy to do for scripting languages.
Everywhere I read that scripting languages can do this, but even after spending hours searching I haven't found a Lua implementation yet. It doesn't have to be Lua, but hot-swapping code in Lua would be my first choice.
How can users be offered the ability to pause and edit the script and then continue execution with the changes applied?
NOTE: It doesn't have to be Lua every scripting language would be okay
Update
@Schollii
Here is an example:
function doOnX()
    if getValue() == "200" then
        value = getCalculation()
        doSomething() -- many function calls, each can take about 2 s
        doSomething()
        doSomething()
        print(value)
        doX(value)
    end
end
doOnX()
Thank you for your suggestions. This is how it might work:
I will use https://github.com/frabert/NetLua. It's a very cool, well-written, 100% C# Lua interpreter. It generates an AST first and then executes it directly.
The parser needs to be modified. In Parser.cs, public Ast.Block ParseString(string Chunk) first generates a parse tree, and parseTree.tokens[i].locations contain the exact position of each token. The Irony.Parsing.ParseTree is then walked again and converted to a NetLua.Ast.Block, but the location information is lost. I will need to change that, so that later I know which statement is on which line.
Each statement from the AST is then executed directly via EvalBlock. Debug functionality (like I have in my C-binding Lua interpreter DynamicLua via debug.sethook) needs to be added. This can be done in LuaInterpreter.cs in internal static LuaArguments EvalBlock(...). Pause/continue/step functions should be no problem. I can now also add current-line highlighting, because each statement carries line position information.
When execution is paused and the code has been edited, the current LuaContext is saved. It contains all the variables. The last executed statement and its line are saved as well.
The code string is then parsed again into a new AST and executed, but all statements are skipped until the saved statement at the saved line is reached. The saved LuaContext is restored, and execution continues with all changes applied.
New variables could also be added after the last executed line, because a new NetLua.Ast.Assignment statement would just add a variable to the current LuaContext, and everything should work fine.
Will this work?
I think this is quite challenging and tricky to do right.
Probably the only way you could do it is to recompile the chunk of code completely. For a function this means the whole function, regardless of where the edit is, and then calling the function again. Clearly the function must be re-entrant, because its side effects (like having incremented a global or an upvalue) would otherwise have to be undone, which isn't possible. If it is not re-entrant it will still work, just not give the expected results (for example, if the function increments a global variable by 1, calling it again will result in the global variable having been increased by 2 once the function finally returns).
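In plain Lua, that recompile-and-call-again step might look like this (a sketch; source_text stands for the edited script text held by your host application and is hypothetical):
-- compile the edited chunk without running it yet
local chunk, err = load(source_text)
if not chunk then
    print("compile error: " .. err)
else
    chunk()   -- re-executes the chunk, redefining doOnX in the globals
    doOnX()   -- call the freshly compiled function again
end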
But finding the lines in the script where the chunk starts and ends would be tricky for a truly generic solution. For a specific solution you would have to post concrete examples of the scripts you want to run and of the lines you would want to edit. If the whole user script gets recompiled and rerun, this is not a problem, but the side effects are still an issue; examples would help there too.