Cannot use my gain block from the example. How to?

I am trying to make a custom block for my x310 and use it.
So far, I'm stuck at the example FPGA image compilation stage because I can't use the custom gain block.
I've followed the "Building an FPGA Image with OOT Blocks" tutorial step by step and successfully compiled and uploaded the image to my x310. Running uhd_usrp_probe returned the expected "0/Block#0" linked back and forth to the SEP4 block, but a warning from RFNOC:BLOCK_FACTORY states "could not find block with Noc-ID 0xb16, 0xffff".
I proceeded anyway and compiled a custom C++ program, based on the rfnoc_radio_loopback example, in order to make use of the gain block.
I added this line to the includes:
#include <rfnoc::gain::gain_block_control.hpp>
And these two lines after the radio_block_control instancing:
uhd::rfnoc::block_id_t gain_id(0, "Block", 0);
rfnoc::example::gain_block_control::sptr gain_ctrl = graph->get_block<rfnoc::example::gain_block_control>(gain_id);
The program compiles fine but running it returns a LookupError stating "This device doesn't have a block of type rfnoc::example::gain_block_control with ID: 0/Block#0"
I tend to believe the lookup error is clear but I don't know what to do instead.
I first tried to use the block with gnuradio-companion but was not able to generate the block at all. I am sure I am missing something but I have no idea what (apart from actual brain cells).
What is wrong with my C++?
Is it possible to generate a gain block in gnuradio-companion and if yes how?
Do you know of some tutorial that explains the different procedures on how to use a custom block?

There is an example application (rfnoc-example/apps/init_gain_block.cpp) that will test the functionality of the block for you. You can compile/run that to see if your block is working.
If you are seeing uhd_usrp_probe return 0/Block#0 instead of 0/Gain#0, then the .so file is not being picked up properly. The easiest way to test this is to LD_PRELOAD the DLL like this:
LD_PRELOAD=/path/to/librfnoc-example.so uhd_usrp_probe
What this will do is force a preload of the DLL containing the block controller (which will make sure it is registered). You should be seeing 0/Gain#0 as the block ID now.
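Once the block shows up as 0/Gain#0, the lookup in your C++ has to use that ID as well. A minimal sketch, assuming the tutorial's rfnoc::example namespace and generated header path (adjust both to whatever your OOT module actually generated):

#include <rfnoc/example/gain_block_control.hpp>

// "Gain", not "Block", once the controller library is being picked up:
uhd::rfnoc::block_id_t gain_id(0, "Gain", 0);
auto gain_ctrl = graph->get_block<rfnoc::example::gain_block_control>(gain_id);
gain_ctrl->set_gain_value(2); // assumed setter from the rfnoc-example controller; check your generated header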


How can I restart a GNU Radio flowgraph after the head block stops it?

I'm working with gnuradio 3.10.4 and usrp B200mini.
My flowgraph is very simple:
usrp source -> head block -> file sink
I want to store a fixed amount of data to file sink, then reconfigure usrp and start it to store again.
My Python program looks like:
tb.start()
tb.wait()
tb.lock()
...reconfigure usrp...
tb.unlock()
tb.start()
...
But the second time tb.start() is called, the file is created successfully but no data is written to it.
Can anyone tell me what's wrong with the program, or point me to any relevant documentation? I've found very little about this.
Thanks for your support.
When you're not sure how to get a block to do what you want, or if it can, it can be useful to consult the source code of the block, because GNU Radio blocks are not always thoroughly documented.
Starting from this wiki page on Head we can see all the code. It's C++, but fairly simple, and you can ignore all the setup and just look at the lines that seem to be doing the work.
In head_impl::work in head_impl.cc, we can see that the way the block works is counting the number of items it has passed in d_ncopied_items and comparing that against d_nitems (the value you provided). There's nothing here that restarts the count.
We have to also check the header file, head_impl.h, because code may be there too. And there we find what you need:
void reset() override { d_ncopied_items = 0; }
So, call reset() on the head block and it will forget about how many items it has already copied.
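A minimal sketch of the restart loop, assuming GR 3.10's Python bindings expose head's reset() and that head_block is the blocks.head instance from your flowgraph:

tb.start()
tb.wait()              # head has passed its N items and the graph stops

tb.lock()
# ...reconfigure the USRP source here...
head_block.reset()     # clear the copied-item count so head passes data again
tb.unlock()

tb.start()             # the second run now writes data to the file sink
tb.wait()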

AnyLogic: truncated class error during optimization

I should say up front that I am a beginner; here is my problem.
My model works perfectly when I run it as a normal simulation. Now I'm trying to optimize some parameters using the optimization experiment. I've followed all the steps of the official tutorial, but it doesn't work: I get "Exception during discrete event execution:
Truncated class file". The strange thing is that, looking at the console displaying the error, I see that some lines refer to an old version of my model, for example:
java.lang.ClassFormatError: Truncated class file
at coffe_maker.Main._m1_1_delayTime_xjal(Main.java:14070)
The current model's name is coffee_maker_v2_6, so I don't understand why I get this kind of error. Do you know if this is normal? What am I doing wrong?
The most likely cause is that you have Java code left in an 'unused' configuration of a Delay block's "Delay time" expression (e.g., it now has a static value but you had Java code in the now-switched-out dynamic value).
Unfortunately, AnyLogic sometimes still includes the switched-out code in the compiled class, and this can sometimes cause strange runtime errors such as that one.
If this does look to be the case, temporarily switch to the offending switched-out configuration and delete it before switching back to the correct one.
I have resolved it: the problem was that, in every delay block of my model, the delay time was linked to a database reference (of type code). I am now typing the probability distributions directly into the delay blocks, and the optimization works.

Gnuradio software source block

I'm currently trying to do some real-time signal processing and I would like to use GNU Radio. I will be processing multiple channels of EEG which come in through a custom interface (namely "Lab Streaming Layer", LSL) in Python.
Now my question is whether there is an existing block where you can "push" samples into the signal-processing graph at runtime. The only blocks I've found so far offer support for audio hardware, TCP streams and files.
You will have to write your own block; that can be done in Python or C++, whatever is better for your case.
The GNU Radio Guided Tutorials (you should really read them in order, from 1 to 5 at least) explain how to do that.
Because we all know people are lazy about reading, here's a rough preview of what you'll learn:
make a new out-of-tree module: gr_modtool newmod sensorinterface, then change into the newly generated directory: cd gr-sensorinterface
add a new source block: gr_modtool add eeg_sensor_source; the block type you'll want is "source", and you will be asked to fill in some block details
edit the generated source file (in lib/ or python/, depending on which language you chose):
add a proper IO signature: your output items will probably be floats
edit the central work function: add code to fetch new samples and copy them into the output_items buffer (see the sketch below)
The guided tutorials are really nice!
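A rough sketch of what the Python version of such a block might look like, assuming the samples arrive through a queue.Queue standing in for the LSL inlet (the names here are made up, not what gr_modtool generates):

import numpy as np
from gnuradio import gr

class eeg_sensor_source(gr.sync_block):
    """Pushes samples from an external interface into the flowgraph."""
    def __init__(self, sample_queue):
        gr.sync_block.__init__(self,
                               name='eeg_sensor_source',
                               in_sig=None,            # a source has no inputs
                               out_sig=[np.float32])   # one float output stream
        self.sample_queue = sample_queue

    def work(self, input_items, output_items):
        out = output_items[0]
        out[0] = self.sample_queue.get()  # block until a sample arrives
        n = 1
        # Copy as many further queued samples as fit into the output buffer.
        while n < len(out) and not self.sample_queue.empty():
            out[n] = self.sample_queue.get()
            n += 1
        return n  # number of items produced in this call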
The most flexible method is to write your own GNU Radio block, but there are several options for getting data into a flow graph without any custom blocks (naming things from the Python perspective):
gnuradio.blocks.message_source, which takes data from a gnuradio.gr.msg_queue.
You can use a gnuradio.blocks.file_descriptor_source where the file descriptor is one end of a pipe.
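For the pipe variant, a minimal sketch (the file name and the float item type are just for illustration):

import os
import numpy as np
from gnuradio import gr, blocks

read_fd, write_fd = os.pipe()   # GNU Radio reads one end, your code writes the other

tb = gr.top_block()
src = blocks.file_descriptor_source(gr.sizeof_float, read_fd)
snk = blocks.file_sink(gr.sizeof_float, 'samples.bin')
tb.connect(src, snk)
tb.start()

# Elsewhere (e.g. in your LSL callback), push raw float32 bytes into the pipe:
os.write(write_fd, np.array([0.1, 0.2, 0.3], dtype=np.float32).tobytes())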

How to find the size of a reg in verilog?

I was wondering if there is a way to compute the size of a reg in Verilog. I researched it quite a bit and found $size(a), but it's SystemVerilog-only and won't work in my Verilog program.
Does anyone know an alternative?
As a side note, I'm also having some trouble with my test bench: when I update a value in the file, the change is not taken into account when I simulate. I've been told I might be using an old test bench, but the one I keep simulating is the only one available in this project.
EDIT:
To give you an idea of the problem: in my code there is a "start" signal, and when it is set to 1, the operation starts; otherwise it stays idle. I began writing the test bench with start=0, tested and simulated it, then edited the test bench by setting start to 1. But when I simulate, the start signal remains 0 in the waveform. I checked whether I was using another test bench, but this is the only test bench in the project.
Given that I was on a deadline, I worked on the code so that it would adapt to the "frozen" test bench. I am now getting all the results I want, but I wanted to test some other features of my code, so I created a new project and copy-pasted the code into new files (including the same test bench). But when I ran a simulation, the waveform displayed wrong results (even though I was using the exact same code in all modules and the test bench). Any idea why?
Any help would be appreciated :)
There is a standardised way to do this, but it requires you to use the VPI, which I don't think you get in ModelSim's student edition. In short, you have to write C code and dynamically link it to the simulator. In the C code, you can get object properties using routines such as vpi_get. Useful properties might be vpiSize (which is what you want), vpiLeftRange, vpiRightRange, and so on.
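As an illustration, here is a minimal user-defined system task that prints the width of its argument; a sketch following the IEEE 1364 VPI conventions, with $show_size as a made-up task name:

#include "vpi_user.h"

/* $show_size(sig) prints the bit width of the signal passed to it. */
static PLI_INT32 show_size_calltf(PLI_BYTE8 *user_data)
{
    (void)user_data;
    vpiHandle systf = vpi_handle(vpiSysTfCall, NULL);
    vpiHandle args  = vpi_iterate(vpiArgument, systf);
    vpiHandle sig   = vpi_scan(args);

    vpi_printf("%s is %d bits wide\n",
               vpi_get_str(vpiName, sig),
               vpi_get(vpiSize, sig));
    vpi_free_object(args);  /* release the argument iterator */
    return 0;
}

/* Registration: makes $show_size known to the simulator at startup. */
static void show_size_register(void)
{
    s_vpi_systf_data tf = {0};
    tf.type   = vpiSysTask;
    tf.tfname = "$show_size";
    tf.calltf = show_size_calltf;
    vpi_register_systf(&tf);
}

void (*vlog_startup_routines[])(void) = { show_size_register, 0 };

With Icarus you would build this with iverilog-vpi and load it via vvp's -m flag; other simulators have their own PLI loading options.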
Having said all that, Verilog is essentially a static language, and objects have to be declared with a static width using constant expressions. Having a run-time method to determine an object's size is therefore of pretty limited value (since you should already know it), and may not solve whatever problem you actually have. Your question would make more sense for VHDL (and SystemVerilog?), which are much more dynamic.
Note on Icarus: the developers have pushed lots of SystemVerilog stuff back into the main language. If you take advantage of this, you may find that your code is not portable.
Second part of your question: you need to be specific on what your problem actually is.

Is there a way to mix MonoTouch and Objective-C?

I'd like to know if there is a way to mix C# and Obj-C code in one project. Specifically, I'd like to use Cocos2D for my UI in Obj-C and call some MonoTouch C# library that does some computations and get some values back. Is there a way to do this? Or maybe the other way around, i.e. building in MonoTouch and calling Cocos2D functions?
Thanks.
The setup that you describe is possible, but the pipeline is not as smooth as it is when you do your entire project in MonoTouch. This is in fact how we bootstrapped MonoTouch: we took an existing Objective-C sample and we then replaced the bits one by one with managed code.
We dropped those samples as they bitrotted.
But you can still get this done: use mtouch's --xcode command line option to generate a sample program for you, and then copy the bits that you want from the generated template.m into your main.m. Customize the components that you want, and just start the XCode project from there.
During your development cycle, you will continue to use mtouch --xcode.
We actually did this as described.
See this page for a quick start, but note that the last code segment on that page is wrong because it omits the --xcode parameter:
http://monotouch.net/Documentation/XCode
What you have to do to embed your Mono EXE/DLL into an Objective-C program is compile your source with SharpDevelop, then run mtouch with these parameters:
/Developer/MonoTouch/usr/bin/mtouch --linksdkonly --xcode=output_dir MyMonoAssembly.exe
This only works with the full version of MonoTouch; the trial does not allow you to use the --xcode argument. The --linksdkonly argument is needed if you want mtouch to keep unreferenced classes in the compiled output; otherwise it strips unused code.
mtouch then compiles your assembly into native ARM code (file extension .s) and also generates an XCode template that loads the Mono runtime and your code inside the XCode/Obj-C program. You can use this template right away and include your Obj-C code, or extract the runtime-loading code from the main.m file and insert it into your existing XCode project. If you use an existing project, you also have to copy all the .exe/.dll/.s files from the xcode output directory that mtouch made.
Now you have the Mono runtime and your assembly loaded in an XCode project. To communicate with your assembly, you have to use the Mono embedding API (not part of MonoTouch, but Mono). These are C-style API calls. For a good introduction, see this page.
The Mono embedding API documentation might also be helpful.
What you have to do now in your Obj-C code is make embedding API calls. The steps involved are roughly: get the application domain, get the assembly, get the image of the assembly, locate the class you want to use, instantiate an object of that class, find the methods in the class, call methods on the object, encapsulate method arguments in C arrays and pass them to the method call, and get and extract the method return values.
There are examples for this on the embedding-api-doc-page above.
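A condensed sketch of those steps (the assembly, namespace, class and method names here are hypothetical; error checking omitted):

#include <mono/jit/jit.h>
#include <mono/metadata/assembly.h>
#include <mono/metadata/object.h>

/* Call MyNamespace.Calculator.Add(a, b) from MyMonoAssembly.exe. */
int call_csharp_add(int a, int b)
{
    MonoDomain   *domain   = mono_domain_get();  /* initialized by the mtouch template */
    MonoAssembly *assembly = mono_domain_assembly_open(domain, "MyMonoAssembly.exe");
    MonoImage    *image    = mono_assembly_get_image(assembly);

    MonoClass  *klass = mono_class_from_name(image, "MyNamespace", "Calculator");
    MonoObject *obj   = mono_object_new(domain, klass);
    mono_runtime_object_init(obj);               /* runs the default constructor */

    MonoMethod *method = mono_class_get_method_from_name(klass, "Add", 2);
    void *args[2] = { &a, &b };
    MonoObject *ret = mono_runtime_invoke(method, obj, args, NULL);

    return *(int *)mono_object_unbox(ret);       /* unbox the returned System.Int32 */
}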
You just have to be careful with memory consumption of your library, as the mono runtime takes some memory as well.
So this is the way from Obj-C to C#. If you want to make calls from C#/Mono into your Obj-C-program, you have to use the MonoTouch-bindings, which are described here.
You could also use pure C method calls from the embedding/P/Invoke API.
Hope this gets you started.
Over the weekend it emerged that someone has been porting Cocos2D to .NET, so you could also do the whole work on .NET:
http://github.com/city41/CocosNet
Cocos2D started as a Python project, that later got ported to Objective-C, and now there is an active effort to bring it to C#. It is not finished, but the author is accepting patches and might be a better way forward.
Calling Objective-C from MonoTouch definitely looks possible. See the Objective-C selector examples.
What library are you calling? Perhaps there's an Objective-C equivalent.