Why do GNU Radio complex blocks (not custom) have different itemsizes? - gnuradio

I am running GNU Radio 3.7.13.4 and working in GNU Radio Companion on Ubuntu 18.04.
I have a very simple flowgraph with a source of type complex (I've tried both a signal source and a constant source), which I connect to a transcendental block (of type complex) and then to a sink of type complex (I don't think anything after the transcendental block matters).
I have tried the transcendental block with functions "sin", "cos" and "exp". When I execute the flowgraph, I get the error:
ValueError: itemsize mismatch: sig_source_c0:0 using 8, transcendental0:0 using 16
The transcendental block is meant to take any cmath function, so I thought perhaps there were different function names for the complex and float cases, something like "ccos" or "csin", but I haven't seen any on the list of available functions.
I have seen similar questions from people who are creating custom blocks and OOT modules and seeing this problem. They have often used the wrong datatype (e.g. numpy.complex128 instead of numpy.complex64).
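For reference, the two item sizes in the error correspond to single- and double-precision complex samples; a quick check with numpy (assuming numpy is installed) shows the 8-byte vs 16-byte difference:

import numpy as np

# GNU Radio's "complex" stream type (gr_complex) is single-precision complex,
# i.e. numpy.complex64: two 32-bit floats = 8 bytes per item.
print(np.dtype(np.complex64).itemsize)   # 8
# A double-precision complex item (numpy.complex128) is 16 bytes,
# which matches the size reported for the transcendental block's port here.
print(np.dtype(np.complex128).itemsize)  # 16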
I am not using any custom blocks. This problem is with stock/shipped GR blocks.
Any help is appreciated!

Related

Problem with type FreeRV while adding new distribution

I'm trying to add a new discrete distribution to PyMC3 (a Wallenius non-central hypergeometric) by wrapping Agner Fog's C++ version of it (https://www.agner.org/random/).
I have successfully put the relevant functions in a C++ extension and added broadcasting so that it behaves like scipy's distributions. (For now, broadcasting is done in Python; I will later try the xtensor-python bindings for more performant vectorization in C++.)
I'm running into the following problem: when I instantiate an RV of the new distribution in a model context, I'm getting a "TypeError: an integer is required (got type FreeRV)" from where "value" is passed to the logp() function of the new distribution.
I understand that PyMC3 might need to connect RVs to the functions, but I find no way to cast them into something my new functions can work with.
Any hints on how to resolve this or general info on adding new distributions to PyMC3 or the internal workings of distributions would be extremely helpful.
Thanks in advance!
Jan
EDIT: I noticed that FreeRV inherits from Theano's TensorVariable, so I tried calling .eval(). This leads to another error along the lines of no input being connected. (I don't have the exact error message right now.)
One thing which puzzles me is why logp is called at instantiation of the variable when setting up the model ...
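For reference, the skeleton I am following is the usual PyMC3 3.x pattern for a custom discrete distribution, roughly like this (a minimal sketch with placeholder names and a toy logp, not my actual Wallenius code):

import pymc3 as pm
import theano.tensor as tt

class MyDiscrete(pm.Discrete):
    def __init__(self, p, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.p = tt.as_tensor_variable(p)
        self.mode = tt.as_tensor_variable(0)  # default test value

    def logp(self, value):
        # 'value' arrives as a FreeRV / TensorVariable, not a Python int,
        # so the log-probability has to be built from Theano tensor ops.
        return tt.switch(value, tt.log(self.p), tt.log1p(-self.p))

with pm.Model():
    x = MyDiscrete('x', p=0.5)  # the RV is created here; this is where logp gets called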

Semantics of GCC hot attribute

Assume I have a compilation unit consisting of three functions, A, B, and C. A is invoked once from a function external to the compilation unit (e.g. it's an entry point or callback); B is invoked many times by A (e.g. it's invoked in a tight loop); C is invoked once by each invocation of B (e.g. it's a library function).
The entire path through A (passing through B and C) is performance-critical, though the performance of A itself is non-critical (as most time is spent in B and C).
What is the minimal set of functions which one should annotate with __attribute__ ((hot)) to effect more aggressive optimization of this path? Assume we cannot use -fprofile-generate.
Equivalently: Does __attribute__ ((hot)) mean "optimize the body of this function", "optimize calls to this function", "optimize all descendant calls this function makes", or some combination thereof?
The GCC info page does not clearly address these questions.
Official documentation:
hot
The hot attribute on a function is used to inform the compiler that the function is a hot spot of the compiled program. The function is optimized more aggressively and on many targets it is placed into a special subsection of the text section so all hot functions appear close together, improving locality.
When profile feedback is available, via -fprofile-use, hot functions are automatically detected and this attribute is ignored.
The hot attribute on functions is not implemented in GCC versions earlier than 4.3.
The hot attribute on a label is used to inform the compiler that paths following the label are more likely than paths that are not so annotated. This attribute is used in cases where __builtin_expect cannot be used, for instance with computed goto or asm goto.
The hot attribute on labels is not implemented in GCC versions earlier than 4.8.
2007:
__attribute__((hot))
Hint that the marked function is "hot" and should be optimized more aggressively and/or placed near other "hot" functions (for cache locality).
Gilad Ben-Yossef:
As their name suggests, these function attributes are used to hint the compiler that the corresponding functions are called often in your code (hot) or seldom called (cold).
The compiler can then order the code in branches, such as if statements, to favour branches that call hot functions and disfavour branches that call cold ones, under the assumption that the branch that will be taken is more likely to call a hot function and less likely to call a cold one.
In addition, the compiler can choose to group together functions marked as hot in a special section of the generated binary, on the premise that, since data and instruction caches work based on locality (the relative distance of related code and data), putting all the often-called functions together will result in better caching of their code for the entire application.
Good candidates for the hot attribute are core functions which are called very often in your code base. Good candidates for the cold attribute are internal error handling functions which are called only in case of errors.
So, according to these sources, __attribute__ ((hot)) means:
optimize calls to this function
optimize the body of this function
put body of this function to .hot section (to group all hot code in one location)
After source code analysis we can say that the "hot" attribute is checked with lookup_attribute ("hot", DECL_ATTRIBUTES (current_function_decl)), and when it is present, the function's node->frequency is set to NODE_FREQUENCY_HOT (predict.c, compute_function_frequency()).
If the function's frequency is NODE_FREQUENCY_HOT:
If there is no profile information and no likely/unlikely annotations on branches, maybe_hot_frequency_p will return true for the function (== "...frequency FREQ is considered to be hot."). This makes maybe_hot_bb_p return true for all basic blocks (BB) in the function ("BB can be CPU intensive and should be optimized for maximal performance.") and maybe_hot_edge_p return true for all edges in the function. In turn, in non -Os modes, these BBs and edges, and also loops, will be optimized for speed rather than for size.
For all outbound call edges from this function, cgraph_maybe_hot_edge_p will return true ("Return true if the call can be hot."). This flag is used in IPA (ipa-inline.c, ipa-cp.c, ipa-inline-analysis.c) and influences inlining and cloning decisions.

Fortran: Whole array operations in Fixed Form Source

I am getting "Segmentation Fault" error over and over again, while using my subroutines (I have put all of them in MODULEs) with a code written in Fixed Form Source (during fortran77 days).
The original makefile (Linux platform) is a mess; it compiles only ".f" sources, so I had to change the extensions of my files from ".f90" to ".f" and have left the first 7 columns blank in my modules.
My modules make extensive use of whole-array operations and operations on array sections, and I declare the variables in F90 style; many of them are assumed-shape arrays.
My question: although the compiler compiles these modules (containing whole-array/array-section operations) without any warning or error, is this "segmentation fault" due to the use of modules with whole-array/array-section operations (kept in .f files) in a legacy code?
For example, I have written the following code in the "algebra.f" module:
function dyad_vv(v1,v2) !dyadic product of two vectors
    real*8, dimension(:)::v1,v2
    real*8, dimension(size(v1,1),size(v2,1))::dyad_vv
    integer i,j
    do i=1,size(v1,1)
        do j=1,size(v2,1)
            dyad_vv(i,j)=v1(i)*v2(j)
        end do
    end do
end function
!==================================
function dot_mv(m,v) !dot product of a matrix and a vector
    real*8, dimension(:,:)::m
    real*8, dimension(:)::v
    real*8, dimension(size(m,1))::dot_mv
    integer i,j
    do i=1,size(m,1)
        dot_mv(i)=0.0000D0
        do j=1,size(v,1)
            dot_mv(i)=dot_mv(i)+m(i,j)*v(j)
        end do
    end do
end function
!==================================
function dot_vm(v,m) !dot product of a vector and a matrix
    real*8, dimension(:)::v
    real*8, dimension(:,:)::m
    real*8, dimension(size(m,2))::dot_vm
    integer i,j
    do i=1,size(m,2)
        dot_vm(i)=0.0000D0
        do j=1,size(v,1)
            dot_vm(i)=dot_vm(i)+v(j)*m(j,i)
        end do
    end do
end function
To expand a little on my already over-long comment:
Segmentation faults in Fortran programs generally arise from either (a) attempting to access an array element outside the array bounds, or (b) mismatching procedure actual arguments and dummy arguments.
Fortunately, intelligent use of your compiler can help you spot both these situations. For (a) you need to switch on run-time array bounds checking, and for (b) you need to switch on compile-time subroutine interface checking. Your compiler manual will tell you what flags you need to set.
One of the advantages of modern Fortran, and in particular of modules, is that you get procedure interface checking for free, as it were; the compiler takes care, at compile time, of checking that dummy and actual arguments match.
So I don't think your problem stems directly from writing modern Fortran in fixed-source form. But I do think that writing modern Fortran in fixed-source form to avoid re-writing your makefile and to avoid upgrading some FORTRAN77 is a sufficiently perverse activity that you will find it painful in the short run, and regret it in the long run as you continue to develop degraded code.
Face up to it, refactor now.

Are local variables in Fortran 77 static or stack dynamic?

For my programming languages class, one homework problem asks:
Are local variables in FORTRAN static or stack dynamic? Are local variables that are INITIALIZED to a default value static or stack dynamic? Show me some code with an explanation to back up your answer. Hint: The easiest way to check this is to have your program test the history sensitivity of a subprogram. Look at what happens when you initialize the local variable to a value and when you don’t. You may need to call more than one subprogram to lock in your answer with confidence.
I wrote a few subroutines:
- create a variable
- print the variable
- initialize the variable to a value
- print the variable again
Each successive call to the subroutine prints out the same random value for the variable when it is uninitialized and then it prints out the initialized value.
What is this random value when the variable is uninitialized?
Does this mean Fortran uses the same memory location for each call to the subroutine or it dynamically creates space and initializes the variable randomly?
My second subroutine also creates a variable, but then calls the first subroutine. The result is the same, except the random value printed for the uninitialized variable is different. I am very confused. Please help!
Thank you so much.
In Fortran 77 & 90/95/2003, if you want the value of a variable local to a subroutine preserved across subroutine calls, you should declare it the "save" attribute, e.g., (using Fortran 90 style):
integer, save :: counter
OR
integer :: counter
save :: counter
Or, if you want the "save" behavior to apply to all variables, just include in the subroutine a simple
save
statement without any variables.
In Fortran 90, a variable initialization in a declaration,
integer :: counter = 0
automatically acquires the save attribute. I don't think that this was the case in Fortran 77.
This is one area in which experiments could be misleading -- they will tell you what a particular compiler does, but not necessarily what the Fortran 77 language standard requires, nor what other compilers did. Many old Fortran 77 compilers didn't place local variables on the stack; implicitly, all variables had the save attribute, without the programmer having used that declaration. This, for example, was the case with the popular DEC Fortran compilers. Most modern compilers place local variables without save on the stack, so it is common for legacy Fortran 77 programs that were only ever used with a compiler of the older type to malfunction with a modern compiler: programmers forgot to use the save attribute on variables that needed it, which originally caused no problem because all variables effectively had it, but now those variables "forget" their values across subroutine calls. This can be fixed by identifying the problem variables and adding save to them (work), by adding a save statement to every subroutine (less work), or by using a compiler option (e.g., -fno-automatic in gfortran) to restore the old behavior (easy).
It seems a peculiar question -- you won't find out about "Fortran 77" but about a particular compiler. And why use Fortran 77 instead of Fortran 95/2003? Does the prof. think Fortran stopped in 1977?
To amplify one point that @MSB made:
Fortran standards do not tell compiler-writers how to implement the standards, they are concerned with the behaviour of programs visible to the programmer. So the answer to the question is 'it all depends on the compiler'. And OP does not tell us which compiler(s) (s)he is using.
Furthermore, if you trawl back through the mists of time to examine all the FORTRAN77 compilers ever written, I am confident that you will find a wide variety of different implementations of the features you are interested in, many of them tied to quite esoteric hardware architectures.

Writing a TemplateLanguage/ViewEngine

Aside from getting any real work done, I have an itch. My itch is to write a view engine that closely mimics a template system from another language (Template Toolkit/Perl). This is one of those "if I had time" / "do it to learn something new" kinds of projects.
I've spent time looking at CoCo/R and ANTLR, and honestly, it makes my brain hurt, but some of CoCo/R is sinking in. Unfortunately, most of the examples are about creating a compiler that reads source code, but none seem to cover how to create a processor for templates.
Yes, those are the same thing, but I can't wrap my head around how to define the language for templates where most of the source is the html, rather than actual code being parsed and run.
Are there any good beginner resources out there for this kind of thing? I've taken a gander at Spark, which didn't appear to have the grammar in the repo.
Maybe that is overkill, and one could just text-replace the template syntax with C# in the file and compile it. http://msdn.microsoft.com/en-us/magazine/cc136756.aspx#S2
If you were in my shoes and weren't a language creating expert, where would you start?
The Spark grammar is implemented with a kind-of-fluent domain specific language.
It's declared in a few layers. The rules which recognize the html syntax are declared in MarkupGrammar.cs - those are based on grammar rules copied directly from the xml spec.
The markup rules refer to a limited subset of csharp syntax rules declared in CodeGrammar.cs - those are a subset because Spark only needs to recognize enough csharp to adjust single-quotes around strings to double-quotes, match curly braces, etc.
The individual rules themselves are of type ParseAction<TValue> delegate which accept a Position and return a ParseResult. The ParseResult is a simple class which contains the TValue data item parsed by the action and a new Position instance which has been advanced past the content which produced the TValue.
That isn't very useful on its own until you introduce a small number of operators, as described in Parsing expression grammar, which can combine single parse actions to build very detailed and robust expressions about the shape of different syntax constructs.
The technique of using a delegate as a parse action came from Luke H's blog post Monadic Parser Combinators using C# 3.0. I also wrote a post about Creating a Domain Specific Language for Parsing.
It's also entirely possible, if you like, to reference the Spark.dll assembly and inherit a class from the base CharGrammar to create an entirely new grammar for a particular syntax. It's probably the quickest way to start experimenting with this technique, and an example of that can be found in CharGrammarTester.cs.
Step 1. Use regular expressions (regexp substitution) to split your input template string into a token list (a rough Python sketch of this split follows the token list below); for example, split
hel<b>lo[if foo]bar is [bar].[else]baz[end]world</b>!
to
write('hel<b>lo')
if('foo')
write('bar is')
substitute('bar')
write('.')
else()
write('baz')
end()
write('world</b>!')
Step 2. Convert your token list to a syntax tree:
* Sequence
** Write
*** ('hel<b>lo')
** If
*** ('foo')
*** Sequence
**** Write
***** ('bar is')
**** Substitute
***** ('bar')
**** Write
***** ('.')
*** Write
**** ('baz')
** Write
*** ('world</b>!')
class Instruction {
}
class Write : Instruction {       // emit a piece of literal text
    string text;
}
class Substitute : Instruction {  // emit the value of a template variable
    string varname;
}
class Sequence : Instruction {    // execute a list of instructions in order
    Instruction[] items;
}
class If : Instruction {          // execute thenBranch or elseBranch depending on condition
    string condition;
    Instruction thenBranch;
    Instruction elseBranch;       // "else" is a C# keyword, so it can't be used as a field name
}
Step 3. Write a recursive function (called the interpreter), which can walk your tree and execute the instructions there.
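A minimal sketch of such a tree-walking interpreter, written here in Python for brevity (the tree mirrors the Instruction classes above, represented as tuples, and 'context' is assumed to hold the template variables):

def execute(node, context, out):
    kind = node[0]
    if kind == 'seq':                 # Sequence: run each child in order
        for item in node[1]:
            execute(item, context, out)
    elif kind == 'write':             # Write: emit literal text
        out.append(node[1])
    elif kind == 'substitute':        # Substitute: emit a variable's value
        out.append(str(context.get(node[1], '')))
    elif kind == 'if':                # If: pick the then- or else-branch
        branch = node[2] if context.get(node[1]) else node[3]
        if branch is not None:
            execute(branch, context, out)

tree = ('seq', [('write', 'hel<b>lo'),
                ('if', 'foo',
                 ('seq', [('write', 'bar is '), ('substitute', 'bar'), ('write', '.')]),
                 ('write', 'baz')),
                ('write', 'world</b>!')])
out = []
execute(tree, {'foo': True, 'bar': 42}, out)
print(''.join(out))   # hel<b>lobar is 42.world</b>!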
An alternative approach (instead of steps 1-3), if your language supports eval() (such as Perl, Python, Ruby): use a regexp substitution to convert the template to an eval()-able string in the host language, and run eval() to instantiate the template.
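For example, in Python the sample template above can be translated into a small piece of Python source and run with exec() (a toy sketch; only safe for trusted templates, since the conditions and variable names are executed as Python):

import re

def render(template, context):
    # Translate the template into Python source that builds an 'out' list,
    # then exec() it with the template variables as globals.
    lines, indent = ["out = []"], 0
    for part in re.split(r'(\[[^\]]*\])', template):
        if not part:
            continue
        pad = "    " * indent
        if part.startswith('[') and part.endswith(']'):
            body = part[1:-1].strip()
            if body.startswith('if '):
                lines.append(pad + "if " + body[3:] + ":")
                indent += 1
            elif body == 'else':
                lines.append("    " * (indent - 1) + "else:")
            elif body == 'end':
                indent -= 1
            else:
                lines.append(pad + "out.append(str(" + body + "))")
        else:
            lines.append(pad + "out.append(" + repr(part) + ")")
    env = dict(context)
    exec("\n".join(lines), env)
    return "".join(env["out"])

print(render('hel<b>lo[if foo]bar is [bar].[else]baz[end]world</b>!',
             {'foo': True, 'bar': 42}))   # hel<b>lobar is 42.world</b>!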
There are sooo many things to do. But it does work for one simple GET statement plus a test. That's a start.
http://github.com/claco/tt.net/
In the end, I already had too much time in ANTLR to give loudejs' method a go. I wanted to spend a little more time on the whole process rather than the parser/lexer. Maybe in version 2 I can have a go at the Spark way when my brain understands things a little more.
Vici Parser (formerly known as LazyParser.NET) is an open-source tokenizer/template parser/expression parser which can help you get started.
If it's not what you're looking for, then you may get some ideas by looking at the source code.