Encapsulation of a VHDL module in Xilinx ISE

I have a vhdl module called 'inner_module', with some input and output ports, e.g.
entity inner_module is
port (input1, input2 : in std_logic;
output1, output2 : out std_logic);
end inner_module;
and I want to include (encapsulate?) it in another module, called 'outer_module', that is a sort of interface to 'inner_module', such that I don't have to deal with all its details.
Suppose that 'outer_module' has input and output ports, as in
entity outer_module is
port(outer_input1: in std_logic;
outer_output1: out std_logic);
end outer_module;
which are processed and appropriately fed to inner_module within the architecture of outer_module. The same goes for the inner outputs, which are processed in order to evaluate outer_output1.
Let's say that signals input1 and output1 are meant to drive an external EVM, e.g. a DAC EVM, that is connected to my main EVM (Virtex-6).
After checking syntax, synthesizing, etc., I have to associate the ports with pins (with I/O pin planning), but the only ports that can be associated are the ones from the top module, and I don't have access to signals input1 and output1.
I can add input1 and output1 to the entity declaration of outer_module, but I'd like to "hide" the fact that I use these signals to drive the DAC EVM (it could be a lot of signals), and simply keep the interface with the previous entity declaration for outer_module. I'd like to associate the signals input1 and output1 with the correct pins, but without doing this "from the top module".
Is it possible? Any ideas or references on how to do that? Or do I always have to include all the signals to be associated with pins in the top module?

I can think of a possible solution to this, but I don't think it is a good idea. I think you are trying to avoid something that you should not. By not passing the "hidden" I/O from your lower-level block through the higher-level block, you are essentially asking for a global signal/port that you can access from anywhere. There is no physical reason why you should not be able to do this (i.e. you should be able to connect any signal in your design to an FPGA pin), but it is not how someone would expect your VHDL design to work. "No one" likes "global signals" or variables, because you lose the ability to trace where they come from and where they go.
When you look at your I/O of your top level design, you should be thinking of the I/O as the pins of your target device. Anything that talks with the outside world in your design needs to have a input or output in your top level design.
It is not uncommon for top-level designs to be very large for this reason - there is typically a lot of interconnect. SDRAM interfaces can quickly blow up the number of signals you have in your top level. Here are some things you can try to reduce the noise and clutter:
Utilize records to group signals of similar purpose/function when connecting between internal blocks. In your design, the inner_module could have an output port, which has a record type, that contains all the signals that you need to output to your DAC EVM in the top level. I would avoid using records in the port map of your top level design. This may cause confusion with the tools, but this could also be my superstition.
Utilize vectors, arrays (or arrays of arrays) in order to manage multi-dimensional data and/or busses. This can greatly reduce clutter. Instead of having bit0, bit1, bit2, you can have a signal (a vector) called bits and all the elements of the signal will be treated in the same way.
Put modules that talk to the physical/external world (i.e. an SDRAM interface) in the top level of your design (or one level down), instead of inside the block that needs to use the interface. This avoids needing to bring the external interface signals through (potentially) many layers of modules to get to where your interface is instantiated.
Utilize a good VHDL editor (I like Emacs with vhdl-mode), which can greatly reduce the number of copy/paste errors you run into by letting you copy an entity (port map) and paste it as an instantiation, as a list of signals, as a component, etc.

Use record types for your top level ports. Declare the records in a package that can be used at both ends of the connection (e.g. in the FPGA and the simulation model for your DAC).
You can hide the details of the actual signals in the record; if you need to update the record, the top level design needs no changes.
As a port can have only one direction (mode, either in, out or inout) it is common to use a pair of records, one containing all the input signals, another for the outputs.
At the outermost level you may need some experimentation to get the tools (UCF files, since you mention Xilinx) to correctly connect FPGA pins to record components...
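A minimal sketch of the record approach described above (the package name and the DAC signal names are my own illustrative assumptions, not from the original post):

```vhdl
-- Illustrative package; declare it once, use it at both ends of the link.
library ieee;
use ieee.std_logic_1164.all;

package dac_if_pkg is
  -- One record per direction, since a port has a single mode.
  type dac_out_t is record
    dac_clk  : std_logic;
    dac_data : std_logic_vector(11 downto 0);
  end record;
  type dac_in_t is record
    dac_busy : std_logic;
  end record;
end package;

-- inner_module then exposes the whole DAC interface as two ports:
library ieee;
use ieee.std_logic_1164.all;
use work.dac_if_pkg.all;

entity inner_module is
  port (input2   : in  std_logic;
        output2  : out std_logic;
        to_dac   : out dac_out_t;
        from_dac : in  dac_in_t);
end inner_module;
```

If the DAC interface later grows another signal, only the package and the modules that use it change; every intermediate port map that passes the record through stays the same.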

Related

Merge signal in flat sequence

I have three inputs to Merge Signals, arriving at different times. The output of Merge Signals appears to wait for all signals and then output them together. What I want is to have an output for every signal (on the current output) as soon as it is input.
For example: if I write 1 as the initial value and 5, 5, 5 in all three numeric controls, with a 3-second time delay, I will have 6, 11, and 16 in target 1, target 2, and target 3, and overall 16 on the current output. I don't want that to appear all at once on the current output. I want it to appear with the same time layout as it does in the targets.
Please see the attached photo.
Can anyone help me with that?
Thanks.
All nodes in LabVIEW fire when all their inputs arrive. This language uses synchronous data flow, not asynchronous (which is the behavior you were describing).
The output of Merge Signals is a single data structure that contains all the input signals — merged, like the name says. :-)
To get the behavior you want, you need some sort of asynchronous communication. In older versions of LabVIEW, I would tell you to create a queue refnum and go look at examples of a producer/consumer pattern.
But in LabVIEW 2016 and later, right-click on each of the tunnels coming out of your flat sequence and choose “Create>>Channel Writer...”. In the dialog that appears, choose the Messenger channel. Wire all the outputs of the new nodes together. This creates an asynchronous wire, which draws very differently from your regular wires. On the wire, right-click and choose “Create>>Channel Reader...”. Put the reader node inside a For Loop and wire a 3 to the N terminal. Now you have the behavior that as each block finishes, it will send its data to the loop.
Move the Write nodes inside the Flat Sequence if you want to guarantee the enqueue order. If you wait and do the Writes outside, you’ll sometimes get out-of-order data (i.e. when the data generation nodes happen to run quickly).
Side note: I (and most LabVIEW architects) would strongly recommend you avoid using sequence structures as much as possible. They’re a bad habit to get into — lots of writings online about their disadvantages.
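The queue-based producer/consumer idea translates outside LabVIEW too. Here is a rough Python analogy (my own illustration, not LabVIEW code): each "stage" enqueues its result the moment it finishes, and a consumer loop reads three items, so values arrive one at a time instead of all at once.

```python
import queue
import threading
import time

def stage(q, running_total, addend, delay):
    # Each stage finishes at a different time and enqueues immediately,
    # instead of waiting for every stage to complete (the Merge Signals behavior).
    time.sleep(delay)
    q.put(running_total + addend)

q = queue.Queue()
results = []
# Running totals 1, 6, 11 with addend 5 mirror the 6/11/16 example above.
for i, delay in enumerate([0.01, 0.02, 0.03]):
    threading.Thread(target=stage, args=(q, 1 + 5 * i, 5, delay)).start()

for _ in range(3):  # like a For Loop with N=3 reading the channel
    results.append(q.get())

print(results)
```

The reader blocks on `q.get()` just as a Channel Reader waits for the next message, so each value is handled as soon as its producer delivers it.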

Can we use GPIO_PinAFConfig function to make a pin as output on STM32L1?

I'm using an stm32l100rc board. I need to switch a pin between output and input several times, so can I use the GPIO_PinAFConfig function to do that? Or do I have to initialize the whole GPIO_InitTypeDef structure for it?
A GPIO can be configured as either an input, output, or one of possibly several alternate functions. As its name suggests, GPIO_PinAFConfig sets one of those alternate functions, so would not achieve your aim at all.
If switching between input and output using the Standard Peripheral Library, there will be some redundancy - register values that do not change, or which are mutually exclusive to input or output. If you need the switch to be as fast as possible (and we are talking tens or hundreds of nanoseconds here), then direct register access might bring significant savings by changing only those registers that actually need to change.

What is the input Range for the osmocom Sink?

I'm using a HackRF One device and its corresponding osmocom Sink block inside of gnuradio-companion. Because the input to this block is Complex (i.e. a pair of Floats), I could conceivably send it an enormously large value. At some point the osmocom Sink will hit a maximum value and stop driving the attached HackRF to output stronger signals.
I'm trying to figure out what that maximum value is.
I've looked through the documentation on a number of different sites, for both the HackRF One and the osmocom source, and can't find an answer. I tried looking through the source code itself, but couldn't see any clear indication, although I may have missed something.
http://sdr.osmocom.org/trac/wiki/GrOsmoSDR
https://github.com/osmocom/gr-osmosdr
I also thought of deriving the value empirically, but didn't trust my equipment to get a precise measure of when the block started to hit the rails.
Any ideas?
Thanks
Friedman
I'm using a HackRF One device and its corresponding osmocom Sink block inside of gnuradio-companion. Because the input to this block is Complex (i.e. a pair of Floats), I could conceivably send it an enormously large value.
No, the complex samples z must satisfy |Re{z}| ≤ 1 and |Im{z}| ≤ 1,
because the osmocom sink/the underlying drivers and devices map that -1 to +1 range to the range of the I and Q DAC values.
You're right, though, it's hard to measure empirically, because typically the output amplifiers go into nonlinearity close to the maximum DAC outputs, and on top of that, everything is frequency-dependent, so e.g. 0.5+j0.5 at 400 MHz doesn't necessarily produce the same electric field strength as 0.5+j0.5 at 1 GHz.
That's true for all non-calibrated SDR devices (which, aside from your typical multi-10k-Dollar Signal Generator, is everything, unless you calibrate for all frequencies of interest yourself).

Layered and Pipe-and-Filter

I'm a bit confused about which situations these patterns should be used in, because in some sense they seem similar to me.
I understand that Layered is used when a system is complex and can be divided by its hierarchy, so each layer has a function at a different level of the hierarchy, and uses the functions of the lower level, while at the same time exposing its function to the higher level.
On the other hand, Pipe-and-Filter is based on independent components that process data, and can be connected by pipes so they make a whole that executes the complete algorithm.
But if the hierarchy does not exist, does it all come down to the question of whether the order of the modules can be changed?
And an example that confuses me is a compiler. It is an example of pipe-and-filter architecture, but the order of some modules is relevant, if I'm not mistaken?
Some example to clarify things would be nice, to remove my confusion. Thanks in advance...
Maybe it is too late to answer but I will try anyway.
The main difference between the two architectural styles are the flow of data.
On one hand, for Pipe-and-Filter, the data are pushed from the first filter to the last one.
And they WILL be pushed; otherwise, the process will not be deemed successful.
For example, in a car manufacturing factory, each station is placed one after another.
The car is assembled from the first station to the last.
If nothing goes wrong, you get a complete car at the end.
And this is also true for the compiler example. You get the binary code from the last compilation stage.
On the other hand, Layered architecture dictates that the components are grouped in so-called layers.
Typically, the client (the user or component that accesses the system) can access the system only from the top-most layer. He also does not care how many layers the system has. He cares only about the outcome from the layer that he is accessing (which is the top-most one).
This is not the same as Pipe-and-Filter where the output comes from the last filter.
Also, as you said, the components in the same layer are using "services" from the lower layers.
However, not all services of the lower layer must be accessed.
Nor must the upper layer access the lower layer at all.
As long as the client gets what he wants, the system is said to work.
Like TCP/IP architecture, the user is using a web browser from application layer without any knowledge how the web browser or any underlying protocols work.
To your question, the "hierarchy" in layered architecture is just a logical model.
You can just say they are packages or some groups of components accessing each other in chain.
The key point here is that the results must be returned in chain from the last component back to the first one (where the client is accessing) too.
(In contrast to Pipe-and-Filter where the client gets the result from the last component.)
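The contrast can be sketched in a few lines of Python (a toy illustration with names of my own choosing): a pipeline pushes data forward through every filter in order, while a layered system is entered only at the top, and results return back up the call chain.

```python
# Pipe-and-Filter: data is pushed through every filter, in order;
# the client reads the result at the end of the pipe.
def pipeline(data, filters):
    for f in filters:            # order matters, like compiler passes
        data = f(data)
    return data

tokenize = lambda s: s.split()
count    = lambda words: len(words)
print(pipeline("a b c", [tokenize, count]))  # -> 3

# Layered: the client calls only the top layer; the result is
# returned back up through each layer to the caller.
def transport(payload):          # lower layer
    return f"tcp({payload})"

def application(msg):            # top-most layer, the only one the client sees
    return transport(f"http({msg})")

print(application("GET /"))      # -> tcp(http(GET /))
```

Note how swapping `tokenize` and `count` would break the pipeline (order is relevant, as in a compiler), while the layered client never even knows `transport` exists.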
1.) Layered Architecture is hierarchical architecture, it views the entire system as -
hierarchy of structures
The software system is decomposed into logical modules at different levels of hierarchy.
whereas
2.) Pipe and Filter is a Data-Flow architecture, it views the entire system as -
series of transformations on successive sets of data
where data and operations on it are independent of each other.

disambiguating HPCT artificial intelligence architecture

I am trying to construct a small application that will run on a robot with very limited sensory capabilities (NXT with gyroscope/ultrasonic/touch) and the actual AI implementation will be based on hierarchical perceptual control theory. I'm just looking for some guidance regarding the implementation as I'm confused when it comes to moving from theory to implementation.
The scenario
My candidate scenario will have 2 behaviors, one is to avoid obstacles, second is to drive in circular motion based on given diameter.
The problem
I've read several papers but could not determine how I should classify my virtual machines (layers of behavior?), how they should communicate with lower levels, and how to resolve internal conflicts.
This is the list of papers I've gone through to find my answers, but sadly could not:
pct book
paper on multi-legged robot using hpct
pct alternative perspective
and the following ideas are the results of my brainstorming:
The avoidance layer would be part of my 'sensation layer', because it only identifies certain values, like close objects (e.g. a specific range of ultrasonic sensor values). The second layer would be part of the 'configuration layer', as it would try to detect the pattern in which the robot is driving (straight line, random, circle, or even not moving at all), using the gyroscope and motor readings. The 'intensity layer' represents all raw sensor values, so it's not something to consider as part of the design.
The second idea is to have both layers as 'configuration', because they would be responding to direct sensor values from the 'intensity layer', and they would be represented in a mesh-like design where each layer can send its reference values to the lower layer that interfaces with the actuators.
My problem here is how conflicting behaviours would be handled (maneuvering around objects while keeping running in circles). Should it be similar to Subsumption, where certain layers get suppressed/inhibited through some sort of priority system? Forgive my short explanation, as I did not want to make this a lengthy question.
/Y
Here is an example of a robot which implements HPCT and addresses some of the issues relevant to your project, http://www.youtube.com/watch?v=xtYu53dKz2Q.
It is interesting to see a comparison of these two paradigms, as they both approach the field of AI at a similar level, that of embodied agents exhibiting simple behaviors. However, there are some fundamental differences between the two which means that any comparison will be biased towards one or the other depending upon the criteria chosen.
The main difference is biological plausibility. Subsumption architecture, although inspired by some aspects of biological systems, is not intended to be a theoretical representation of such systems. PCT, on the other hand, is exactly that: a theory of how living systems work.
As far as PCT is concerned then, the most important criterion is whether or not the paradigm is biologically plausible, and criteria such as accuracy and complexity are irrelevant.
The other main difference is that Subsumption concerns action selection whereas PCT concerns control of perceptions (control of output versus control of input), which makes any comparison on other criteria problematic.
I had a few specific comments about your dissertation on points that may need
clarification or may be typos.
"creatures will attempt to reach their ultimate goals through
alternating their behaviour" - do you mean altering?
"Each virtual machine's output or error signal is the reference signal of the machine below it" - A reference signal can be a function of one or more output signals from higher-level systems, so more strictly this would be, "Each virtual machine's output or error signal contributes to the reference signal of a machine at a lower level".
"The major difference here is that Subsumption does not incorporate the ideas of 'conflict' " - Well, it does as the purpose of prioritising the different layers, and sub-systems, is to avoid conflict. Conflict is implicit, as there is not a dedicated system to handle conflicts.
"'reorganization' which require considering the goals of other layers." This doesn't quite capture the meaning of reorganisation. Reorganisation happens when there is prolonged error in perceptual control systems, and is a process whereby the structure of the systems changes. So rather than just the reference signals changing the connections between systems or the gain of the systems will change.
"Design complexity: this is an essential property for both theories." Rather than an essential property, in the sense of being required, it is a characteristic, though it is an important property to consider with respect to the implementation or usability of a theory. Complexity, though, has no bearing on the validity of the theory. I would say that PCT is a very simple theory, though complexity arises in defining the transfer functions, but this applies to any theory of living systems.
"The following step was used to create avoidance behaviour:" Having multiple nodes for different speeds seems unnecessarily complex. With PCT it should only be necessary to have one such node, where the distance is controlled by varying the speed (which could be negative).
Section 4.2.1 "For example, the avoidance VM tries to respond directly to certain intensity values with specific error values." This doesn't sound like PCT at all. With PCT, systems never respond with specific error (or output) values, but change the output in order to bring the intensity (in this case) input in to line with the reference.
"Therefore, reorganisation is required to handle that conflicting behaviour." If there is conflict, reorganisation may be necessary if the current systems are not able to resolve it. However, the result of reorganisation may be a set of systems that are able to resolve the conflict. So, it can be possible to design systems that resolve conflict but do not require reorganisation. That is usually done with a higher-level control system, or set of systems, and should be possible in this case.
In this section there is no description of what the controlled variables are, which is of concern. I would suggest being clear about what are goal (variables) of each of the systems.
"Therefore, the designed behaviour is based on controlling reference values." If it is only reference values that are altered then I don't think it is accurate to describe this as 'reorganisation'. Such a node would better be described as a "conflict resolution" node, which should be a higher-level control system.
Figure 4.1. The links annotated as "error signals" are actually output signals. The error signals are the links between the comparator and the output.
"the robot never managed to recover from that state of trying to reorganise the reference values back and forth." I'd suggest the way to resolve this would be to have a system at a level above the conflicted systems that takes inputs from one or both of them. The variable it controls could simply be something like 'circular-motion-while-in-open-space', with its input a function of the avoidance system's perception, and a function of its output used as the reference for the circular-motion system. This may result in a low, or zero, reference value, essentially switching off that system and thus avoiding conflict, or interference. Remember that a reference signal may be a weighted function of a number of output signals. Those weights, or signals, could be negative, inhibiting the effect of a signal and resulting in suppression, in a similar way to the Subsumption architecture.
"In reality, HPCT cannot be implemented without the concept of reorganisation because conflict will occur regardless". As described above HPCT can be implemented without reorganisation.
"Looking back at the accuracy of this design, it is difficult to say that it can adapt." Provided the PCT system is designed with clear controlled variables in mind PCT is highly adaptive, or resistant to the effects of disturbances, which is the PCT way of describing adaption in the present context.
In general, it may just require clarification in the text, but as there is a lack of description of controlled variables in the model of the PCT implementation and that, it seems, some 'behavioural' modules used were common to both implementations it makes me wonder whether PCT feedback systems were actually used or whether it was just the concept of the hierarchical architecture that was being contrasted with that of the Subsumption paradigm.
I am happy to provide more detail of HPCT implementation though it looks like this response is somewhat overdue and you've gone beyond that stage.
Partial answer from RM of the CSGnet list:
https://listserv.illinois.edu/wa.cgi?A2=ind1312d&L=csgnet&T=0&P=1261
Forget about the levels. They are just suggestions and are of no use in building a working robot.
A far better reference for the kind of robot you want to develop is the CROWD program, which is documented at http://www.livingcontrolsystems.com/demos/tutor_pct.html.
The agents in the CROWD program do most of what you want your robot to do. So one way to approach the design is to try to implement the control systems in the CROWD programs using the sensors and outputs available for the NXT robot.
Approach the design of the robot by thinking about what perceptions should be controlled in order to produce the behavior you want to see the robot perform. So, for example, if one behavior you want to see is "avoidance" then think about what avoidance behavior is (I presume it is maintaining a goal distance from obstacles) and then think about what perception, if kept under control, would result in you seeing the robot maintain a fixed distance from objects. I suspect it would be the perception of the time delay between the sending and receiving of the ultrasound pulses. Since the robot is moving in two-space (I presume), there might have to be two pulse sensors in order to sense the 2-D location of objects.
There are potential conflicts between the control systems that you will need to build; for example, I think there could be conflicts between the system controlling for moving in a circular path and the system controlling for avoiding obstacles. The agents in the CROWD program have the same problem and sometimes get into dead-end conflicts. There are various ways to deal with conflicts of this kind; for example, you could have a higher-level system monitor the error in the two potentially conflicting systems and have it reduce the gain in one system or the other if the conflict (error) persists for some time.
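As a rough illustration of the ideas above (my own toy code, not from the CROWD program), a PCT system varies its output to bring a controlled perception toward a reference, and a higher-level signal can gate the reference of a potentially conflicting subsystem:

```python
def control_step(perception, reference, gain):
    """One step of a PCT loop: the output acts to reduce
    error = reference - perception (control of input, not output)."""
    error = reference - perception
    return gain * error

# Lower level: keep perceived obstacle distance at 50 (arbitrary units)
# by varying speed. The environment model here is deliberately trivial.
distance, speed = 20.0, 0.0
for _ in range(100):
    speed = control_step(distance, 50.0, 0.1)  # output = speed command
    distance += speed                          # environment feedback (toy model)

# Higher level: gate the "drive in a circle" reference by the avoidance
# error, suppressing circular motion while an obstacle is too close.
avoid_error = abs(50.0 - distance)
circle_reference = 1.0 if avoid_error < 1.0 else 0.0
print(round(distance), circle_reference)
```

The point of the sketch is that conflict is handled by a higher-level system adjusting a lower-level reference (here, possibly to zero), rather than by one system overriding another's outputs as in Subsumption.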