I tested and understood SDO rx and tx with the LPC11Cxx demo. But this demo stack has only SDO functions and a driver API. I want to implement PDO for the same. What would be some sample code or implementation steps or functions?
I want to send 68 bytes of ADC data from a slave node to the master node at regular intervals. How can I do that?
For the above task, is SDO better than PDO? How many PDOs do I need to send 64 bytes of data? How can I set the PDO mapping and parameters? What is the difference between a master node and a slave node? How do I differentiate them in code?
I'm not sure about your example, but if you can send an SDO over the CAN bus, then you should be able to use a PDO, although it is more complicated.
The general steps are:
1. Define your PDO. You are creating a mapping between a PDO and one or more data objects in your node. For example, on my system, I created a Transmit PDO that sets the motor position and velocity (two objects), and the node responds with another PDO (a Receive PDO) that contains motor current, position, and status. This is the definition of the PDO.
2. To use your PDO, send out a PDO message with the COB-ID you defined in step 1. For me, I send PDO 0x201 with the position and velocity. The node will receive this and set the values you give to the object mapping you defined. Note the node does NOT act on the data yet.
3. After you've sent as many PDOs as you need (for example, I send the position/velocity PDO to 7 nodes on a bus to control 7 motors), you then send a SYNC. This causes the nodes to act on the PDO data you've sent, i.e. move the motors (see the sketch after these steps).
4. Each node will respond with a Transmit PDO to send back whatever you define. My nodes send position, status, and current.
5. Repeat as needed.
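To make steps 2 and 3 concrete, here is a rough sketch using the python-can library rather than the LPC11Cxx vendor stack. The SocketCAN channel name, the node ID, the 0x201 COB-ID and the 32-bit position/velocity layout are assumptions carried over from the example above, not something your stack defines.
import struct
import can
# Assumed SocketCAN interface; adjust for your hardware/driver
bus = can.interface.Bus(channel="can0", bustype="socketcan")
def send_position_velocity(position, velocity):
    # Step 2: RPDO1 of node 1 uses COB-ID 0x200 + node ID = 0x201.
    # Pack the two mapped objects (assumed 32-bit each) into the 8 data bytes.
    payload = struct.pack("<ii", position, velocity)
    bus.send(can.Message(arbitration_id=0x201, data=payload, is_extended_id=False))
def send_sync():
    # Step 3: the SYNC object uses COB-ID 0x080 and carries no data;
    # nodes act on the synchronous PDOs they buffered when they see it.
    bus.send(can.Message(arbitration_id=0x080, data=[], is_extended_id=False))
# One control cycle: queue PDOs for the node(s), then trigger them with a SYNC.
send_position_velocity(1000, 50)
send_sync()
On the LPC11Cxx side you would instead fill the PDO's CAN frame through the driver API, but the frame layout (COB-ID plus up to 8 mapped data bytes) is the same.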
Google "CANOpen momento dupin" for some examples in the document. You'll have to read the documentation on your nodes to understand how their PDOs are defined, or the chapter in the embedded CANopen book. I have some old code you can look at that was provided to me by a vendor. That might be a good source as well. Nodes don't have to support PDO mapping, but I think most do.
In SysML, when modeling a message, I'm having trouble understanding what element type should be used to define it, its elements, and a port that it flows through.
I'm assuming it is either:
a raw Block
the more specialized InterfaceBlock
Both can type a proxy port (formerly flow port, if I understand correctly), or type most other properties in other blocks as one builds up a full message interface or port system (either straight ports or nested ports). If the base message definition is a normal block, then when do you create a flow property that gets typed by that block, so that something can actually flow from one task to another through the port?
An Interface Block should occur somewhere in there, in order to type the port, right? Does that mean I use it to define a message directly, or does that depend on my port scheme (i.e. whether I nest ports and to what level)?
I guess this boils down to confusion over when you are defining a thing (i.e. a class/block) and when you are defining that this thing is a quantity that flows in your model (a flow of some kind - the message passes from one task or piece of hardware to another).
P.S. I'm using MagicDraw as the SysML tool, but I don't think that should impact the core answer.
The answer, as developed by my team:
Use a full port for the raw network interface, the physical layer.
Use a block to type the network interface, including:
Flow properties that represent physical items flowing out of the port, such as total electrical current (power).
Nested full port elements for physical nested ports, such as pins that make up a physical ethernet port. Type with another block.
Nested proxy port elements for logical/abstract data flows across the network interface, such as sockets/connections.
Type each logical connection (nested proxy port) with an Interface Block containing the following:
Flow properties that represent blocks of data, such as messages, which are sent as a group across the connection
Value properties that define characteristics of that connection, such as source and destination IP addresses and port numbers, comm loss and retry info, etc. Note some of this may be better served as metadata in tags as part of a separate stereotype.
Type the data flow properties of the connections with a ValueType whose attributes are the individual data elements of that data block (i.e. the message elements).
Create a new stereotype with a custom name, something like "Data Element", and add tags for any metadata that is needed about each data element, such as length (in bits or bytes), underlying type in the message, any unit or scaling factors, position in the message, etc.
You can even create a generic table at this point that lists every data element in a given message, or in all messages, with all the relevant Data Element tags as columns. It serves as the current specification for each message and each of its data elements, and allows much easier editing of all the information directly in that table.
Why use ValueTypes for data blocks that flow across Proxy Ports? Because then they will show up as Information Flow items instead of Item Flow items across a connector between two Proxy Ports on an Internal Block Diagram (IBD). I.e. when I send a physical item, typed by a Block, it is sent as an Item Flow, but when I send a logical item, such as data, it is typed by a ValueType, and sent as an Information Flow.
This is a starting point - we initially found issues with nesting the ValueType definitions, so we opted for a much flatter message definition that contained all the aspects of a message in a single ValueType, rather than nesting them. I'm sure there are ways around this, but how complicated do you want to get?
I have a question regarding the suggested implementation in the Binance documentation. The guidelines are available at the link:
How to manage a local order book correctly
If I need a constant stream of depth data, why do I need the first four steps they suggest? Why would I buffer the stream first and then take a snapshot, just to determine which data to throw away, and then continue listening to the stream? I don't understand the logical need for those steps, or whether they are even needed for my use case (which is tracking the real-time order book data).
If you take a snapshot and then start listening to the stream you may miss an event
between getting the snapshot and starting the stream. This'll mean your local order book will be invalid (and you definitely don't want this in a trading application).
The idea behind taking the snapshot after is that you are guaranteed to have all the events after your snapshot. A side effect of this approach is that you may also have some from before your snapshot. So you can discard the few (if any) you don't need based on their lastUpdateId.
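To make the ordering concrete, here is a rough Python sketch of that buffer-then-snapshot logic. It assumes the spot REST snapshot endpoint, the websockets and requests libraries, and the U/u (first/final update id) fields of the diff depth stream; it is a simplification, not production code.
import asyncio
import json
import requests
import websockets
SYMBOL = "btcusdt"
STREAM_URL = f"wss://stream.binance.com:9443/ws/{SYMBOL}@depth"
SNAPSHOT_URL = f"https://api.binance.com/api/v3/depth?symbol={SYMBOL.upper()}&limit=1000"
def apply_diff(book, event):
    # A quantity of zero removes the price level; anything else inserts/updates it.
    for side, key in (("bids", "b"), ("asks", "a")):
        for price, qty in event[key]:
            if float(qty) == 0:
                book[side].pop(price, None)
            else:
                book[side][price] = qty
async def build_order_book():
    async with websockets.connect(STREAM_URL) as ws:
        # 1. Start the stream first and buffer events (a fixed count here just to keep
        #    the sketch simple), so nothing falls into the gap between snapshot and stream.
        buffered = [json.loads(await ws.recv()) for _ in range(10)]
        # 2. Take the snapshot; every event still to come is guaranteed to be newer.
        snapshot = requests.get(SNAPSHOT_URL).json()
        last_update_id = snapshot["lastUpdateId"]
        book = {"bids": dict(snapshot["bids"]), "asks": dict(snapshot["asks"])}
        # 3. Discard buffered events already contained in the snapshot.
        for event in buffered:
            if event["u"] > last_update_id:
                apply_diff(book, event)
        # 4. Keep applying live events to maintain the local book.
        while True:
            apply_diff(book, json.loads(await ws.recv()))
asyncio.run(build_order_book())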
I'm not sure what language you're using to manage yours, but if you want a Java implementation let me know and I'll push mine to GitHub so you can use it.
Question
I want to pass data between applications, in a publish-subscribe manner. Data may be produced at a much higher rate than consumed and messages get lost, which is not a problem. Imagine a fast sensor and a slow sensor data processor. For that, I use redis pub/sub and wrote a class which acts as a subscriber, receives every message and puts that into a buffer. The buffer is overwritten when a new message comes in or nullified when the message is requested by the "real" function. So when I ask this class, I immediately get a response (hint that my function is slower than data comes in) or I have to wait (hint that my function is faster than the data).
This works pretty well when data comes in fast. But for data which arrives relatively seldom, let's say every five seconds, this does not work: imagine my consumer is launched slightly after the producer; the first message is lost and my consumer has to wait nearly five seconds before it can start working.
I think I have to solve this with Redis tools. Instead of pub/sub, I could simply use the GET/SET methods, thus putting the cache functionality into Redis directly. But then my consumer would have to poll the database instead of the event magic I have at the moment. Keys could look like "key:timestamp", and my consumer would now have to get key:* and compare the timestamps permanently, which I think would cause a lot of load. There is no natural opportunity to sleep, since although I don't care about dropped messages (there is nothing I can do about them), I do care about delay.
Does someone use Redis for a similar thing and could give me a hint about clever use of Redis tools and data structures?
edit
Ideally, my program flow would look like this:
start the program
retrieve key from Redis
tell Redis, "hey, notify me on changes of key".
launch something asynchronously, with a callback for new messages.
While writing this, an idea came up: the publisher not only publishes the message on topic key, but also does SET key message. This way, an application could initially GET the value and then SUBSCRIBE.
Good idea or not really?
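For what it's worth, here is a minimal redis-py sketch of that idea (the key and channel names are made up): the publisher does SET plus PUBLISH, and a late-joining consumer subscribes first and then reads the cached value, so it never has to wait for the next publish.
import redis
r = redis.Redis(host="localhost", port=6379, decode_responses=True)
# Publisher side: cache the latest value and notify subscribers.
def publish(value):
    r.set("sensor:latest", value)
    r.publish("sensor:updates", value)
# Consumer side: subscribe first, then read the cached value, so no update can
# slip in unnoticed between the GET and the SUBSCRIBE.
def consume():
    pubsub = r.pubsub()
    pubsub.subscribe("sensor:updates")
    latest = r.get("sensor:latest")
    if latest is not None:
        yield latest                     # initial value, even for a late joiner
    for message in pubsub.listen():      # blocks until new messages arrive
        if message["type"] == "message":
            yield message["data"]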
What I did after I got the answer below (the accepted one)
Keyspace notifications are really what I need here. Redis acts as the primary source of information, and my client subscribes to keyspace notifications, which notify subscribers about events affecting specific keys. In the asynchronous part of my client, I subscribe to notifications about my key of interest. Those notifications set a key_has_updates flag. When I need the value, I get it from Redis and unset the flag. With an unset flag, I know that there is no new value for that key on the server. Without keyspace notifications, this would have been the part where I needed to poll the server. The advantage is that I can use all sorts of data structures, not only the pub/sub mechanism, and a slow joiner that misses the first event is still able to get the initial value, which with pub/sub would have been lost.
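A rough redis-py sketch of that setup (the key name is hypothetical; keyspace notifications have to be enabled in the Redis configuration, here done via CONFIG SET):
import threading
import redis
r = redis.Redis(decode_responses=True)
r.config_set("notify-keyspace-events", "KEA")   # enable keyspace/keyevent notifications
key_has_updates = threading.Event()
def watch_key():
    pubsub = r.pubsub()
    # Keyspace notifications for db 0 arrive on __keyspace@0__:<key>;
    # the message data is the name of the event, e.g. "set".
    pubsub.subscribe("__keyspace@0__:sensor:latest")
    for message in pubsub.listen():
        if message["type"] == "message":
            key_has_updates.set()
threading.Thread(target=watch_key, daemon=True).start()
def get_if_updated():
    # Only hit Redis when the flag says there is a new value; no polling needed.
    if key_has_updates.is_set():
        key_has_updates.clear()
        return r.get("sensor:latest")
    return None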
One idea is to push the data to a list (LPUSH) and trim it (LTRIM), so it doesn't grow forever if there are no consumers. On the other end, the consumer would grab items from that list and process them. You can also use keyspace notifications, and be alerted each time an item is added to that queue.
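A short redis-py sketch of that capped-list idea (the list name and cap are arbitrary):
import redis
r = redis.Redis(decode_responses=True)
def produce(item, max_len=1000):
    pipe = r.pipeline()
    pipe.lpush("sensor:queue", item)            # newest items end up at the head
    pipe.ltrim("sensor:queue", 0, max_len - 1)  # cap the list so it cannot grow forever
    pipe.execute()
def consume_one():
    # RPOP takes the oldest remaining item (FIFO); returns None if the list is empty.
    return r.rpop("sensor:queue")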
I pass data between applications using two native Redis commands: RPUSH and BLPOP.
"blpop blocks the connection when there are no elements to pop from any of the given lists".
Data is passed in JSON format between applications, using a list as a queue.
The application that wants to send data (acting as publisher) does an RPUSH on a list.
The application that wants to receive data (acting as subscriber) does a BLPOP on the same list.
The code would be (in Perl):
Sender (we assume a hash reference is passed):
use Redis;
use JSON;

# Encode the hash in JSON format
my $json_text = encode_json($hash_ref);

# Connect to Redis and push onto the shared list
my $r = Redis->new(server => "127.0.0.1:6379");
$r->rpush("shared_queue", $json_text);
$r->quit;
Receiver (inside an infinite loop):
use Redis;
use JSON;

# Connect once, outside the loop
my $r = Redis->new(server => "127.0.0.1:6379");
while (1) {
    # blpop blocks until an element is available; returns (list name, value)
    my @elem = $r->blpop("shared_queue", 0);
    # Decode the JSON payload back into a hash reference
    my $hash_ref = decode_json($elem[1]);
    # do some stuff with $hash_ref
}
I find this way very useful for many reasons:
The elements are stored in a list, so temporarily disabling the receiver causes no information loss. When the receiver restarts, it can process all items in the list.
A high send rate can be handled with multiple instances of the receiver.
Multiple senders can send data to a single list. In this case a data collector can easily be implemented.
A receiver process that acts as a daemon can be monitored with specific tools (e.g. pm2).
Since Redis 5, there is a new data type called "Streams", which is an append-only data structure. Redis Streams can be used as a reliable message queue with both point-to-point and multicast communication using the consumer-group concept: Redis_Streams_MQ
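A rough redis-py sketch of that (the stream, group and consumer names are made up):
import redis
r = redis.Redis(decode_responses=True)
# Producer: append entries, capping the stream length so it cannot grow forever.
r.xadd("sensor:stream", {"value": "42"}, maxlen=10000, approximate=True)
# Consumer group: every group sees all entries, and entries within one group are
# load-balanced across its consumers.
try:
    r.xgroup_create("sensor:stream", "processors", id="0", mkstream=True)
except redis.ResponseError:
    pass  # the group already exists
entries = r.xreadgroup("processors", "worker-1", {"sensor:stream": ">"}, count=10, block=5000)
for stream_name, messages in entries:
    for message_id, fields in messages:
        print(message_id, fields)
        r.xack("sensor:stream", "processors", message_id)   # acknowledge after processing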
We have a situation where there are 2 modules, one having a publisher and the other a subscriber. The publisher is going to publish some samples using key attributes. Is it possible for the publisher to prevent the subscriber from reading certain samples? This case would arise when the module with the publisher is currently updating a sample which it does not want anybody else to read until it is done. Something like a mutex.
We are planning on using Opensplice DDS but please give your inputs even if they are not specific to Opensplice.
Thanks.
RTI Connext DDS supplies an option to coordinate writes (described in the documentation as a "coherent write"; see Section 6.3.10 and the PRESENTATION QoS):
myPublisher->begin_coherent_changes();
// writers in that publisher do their writes; data is captured at the publisher
myPublisher->end_coherent_changes(); /* all writes now leave */
Regards,
rip
If I understand your question properly, then there is no native DDS mechanism to achieve what you are looking for. You wrote:
This case would arise when the module with the publisher is currently updating the sample, which it does not want anybody else to read till it is done. Something like a mutex.
There is no such thing as a "global mutex" in DDS.
However, I suspect you can achieve your goal by adding some information to the data model and adjusting your application logic. For example, you could add an enumeration field to your data; let's say you add a field called status and it can take one of the values CALCULATING or READY.
On the publisher side, instead of "taking the mutex", your application could publish a sample with the status value set to CALCULATING. When the calculation is finished, the new sample can be written with the value of status set to READY.
On the subscriber side, you could use a QueryCondition with status=READY as its expression. Read or take actions should only be done through the QueryCondition, using read_w_condition() or take_w_condition(). Whenever the status is not equal to READY, the subscribing side will not see any samples. This approach takes advantage of the mechanism that newer samples overwrite older ones, assuming that your history depth is set to the default value of 1.
If this results in the behaviour that you are looking for, then there are two remaining disadvantages to this approach. First, the application logic gets somewhat polluted by the use of the status field and the QueryCondition. This could easily be hidden by an abstraction layer, though; it would even be possible to hide it behind some lock/unlock-like interface. The second disadvantage is the extra sample going over the wire when setting the status field to CALCULATING. But extra communication cannot be avoided anyway if you want to implement distributed mutex-like functionality. This is only an issue if your samples are pretty big and/or high-frequency; in that case, you might have to resort to a dedicated, small Topic for the single purpose of simulating the locking mechanism.
The PRESENTATION QoS is not specific to RTI Connext DDS; it is part of the OMG DDS specification. That said, the ability to write coherent changes on multiple DataWriters/Topics (as opposed to using a single DataWriter) is part of one of the optional profiles (the object model profile), so not all DDS implementations necessarily support it.
Gerardo
This question is an extension of the following:
OpenFlow Rule Metadata
I would like to have my question about metadata clarified.
Let us say I have an OpenFlow rule, as below:
Cookie=0x8000001, duration=228925.445s, table=17, n_packets=350, n_bytes=32424, priority=10,metadata=0xc000f30000000000/0xffffff0000000000 actions=goto_table:19
I wanted to understand the following:
1. Do we have a certain rule/algorithm to determine this metadata from a packet? Because the packet in OVS is actually switched based on matching metadata, is that correct? (At least according to the above flow rule.)
2. Since the packet itself does not carry the metadata, how exactly does a packet hit a flow that is matched against the metadata?
3. If I understood it correctly, the packets that traverse between the flow tables stay within the OVS application itself, i.e. they are handled at the OVS application level until the egress port has been determined. In that case, the metadata is handled at the OVS application level until the packet is sent via the egress port. Is this correct?
Finally, which module in ODL (OpenDaylight) is responsible for determining the metadata? I would like to understand from the code how exactly it is done.
The OpenFlow metadata field starts with a value of zero for every packet. Tables can then write to this field, and you can match on it in subsequent tables. It is only used to carry information from one table to the next, as explained in the OpenFlow specification:
Metadata: a maskable register that is used to carry information from one table to the next.
First of all, you can try Ryu instead; its code is easier to read and understand.
Then, I think metadata/instructions/actions belong to the processing of OVS forwarding, but they need to be attached to something, and that is the packet that OVS received. About the question "Do we have a certain rule/algorithm to determine this metadata from a packet?": I think the value of the metadata is determined by the controller, which means it depends on how you design your own network instance using some controller application (e.g. Ryu).
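This is not the ODL code the question asks about, but a minimal Ryu sketch of the mechanism itself: one table writes a metadata value for the packet, and the next table matches on it. The table numbers, the ingress/egress ports and the metadata value are made up.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
class MetadataExample(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]
    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        dp = ev.msg.datapath
        ofp = dp.ofproto
        parser = dp.ofproto_parser
        # Table 0: tag packets arriving on port 1 with metadata 0x1 (mask 0xff)
        # and send them on to table 1. The metadata lives only inside the pipeline.
        match = parser.OFPMatch(in_port=1)
        inst = [parser.OFPInstructionWriteMetadata(0x1, 0xff),
                parser.OFPInstructionGotoTable(1)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, table_id=0, priority=10,
                                      match=match, instructions=inst))
        # Table 1: match on the metadata written by table 0 and output on port 2.
        match = parser.OFPMatch(metadata=(0x1, 0xff))
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                             [parser.OFPActionOutput(2)])]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, table_id=1, priority=10,
                                      match=match, instructions=inst))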