Configure an .eds file to map the channels of a CANopen Client PLC

In order to use a PLC as a Client (formerly “Slave”), one has to configure the PDO channels, since the manufacturer's default values are often not suitable. In my case, I need the PDOs to send INT values instead of the default UNSIGNED8 (see picture).
Therefore my question: what workflow would you recommend for mapping the CANopen Client PDO channels?

I found the following workflow suitable; however, I appreciate any improvements and recommendations from your side!
Start by locating the manufacturer's .eds file. The image shows this in the B&R Automation Studio programming environment.
Open the file in an .eds editor. I found the free Vector CANEds editor very useful. Delete all RxPDOs and RxPDO mappings that you don’t need.
Assign the needed Data Type (e.g. INTEGER16) and Channel Name (“1 Byte In (1)”).
Add the necessary PDOs and PDO mappings from the database. (This might actually be a bug, but if I just edit the existing PDOs without deleting and recreating them, I always receive error messages.)
Map the data to the channels.
Don't forget to write the number of mapped channels into the first entry (in this image: 1601sub0); see the sketch after these steps.
Check the .eds file for errors (press F5) and copy the .eds file back to the original location from step 1.
Add the PLC Client device in Automation Studio and you should see the correct mappings.
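For illustration, here is a minimal sketch of what the edited RxPDO mapping entries might look like inside the .eds file. The object index 0x6411, the sub-index and the parameter names are assumptions (a generic 16-bit value); use the indices from your manufacturer's object dictionary instead.

[1601]
ParameterName=Receive PDO 2 mapping parameter
ObjectType=0x9
SubNumber=2

[1601sub0]
; number of mapped channels - the "first entry" mentioned in the steps above
ParameterName=Number of mapped objects
ObjectType=0x7
DataType=0x0005
AccessType=rw
DefaultValue=1
PDOMapping=0

[1601sub1]
; mapping entry format 0xIIIISSLL = index, sub-index, bit length
; 0x64110110 would map object 0x6411, sub-index 01, length 0x10 bits (INTEGER16)
ParameterName=Mapped object 1
ObjectType=0x7
DataType=0x0007
AccessType=rw
DefaultValue=0x64110110
PDOMapping=0

Each mapping sub-entry is itself a 32-bit value encoding index, sub-index and bit length, which is why its DataType stays UNSIGNED32 (0x0007) even though the mapped channel carries an INTEGER16.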
(PS: I couldn't make the images smaller ... any recommendations about formatting this question are welcome!)


OMNeT++ INET - Simulating dynamic access point behaviour

I have to create a particular simulation for a college project. The simulation should feature several mobile nodes that cyclically switch between 802.11 access point and station modes. While in station mode, nodes should read the SSIDs of the access points around them, and then they should change their own SSID in AP mode accordingly. There is no need for connections or data exchange between the nodes besides the SSID reading.
Now, I've been through the OMNeT++/INET tutorials/documentation (all two of them), and I feel pretty much stuck.
What I could use right now is someone confirming my understanding of the framework and giving me some directions on how exactly I should proceed.
From what I understand, INET does not implement any direct/easy way to do what I'm trying to do. Most examples have fixed connections declared in NED files and hosts with a fixed role (AP or STA) defined in the .ini file.
So my question is basically how to do that: do I need to extend a module (say, wirelessHost) and modify its runtime behaviour, or should I implement a new application (like UDPApp) to have my node read other SSIDs and change its own accordingly? And what is the best way to access a host's SSID?
You may use two radios for each mobile node, e.g. **.mobilenode[*].numRadios = 2 (see also the example in /inet/examples/wireless/multiradio/).
The first radio operates as the AP, **.mobilenode[*].wlan[0].mgmtType = "Ieee80211MgmtAPSimplified", and has to adapt its SSID.
The second radio serves as the STA, **.mobilenode[*].wlan[1].mgmtType = "Ieee80211MgmtSTA". Now you have to sub-class Ieee80211AgentSTA, which handles the SSID scanning procedure, and make it change the first radio's SSID whenever a new SSID is detected. Then you use the adapted sub-class within the simulation. Finally, active scanning has to be enabled: **.mobilenode[*].wlan[1].agent.activeScan = true.
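Collecting those parameters, a rough omnetpp.ini fragment could look like the sketch below. The network and node names are placeholders, and the parameter used to plug in your Ieee80211AgentSTA sub-class depends on your INET version, so check it against the multiradio example.

[General]
network = SsidSwitchingNetwork          # placeholder network name
**.mobilenode[*].numRadios = 2

# radio 0 acts as the AP whose SSID is adapted
**.mobilenode[*].wlan[0].mgmtType = "Ieee80211MgmtAPSimplified"

# radio 1 acts as the STA that scans for surrounding SSIDs
**.mobilenode[*].wlan[1].mgmtType = "Ieee80211MgmtSTA"
**.mobilenode[*].wlan[1].agent.activeScan = true
# plus whatever parameter your INET version uses to select the Ieee80211AgentSTA sub-class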

noflo-ui: Load and save projects/graphs/components from external database or api

I'm trying to create a custom build of noflo-ui that is effectively only a graph editor. I don't need it to connect to any runtimes.
I'm struggling to find where I can inject this code, as it appears that part of noflo-ui is written in noflo itself, and I cannot find the scripts for those pieces.
For example, in graphs/main.fbp, there is this line:
'user,main,project,github,runtime,context' -> ROUTES Dispatch
Three questions on this:
Where is the source behind the Dispatch component?
If I add my own interface elements to load data from an external API, where would be the best place to inject that data?
I see a lot of event-driven code, so I'm guessing I would add a new Polymer element, do my AJAX call, then emit or fire something. I believe this is what happens when connecting to a noflo-nodejs runtime; I've traced the connection to line 51312 in a built noflo-ui.js
return port.send({
componentDefinition: definition
});
... but I can't figure out where it goes past here. A port on the main.fbp graph? As per my 1st question, I cannot find the source behind these core graphs.
And this leads to my last question:
I cannot find the code I pasted above from noflo-ui anywhere pre-build. I even searched the entire project tree for "componentDefinition: definition". Where is this coming from?
Any pointers on this would be greatly appreciated! Thanks
The FBP runtime protocol is the primary extension point of noflo-ui. You can implement a "runtime" which just provides components and graphs (for instance from a database), without a way to run these.
A network:persist message to let the UI indicate that "this is a good point to save the graphs" has been specced but is currently not implemented. For now you can just autosave the latest state.
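As a rough illustration (the field names here are from memory and should be checked against the FBP runtime protocol specification rather than taken as authoritative), the exchange for exposing components could look like this, with your "runtime" answering the UI's component listing request from a database:

UI -> runtime: {"protocol": "component", "command": "list", "payload": {}}
runtime -> UI: {"protocol": "component", "command": "component",
                "payload": {"name": "mydb/SomeComponent",
                            "description": "loaded from the database",
                            "inPorts": [{"id": "in", "type": "all"}],
                            "outPorts": [{"id": "out", "type": "all"}]}}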

Is it possible to write a plugin for Glimpse's existing SQL tab?

I'm trying to log my SQL queries, and the currently available extensions don't support our in-house SQL library. I have written a custom plugin which logs what I want, but it has limited functionality and it doesn't integrate with the existing SQL tab.
Currently, I'm logging to my custom plugin using a single helper method inside my DAL's base class. This function takes the SqlCommand and duration in order to show data on my custom tab:
// simplified example: time the query execution and log it to the custom tab
Stopwatch sw = Stopwatch.StartNew();
sqlCommand.Connection = sqlConnection;
sqlConnection.Open();
object result = sqlCommand.ExecuteScalar();
sqlConnection.Close();
sw.Stop();
long duration = sw.ElapsedMilliseconds;
// helper on the DAL base class that feeds the custom Glimpse tab
LogSqlActivity(sqlCommand, null, duration);
This works well on my 'custom' tab but unfortunately means I don't get metrics shown on Glimpse's HUD.
Is there a way I can provide Glimpse directly with the info it needs (in terms of method names, and parameters) so it displays natively on the SQL tab?
The following advice is based on the fact that you can't use DbProviderFactory and you can't use a proxied SqlCommand, etc.
The data that appears in the "out-of-the-box" SQL tab is based on messages of given types being published through our internal Message Broker (see below for information on this). Because of the above limitations in your case, to get things lighting up correctly (i.e. your data showing up in the HUD and the SQL tab), you will need to simulate the work that we do under the covers when we publish these messages. This shouldn't be that difficult and, once done, should just keep working moving forward.
If you have a look at the various proxies we have here, you will be able to see what messages we publish in what circumstances. Here are some highlights:
DbCommand
Log command start - here
Log command error - here
Log command end - here
DbConnection
Log connection open - here
Log connection closed - here
DbTransaction
Log Started - here
Log committed - here
Log rollback - here
Other
Command row count here - Glimpse calculates this at the DbDataReader level, but you could do it elsewhere as well
Now that you have an idea of what messages we are expecting and how we generate them, as long as you pass in the right data when you publish those messages, everything should just light up. If you are interested, here is the code that looks for the messages that you will be publishing.
Message Broker: If you look at the GlimpseConfiguration here, you will see how to access the Broker. This can be done statically if needed (as we do here). From there you can publish the messages you need.
Helpers: For generating some of the above messages, you can use the helpers inside the Support class here. I would have shifted all the code for publishing the actual messages to this class, but I didn't think there would be many people doing what you are doing.
Update 1
Starting point: With the above approach you shouldn't need to write your own plugin. You should just be able to access the broker via GlimpseConfiguration.GetConfiguredMessageBroker() (make sure you check whether it's null, which it is if Glimpse is turned off, etc.) and publish your messages.
I would imagine that you would put the inspection code that leverages the broker and publishes the messages wherever you have knowledge of the information that needs to be collected (i.e. inside your custom lib). Normally this would require references to Glimpse inside your lib (which you may not want), so to protect against this, from your lib you would call a proxy (which could be another VS project) that has the Glimpse dependency. Hence your ADO lib only has references to your own code.
To get your toes wet, try just publishing a couple of fake connection and command messages. Assuming the broker you get from GlimpseConfiguration.GetConfiguredMessageBroker() isn't null, these should just show up. Then you can work towards getting real data into it from your lib.
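A minimal sketch of that step might look like the following; the namespace of GlimpseConfiguration and the anonymous placeholder message are assumptions - for the HUD and SQL tab to actually react you need to publish the real Glimpse.Ado message types from the source linked above.

using Glimpse.Core.Framework;   // assumed namespace of GlimpseConfiguration

public static class GlimpsePublisher
{
    public static void PublishFakeMessages()
    {
        // The broker is null when Glimpse is turned off, so always guard against that.
        var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
        if (broker == null)
        {
            return;
        }

        // Placeholder only: the SQL tab listens for the specific ADO message types
        // (command started/executed, connection opened/closed, ...), so replace this
        // anonymous object with those types once you have them referenced.
        broker.Publish(new { CommandText = "SELECT 1", DurationMs = 12 });
    }
}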
Update 2
Obsolete Broker Access
It's marked as obsolete because it's going to change in v2. You will still be able to do what you need to do, but the way of accessing the broker has changed. For what you currently need to do, this is OK.
Sometimes null
As you have found, this really depends on where you currently are in the page lifecycle. To get around this, I would probably change my original recommendation a little.
In the code where you are currently creating messages and pushing them to the message bus, try putting them into HttpContext.Current.Items. If you haven't used it before, this is a store which ASP.NET provides out of the box and which lasts for the lifetime of a given request. You could keep a list in there: still create the message objects as you do now, but put them into that list instead of pushing them through the broker.
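For example, the queuing side could look roughly like this (the Items key name is hypothetical and just has to match the module sketched below):

using System.Collections.Generic;
using System.Web;

public static class GlimpseMessageQueue
{
    private const string ItemsKey = "MyDal.PendingGlimpseMessages";   // hypothetical key

    // Call this from your DAL/proxy instead of publishing to the broker directly.
    public static void Enqueue(object message)
    {
        var pending = HttpContext.Current.Items[ItemsKey] as List<object>;
        if (pending == null)
        {
            pending = new List<object>();
            HttpContext.Current.Items[ItemsKey] = pending;
        }
        pending.Add(message);
    }
}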
Then, create an HttpModule (it's really simple to do) which taps into the PostLogRequest event. Within this handler, you would pull the list out of the context, iterate through it and push the messages into the message broker (accessed the same way you have been). A sketch of such a module follows.
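A hedged sketch of that module, reading the same hypothetical Items key as the queuing snippet above (again, the GlimpseConfiguration namespace is assumed):

using System;
using System.Collections.Generic;
using System.Web;
using Glimpse.Core.Framework;   // assumed namespace of GlimpseConfiguration

public class DeferredGlimpsePublishModule : IHttpModule
{
    private const string ItemsKey = "MyDal.PendingGlimpseMessages";   // same hypothetical key

    public void Init(HttpApplication application)
    {
        // PostLogRequest fires late in the pipeline, by which point the broker should be available.
        application.PostLogRequest += OnPostLogRequest;
    }

    private static void OnPostLogRequest(object sender, EventArgs e)
    {
        var pending = HttpContext.Current.Items[ItemsKey] as List<object>;
        if (pending == null)
        {
            return;   // nothing was queued during this request
        }

        var broker = GlimpseConfiguration.GetConfiguredMessageBroker();
        if (broker == null)
        {
            return;   // Glimpse is turned off
        }

        foreach (var message in pending)
        {
            // Publish with the runtime type so typed subscribers see it; if the dynamic
            // dispatch gives you trouble, keep strongly-typed lists per message type instead.
            broker.Publish((dynamic)message);
        }
    }

    public void Dispose() { }
}

Register the module in web.config as you would any other HttpModule.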

Can I use a USB pen drive with libusbdotnet

I have just started on libusbdotnet. I have downloaded the sample code from http://libusbdotnet.sourceforge.net/V2/Index.html.
I am using a JetFlash 4GB Flash drive (a libusb-win32 filter driver was added for this drive).
The ShowInfo code works perfectly, and I can see my device info with two endpoints. The following is the device info, on pastebin:
http://pastebin.com/2Jdph6bY
However, the ReadOnly sample code does not work.
http://pastebin.com/hNZaEt8N
My code is almost the same as that from the libusbdotnet website. I have only changed the endpoint that UsbEndpointReader uses. I have changed it from Ep01 to Ep02, because I read that the first endpoint is a control endpoint used for configuration, access control and similar stuff.
UsbEndpointReader reader = MyUsbDevice.OpenEndpointReader(ReadEndpointID.Ep02);
I always get the message "No more bytes!".
I thought that this is because of the absence of data, so I used the ReadWrite sample code.
http://pastebin.com/NiN5w9Jt
But here I also get "No more bytes!" message.
Interestingly, the line
ec = writer.Write(Encoding.Default.GetBytes(cmdLine), 2000, out bytesWritten);
executes without errors.
Can pen drives be used for read write operations? Or is something wrong with the code?
A USB thumb drive implements the USB mass storage device class, which transports a subset of the SCSI command set over USB. The specification is here.
You're not going to get anything sensible by just reading from an endpoint - you have to send the appropriate commands to get any response (see the sketch at the end of this answer).
You have not chosen an easy device class to begin your exploration of USB with - you may be better off starting with something simpler: a HID-class device, perhaps (mouse/keyboard), though Windows does have enhanced security around mice and keyboards which may prevent you from installing a filter.
If you meddle with the filesystem on the USB stick while it's mounted as a drive by Windows, you'll almost certainly run into cache-consistency problems, unless you're extremely careful about what kind of access you allow Windows to do.
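To make the "send the appropriate commands" part concrete, here is a hedged sketch of issuing a single SCSI INQUIRY through the mass-storage bulk-only transport, using the same UsbEndpointWriter/UsbEndpointReader style as the samples. The endpoint IDs are assumptions (use your device's bulk-out and bulk-in endpoints), and real code also has to claim the interface, check every returned ErrorCode and validate the status wrapper.

// 31-byte Command Block Wrapper (CBW) carrying a 6-byte INQUIRY CDB
byte[] cbw = new byte[31];
cbw[0] = 0x55; cbw[1] = 0x53; cbw[2] = 0x42; cbw[3] = 0x43;   // dCBWSignature "USBC"
cbw[4] = 0x01;                                                // dCBWTag (arbitrary, echoed back in the status)
cbw[8] = 36;                                                  // dCBWDataTransferLength = 36 bytes (little-endian)
cbw[12] = 0x80;                                               // bmCBWFlags: data-in (device to host)
cbw[13] = 0;                                                  // bCBWLUN
cbw[14] = 6;                                                  // bCBWCBLength: INQUIRY uses a 6-byte CDB
cbw[15] = 0x12;                                               // CDB byte 0: INQUIRY opcode
cbw[19] = 36;                                                 // CDB allocation length

int transferred;
UsbEndpointWriter writer = MyUsbDevice.OpenEndpointWriter(WriteEndpointID.Ep01);  // assumed bulk-out
UsbEndpointReader reader = MyUsbDevice.OpenEndpointReader(ReadEndpointID.Ep02);   // assumed bulk-in

ErrorCode ec = writer.Write(cbw, 2000, out transferred);      // 1) command

byte[] inquiryData = new byte[36];
ec = reader.Read(inquiryData, 2000, out transferred);         // 2) data (vendor/product strings, etc.)

byte[] csw = new byte[13];
ec = reader.Read(csw, 2000, out transferred);                 // 3) Command Status Wrapper

Reading or writing actual sectors then goes through READ(10)/WRITE(10) commands in the same CBW/data/CSW pattern, which is exactly where the cache-consistency warning above applies.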

Can I have the same UPnP device with different descriptions/services at run time?

While going through the UPnP spec, I got the following doubts.
Can I define a basic UPnP device with all mandatory fields and with no service list, and then, when providing the description XML, modify my description XML to advertise my services based on different conditions?
E.g. the services may be play music OR switch light OR play football.
Can I modify the XML per device at run time to include completely different and random services?
I hope the description and service XMLs are not static.
Just like almost everything else in the UPnP Device Architecture document, this is not 100% clearly defined, but the idea of dynamic device/service descriptions is mentioned:
If a device needs to change one of these descriptions, it MUST cancel its outstanding advertisements and re-advertise. Consequently, control points SHOULD NOT assume that device and service descriptions are unchanged if a device re-appears on the network, but they can detect whether descriptions changed if a changed CONFIGID.UPNP.ORG field value is present in the announcements.
So descriptions are not static, but you do need to cancel and re-advertise.
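For illustration, the refreshed ssdp:alive announcement after a description change might look roughly like this (the address, UUID and version strings are placeholders); the important part is that CONFIGID.UPNP.ORG carries a new value, so control points can tell the description XML has changed and fetch it again from LOCATION:

NOTIFY * HTTP/1.1
HOST: 239.255.255.250:1900
CACHE-CONTROL: max-age=1800
LOCATION: http://192.168.1.10:49152/description.xml
NT: upnp:rootdevice
NTS: ssdp:alive
SERVER: ExampleOS/1.0 UPnP/1.1 ExampleDevice/1.0
USN: uuid:2fac1234-31f8-11b4-a222-08002b34c003::upnp:rootdevice
BOOTID.UPNP.ORG: 2
CONFIGID.UPNP.ORG: 43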
That said, abusing this does not sound useful (why not use separate root devices, or at least sub-devices, for totally unrelated services?) and it is bound to lead to compatibility issues.