I have a DDS system (OMG DDS) that communicates with a ROS node over radio. The information being received is a struct with velocity, state, longitude, latitude, etc. This works well, and my DDS client has no problem printing the information being transmitted from the node over the radio. Now, I've got a GUI application written in Qt, which creates models and puts them on a predefined map. These models have defined set-information functions which, when triggered, update the map to give a smooth visualisation of the information they receive.
Now here is the problem: I've no idea how to make the GUI application communicate with my DDS client. I would rather not intertwine the two, since I've had enough trouble just making the DDS client and sender work and compile with ROS. I've thought about a separate queue system which could be included in both the DDS client and the GUI application, but I don't know if this would work. I've also thought about writing to an SQL database, pushing new data, and pulling new data in my GUI application when it is detected, with some sort of on_data_available function triggering the pull. I've heard the last one is a bad idea, since I'm working with only one set of data which is continuously updated (the model represents one USV), and a database is then considered overkill, but I would love to get input here.
I'm sorry if this isn't sufficient information; I can't really provide code examples for various reasons. If anyone has any input, shout out, I would love to hear it. And if I'm not being specific enough, I'll try to rewrite this as best I can.
I've no idea how to make the GUI application communicate with my
DDS-client
Your question is not specific to DDS or your GUI application -- you essentially ask for a simple and convenient inter-process communication (IPC) mechanism. As you can see when you follow the link, there are loads of different options.
Given that you already have your data as well as the associated type definitions available in DDS, I suspect that using DDS for this task would still be the easiest way to go. You could set it up to communicate over shared memory or local loopback. DDS will do all discovery and communication under the hood, including (cross-language) de/serialization. If you choose a different mechanism, you might end up doing more work yourself.
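If you go the DDS route, the GUI side can simply be another DataReader living inside the Qt process. Below is a minimal sketch of what the listener could look like, assuming eProsima Fast DDS and an IDL-generated VehicleState type; the UsvModel class and its setters are hypothetical stand-ins for the map models you described:

```cpp
// A minimal sketch, not a drop-in implementation. Assumes eProsima Fast DDS
// and an IDL-generated VehicleState type (velocity, state, longitude,
// latitude, ...). UsvModel and its setters are hypothetical names for the
// QObject-based map model described in the question.
#include <fastdds/dds/subscriber/DataReader.hpp>
#include <fastdds/dds/subscriber/DataReaderListener.hpp>
#include <fastdds/dds/subscriber/SampleInfo.hpp>
#include <QMetaObject>
#include "VehicleState.h"   // hypothetical IDL-generated type
#include "UsvModel.h"       // hypothetical Qt model with setPosition()/setVelocity()

using namespace eprosima::fastdds::dds;

class UsvListener : public DataReaderListener
{
public:
    explicit UsvListener(UsvModel* model) : model_(model) {}

    void on_data_available(DataReader* reader) override
    {
        VehicleState sample;
        SampleInfo info;
        // Drain everything that arrived since the last callback.
        while (reader->take_next_sample(&sample, &info) == ReturnCode_t::RETCODE_OK)
        {
            if (!info.valid_data)
            {
                continue;
            }
            // DDS calls this on a middleware thread, so hop onto the Qt GUI
            // thread before touching the model.
            QMetaObject::invokeMethod(model_, [model = model_, sample]()
            {
                model->setPosition(sample.latitude(), sample.longitude());
                model->setVelocity(sample.velocity());
            }, Qt::QueuedConnection);
        }
    }

private:
    UsvModel* model_;
};
```

The queued invocation is the important detail: DDS delivers data on its own threads, so the callback must not touch Qt widgets or models directly.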
As an alternative, some DDS implementations (commercially) support native integration with SQL databases. Those will introspect the DDS data definitions and create all required tables for you. Updates from DDS are automatically forwarded to the database, and vice-versa. You could feed your GUI off of that database.
Related
I'm trying to understand how to use the FAST-RTPS libraries to implement a Command and Control application. The requirement is to allow multiple writers to direct command messages to a single reader that is tasked with controlling a piece of equipment. In this application there can be one or more identical pieces of equipment being controlled, each using a unique instance of the same reader code. I already understand that I should set the reader's RELIABILITY_QOS to RELIABLE and the OWNERSHIP_QOS to EXCLUSIVE_OWNERSHIP. The part that I am still thinking about is how to configure my application so that when a writer sends a command to the reader controlling the piece of equipment, other readers that might also receive the message will not act on it. I would like to do this at the FAST-RTPS level; that is, configure the application so that only the reader controlling the equipment receives the command message versus allowing multiple readers to receive the control message while programming these readers so that only the controlling reader will act on it. My approach so far involves assigning all controlling writers and only the controlling reader to a partition (See Advanced Functionalities in the Fast-RTPS Users Manual). There will be one of these partitions for each piece of equipment. Is this the proper way to implement my requirements or are there other, better ways?
Thank you.
Since this question was asked under the data-distribution-service tag, this answer references the OMG DDS specification, currently at version 1.4.
Although you could use Partitions to achieve the selective delivery that you are looking for, this would probably not be the recommended approach for your use case. The main disadvantage that comes to mind is a situation where a single writer has to send control messages to multiple pieces of equipment. With your current approach, you need a separate Partition for each piece of equipment, and you additionally need each message to be written into the right Partition. This can only be achieved by attaching a single Partition to each DataWriter, which would consequently require a dedicated DataWriter per piece of equipment. Depending on your set-up, you may end up with many DataWriters where you would prefer to have a few, both from the perspective of resource usage and of code complexity.
The proper mechanism that is intended for this kind of use-case is the so-called ContentFilteredTopic, as found in section 2.2.2.3.3 ContentFilteredTopic Class in the specification. For your convenience, I quoted some of it:
ContentFilteredTopic describes a more sophisticated subscription
that indicates the subscriber does not want to necessarily see all
values of each instance published under the Topic. Rather, it wants
to see only the values whose contents satisfy certain criteria. This
class therefore can be used to request content-based subscriptions.
The selection of the content is done using the filter_expression
with parameters expression_parameters.
Using ContentFilteredTopics, each DataReader would use a filter_expression that aligns with an identifier of the device that it is associated with. At the application level on the sender side, DataWriters would not be aware of that; they would just be writing their control messages. The middleware would take care of the delivery to those (and only those) DataReaders for which the filter expression matches the data.
This is a core feature of many DDS-based systems. Although the DDS specification does not require it, many implementations are smart enough to do the filtering on the DataWriter side, before the message goes onto the wire, in cases where that makes sense.
I do not know how much of this is actually implemented by Fast-RTPS.
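For illustration only, here is the call shape the DDS specification defines for the reader side, written against the Fast DDS 2.x C++ API (whether your Fast-RTPS/Fast DDS version actually ships it is something to verify); the CommandType topic and its device_id field are placeholder names:

```cpp
// Illustrative only: a reader-side content filter following the DDS API
// shape, Fast DDS 2.x style. "CommandType" and its "device_id" field are
// placeholder names for the actual command message type.
#include <fastdds/dds/domain/DomainParticipant.hpp>
#include <fastdds/dds/subscriber/DataReader.hpp>
#include <fastdds/dds/subscriber/DataReaderListener.hpp>
#include <fastdds/dds/subscriber/Subscriber.hpp>
#include <fastdds/dds/topic/ContentFilteredTopic.hpp>
#include <fastdds/dds/topic/Topic.hpp>

using namespace eprosima::fastdds::dds;

DataReader* make_command_reader(DomainParticipant* participant,
                                Subscriber* subscriber,
                                DataReaderListener* listener)
{
    // Plain topic that every controlling DataWriter publishes commands on.
    Topic* command_topic = participant->create_topic(
            "EquipmentCommands", "CommandType", TOPIC_QOS_DEFAULT);

    // Filtered view of that topic: this controller only sees commands
    // whose device_id matches its own identifier.
    ContentFilteredTopic* my_commands = participant->create_contentfilteredtopic(
            "EquipmentCommands_USV42",      // name of the filtered topic
            command_topic,                  // related (unfiltered) topic
            "device_id = %0",               // filter_expression
            {"'USV-42'"});                  // expression_parameters

    // The reader is created against the filtered topic; writers keep writing
    // to "EquipmentCommands" and never need to know about the filter.
    return subscriber->create_datareader(
            my_commands, DATAREADER_QOS_DEFAULT, listener);
}
```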
Can someone guide me, as googling does not send me to the right places?
I want to adapt NetLogo to include players, real or simulated, implemented with other technologies. I need to interface with the NetLogo core (running an established game) and have the moves and choices determined by external entities, which may be human or automated.
In searching for an API to allow this, all I can find is the controlling API, which seems to be for running multiple runs of a NetLogo model and is not what I want.
And please, before you close this again, suggest how I can make it more focused, as the problem is finding the starting point...
Can anyone assist?
Regards
Robert
There are two main ways to interface between NetLogo and the "outside world":
The extensions API.
The controlling API.
Which one to use depends on what you're trying to achieve.
You say that you:
need to interface with the NetLogo core (running an established game) and have the moves and choices determined by external entities, which may be human or automated.
That, to me, sounds very much like a job for the extensions API, which is what has been suggested to you on the netlogo-devel group. Extensions allow you to add new "primitives" to NetLogo, and those primitives can be written in any language that runs on the JVM (Java, Scala, Clojure, etc.).
So if you have a game written in NetLogo and a bot written in some other language that the JVM can interface with (possibly by using the JNI), you could write a new NetLogo extension command that calls out to the bot (e.g., something like bot:play game-state). A game written in NetLogo could then use that command to let the bot play its turn.
If that is indeed what you want to do, then Jeremy has already given you a good starting point in his netlogo-devel answer, and there is not much we can add to this until you have more specific questions.
You also said, however, that you looked at using the controlling API, so maybe what you had in mind is to have a game written in some other language, and use a NetLogo model to generate the behaviour of a particular player in that game. The controlling API allows you to do that. (It's not just for running a model multiple times!) If you look at the examples in the controlling guide, you'll see that you can use the API to send commands to a NetLogo model and to report the results back to your external program. The only constraint is that this external program needs to be able to talk to the JVM.
If you don't think this answers your question, please try to give us a concrete example of a precise thing that you want to do but do not think can be achieved in one of these two ways. We might be able to help you better then.
I'm looking at building a Cocoa application on the Mac with a back-end daemon process (really just a mostly-headless Cocoa app, probably), along with 0 or more "client" applications running locally (although if possible I'd like to support remote clients as well; the remote clients would only ever be other Macs or iPhone OS devices).
The data being communicated will be fairly trivial, mostly just text and commands (which I guess can be represented as text anyway), and maybe the occasional small file (an image possibly).
I've looked at a few methods for doing this but I'm not sure which is "best" for the task at hand. Things I've considered:
Reading and writing to a file (…yes), very basic but not very scalable.
Pure sockets (I have no experience with sockets, but I seem to think I can use them to send data locally and over a network). Though it seems cumbersome if doing everything in Cocoa.
Distributed Objects: seems rather inelegant for a task like this
NSConnection: I can't really figure out what this class even does, but I've read of it in some IPC search results
I'm sure there are things I'm missing, but I was surprised to find a lack of resources on this topic.
I am currently looking into the same questions. For me the possibility of adding Windows clients later makes the situation more complicated; in your case the answer seems to be simpler.
About the options you have considered:
1. Control files: While it is possible to communicate via control files, you have to keep in mind that the files need to be communicated via a network file system among the machines involved. So the network file system serves as an abstraction of the actual network infrastructure, but does not offer the full power and flexibility the network normally has. Implementation: Practically, you will need to have at least two files for each pair of client/servers: a file the server uses to send a request to the client(s) and a file for the responses. If each process can communicate both ways, you need to duplicate this. Furthermore, both the client(s) and the server(s) work on a "pull" basis, i.e., they need to revisit the control files frequently and see if something new has been delivered.
The advantage of this solution is that it minimizes the need for learning new techniques. The big disadvantage is that it has huge demands on the program logic; a lot of things need to be taken care of by you (Will the files be written in one piece or can it happen that any party picks up inconsistent files? How frequently should checks be implemented? Do I need to worry about the file system, like caching, etc? Can I add encryption later without toying around with things outside of my program code? ...)
If portability were an issue (which, as far as I understood from your question, is not the case) then this solution would be easy to port to different systems and even different programming languages. However, I don't know of any network file system for iPhone OS, but I am not familiar with this.
2. Sockets: The programming interface is certainly different; depending on your experience with socket programming, it may mean that you have more work learning it first and debugging it later. Implementation: Practically, you will need a similar logic as before, i.e., client(s) and server(s) communicating via the network. A definite plus of this approach is that the processes can work on a "push" basis, i.e., they can listen on a socket until a message arrives, which is superior to checking control files regularly. Network corruption and inconsistencies are also not your concern. Furthermore, you (may) have more control over the way the connections are established rather than relying on things outside of your program's control (again, this is important if you decide to add encryption later on).
The advantage is that a lot of things are taken off your shoulders that would bother an implementation in 1. The disadvantage is that you still need to change your program logic substantially in order to make sure that you send and receive the correct information (file types etc.).
In my experience portability (i.e., ease of transitioning to different systems and even programming languages) is very good since anything even remotely compatible to POSIX works.
[EDIT: In particular, as soon as you communicate binary numbers, endianness becomes an issue and you have to take care of this problem manually - this is a common (!) special case of the "correct information" issue I mentioned above. It will bite you e.g. when you have a PowerPC talking to an Intel Mac. This special case disappears with solution 3.+4., together with all of the other "correct information" issues.]
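To make the endianness point concrete, this is the kind of manual byte-order bookkeeping you sign up for with raw sockets; a plain POSIX sketch with error handling omitted:

```cpp
// Plain POSIX sketch (error handling omitted): convert to network byte order
// before sending and back to host order after receiving, so a PowerPC and an
// Intel Mac agree on the value.
#include <arpa/inet.h>   // htonl, ntohl
#include <sys/socket.h>  // send, recv
#include <cstdint>

void send_value(int sock, uint32_t value)
{
    uint32_t wire = htonl(value);   // host order -> network (big-endian) order
    send(sock, &wire, sizeof(wire), 0);
}

uint32_t receive_value(int sock)
{
    uint32_t wire = 0;
    recv(sock, &wire, sizeof(wire), 0);
    return ntohl(wire);             // network order -> host order
}
```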
3. + 4. Distributed objects: The NSProxy class cluster is used to implement distributed objects. NSConnection is responsible for setting up remote connections as a prerequisite for sending information around, so once you understand how to use this system, you also understand distributed objects. ;^)
The idea is that your high-level program logic does not need to be changed (i.e., your objects communicate via messages and receive results and the messages together with the return types are identical to what you are used to from your local implementation) without having to bother about the particulars of the network infrastructure. Well, at least in theory. Implementation: I am also working on this right now, so my understanding is still limited. As far as I understand, you do need to setup a certain structure, i.e., you still have to decide which processes (local and/or remote) can receive which messages; this is what NSConnection does. At this point, you implicitly define a client/server architecture, but you do not need to worry about the problems mentioned in 2.
There is an introduction with two explicit examples at the Gnustep project server; it illustrates how the technology works and is a good starting point for experimenting:
http://www.gnustep.org/resources/documentation/Developer/Base/ProgrammingManual/manual_7.html
Unfortunately, the disadvantages are a total loss of compatibility with other systems (although you will still do fine with the setup you mentioned of Macs and iPhone/iPad only) and a loss of portability to other languages. Gnustep with Objective-C is at best code-compatible, but there is no way to communicate between Gnustep and Cocoa; see my edit to question number 2 here: CORBA on Mac OS X (Cocoa)
[EDIT: I just came across another piece of information that I was unaware of. While I have checked that NSProxy is available on the iPhone, I did not check whether the other parts of the distributed objects mechanism are. According to this link: http://www.cocoabuilder.com/archive/cocoa/224358-big-picture-relationships-between-nsconnection-nsinputstream-nsoutputstream-etc.html (search the page for the phrase "iPhone OS") they are not. This would exclude this solution if you demand to use iPhone/iPad at this moment.]
So to conclude, there is a trade-off between the effort of learning (and implementing and debugging) new technologies on the one hand and hand-coding lower-level communication logic on the other. While the distributed object approach takes most of the load off your shoulders and incurs the smallest changes in program logic, it is the hardest to learn and also (unfortunately) the least portable.
Disclaimer: Distributed Objects are not available on iPhone.
Why do you find distributed objects inelegant? They sound like a good match here:
transparent marshalling of fundamental types and Objective-C classes
it doesn't really matter whether clients are local or remote
not much additional work for Cocoa-based applications
The documentation might make it sound like more work than it actually is, but all you basically have to do is use protocols cleanly and export, or respectively connect to, the server's root object.
The rest should happen automagically behind the scenes for you in the given scenario.
We are using ThoMoNetworking and it works fine and is fast to set up. Basically it allows you to send NSCoding-compliant objects within the local network, but of course it also works if client and server are on the same machine. As a wrapper around the Foundation classes, it takes care of pairing, reconnections, etc.
We're working on a project here that will be a self-dividing server pool: if one section grows too heavy, the manager will divide it and put it on another machine as a separate process. It will also alert all affected clients to connect to the new server.
I am curious about using ZeroMQ for inter-server and inter-process communication. My partner would prefer to roll his own. I'm looking to the community to answer this question.
I'm a fairly novice programmer myself and just learned about messaging queues. As I've googled and read, it seems everyone is using messaging queues for all sorts of things, but why? What makes them better than writing your own library? Why are they so common and why are there so many?
what makes them better than writing your own library?
When rolling out the first version of your app, probably nothing: your needs are well defined and you will develop a messaging system that will fit your needs: small feature list, small source code etc.
Those tools are very useful after the first release, when you actually have to extend your application and add more features to it.
Let me give you a few use cases:
your app will have to talk to a big-endian machine (SPARC/PowerPC) from a little-endian machine (x86, Intel/AMD); your messaging system made some endian-ordering assumptions: go and fix it
you designed your app around a non-binary (text) protocol/messaging system and now it is very slow because you spend most of your time parsing it (the number of messages increased and parsing became a bottleneck): adapt it so it can transport binary/fixed encoding
at the beginning you had 3 machines inside a LAN, no noticeable delays, and everything gets to every machine. Your client/boss/pointy-haired-devil-boss shows up and tells you that you will install the app on a WAN you do not manage - and then you start having connection failures, bad latency, etc. You need to store messages and retry sending them later on: go back to the code and plug this stuff in (and enjoy)
messages sent need to have replies, but not all of them: you send some parameters in and expect a spreadsheet as a result, instead of just a send and an acknowledgement: go back to the code and plug this stuff in (and enjoy)
some messages are critical and their reception/sending needs proper backup/persistence. Why, you ask? Auditing purposes
And many other use cases that I forgot ...
You can implement it yourself, but do not spend much time doing so: you will probably replace it later on anyway.
That's very much like asking: why use a database when you can write your own?
The answer is that using a tool that has been around for a while and is well understood in lots of different use cases, pays off more and more over time and as your requirements evolve. This is especially true if more than one developer is involved in a project. Do you want to become support staff for a queueing system if you change to a new project? Using a tool prevents that from happening. It becomes someone else's problem.
Case in point: persistence. Writing a tool to store one message on disk is easy. Writing a persistor that scales and performs well and stably, in many different use cases, and is manageable, and cheap to support, is hard. If you want to see someone complaining about how hard it is then look at this: http://www.lshift.net/blog/2009/12/07/rabbitmq-at-the-skills-matter-functional-programming-exchange
Anyway, I hope this helps. By all means write your own tool. Many many people have done so. Whatever solves your problem, is good.
I'm considering using ZeroMQ myself - hence I stumbled across this question.
Let's assume for the moment that you have the ability to implement a message queuing system that meets all of your requirements. Why would you adopt ZeroMQ (or other third party library) over the roll-your-own approach? Simple - cost.
Let's assume for a moment that ZeroMQ already meets all of your requirements. All that needs to be done is integrating it into your build, read some doco and then start using it. That's got to be far less effort than rolling your own. Plus, the maintenance burden has been shifted to another company. Since ZeroMQ is free, it's like you've just grown your development team to include (part of) the ZeroMQ team.
If you ran a Software Development business, then I think that you would balance the cost/risk of using third party libraries against rolling your own, and in this case, using ZeroMQ would win hands down.
Perhaps you (or rather, your partner) suffer, as so many developers do, from the "Not Invented Here" syndrome? If so, adjust your attitude and reassess the use of ZeroMQ. Personally, I much prefer the benefits of a "Proudly Found Elsewhere" attitude. I'm hoping I can be proud of finding ZeroMQ... time will tell.
EDIT: I came across this video from the ZeroMQ developers that talks about why you should use ZeroMQ.
what makes them better than writing your own library?
Message queuing systems are transactional, which is conceptually easy to use as a client, but hard to get right as an implementor, especially considering persistent queues. You might think you can get away with writing a quick messaging library, but without transactions and persistence, you'd not have the full benefits of a messaging system.
Persistence in this context means that the messaging middleware keeps unhandled messages in permanent storage (on disk) in case the server goes down; after a restart, the messages can be handled and no retransmit is necessary (the sender does not even know there was a problem). Transactional means that you can read messages from different queues and write messages to different queues in a transactional manner, meaning that either all reads and writes succeed or (if one or more fail) none succeeds. This is not really much different from the transactionality known from interfacing with databases and has the same benefits (it simplifies error handling; without transactions, you would have to assure that each individual read/write succeeds, and if one or more fail, you have to roll back those changes that did succeed).
Before writing your own library, read the 0MQ Guide here: http://zguide.zeromq.org/page:all
Chances are that you will either decide to install RabbitMQ, or else you will make your library on top of ZeroMQ since they have already done all the hard parts.
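To give a feel for how much 0MQ already handles for you (framing, connection management, fair queuing), here is roughly what a minimal request/reply exchange looks like with the cppzmq header; both ends live in one process only to keep the sketch self-contained, and the endpoint is just an example value:

```cpp
// Minimal request/reply round trip with cppzmq. Both sockets live in one
// process only to keep the example self-contained; normally the REP side is
// a separate server process. The endpoint is an example value.
#include <zmq.hpp>
#include <iostream>

int main()
{
    zmq::context_t ctx(1);

    // "Server": reply socket.
    zmq::socket_t server(ctx, zmq::socket_type::rep);
    server.bind("tcp://*:5555");

    // "Client": request socket.
    zmq::socket_t client(ctx, zmq::socket_type::req);
    client.connect("tcp://localhost:5555");

    client.send(zmq::str_buffer("ping"), zmq::send_flags::none);

    zmq::message_t request;
    server.recv(request, zmq::recv_flags::none);   // 0MQ has done the framing
    server.send(zmq::str_buffer("pong"), zmq::send_flags::none);

    zmq::message_t reply;
    client.recv(reply, zmq::recv_flags::none);
    std::cout << reply.to_string() << std::endl;   // prints "pong"
    return 0;
}
```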
If you have a little time, give it a try and roll out your own implementation! The learnings from this exercise will convince you about the wisdom of using an already tested library.
One of our next projects is supposed to be a MS Windows based game (written in C#, with a winform GUI and an integrated DirectX display-control) for a customer who wants to give away prizes to the best players. This project is meant to run for a couple of years, with championships, ladders, tournaments, player vs. player-action and so on.
One of the main concerns here is cheating, as a player would benefit dramatically if he was able to - for instance - let a custom made bot play the game for him (more in terms of strategy-decisions than in terms of playing many hours).
So my question is: what technical possibilites do we have to detect bot activity? We can of course track the number of hours played, analyze strategies to detect anomalies and so on, but as far as this question is concerned, I would be more interested in knowing details like
how to detect if another application takes periodic screenshots?
how to detect if another application scans our process memory?
what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
is it possible to detect if another application requests information about controls in our application (position of controls, etc.)?
what other ways exist in which a cheater could gather information about the current game state, feed it to a bot, and send the determined actions back to the client?
Your feedback is highly appreciated!
I wrote d2botnet, a .net Diablo 2 automation engine, a while back, and something you can add to your list of things to watch out for are malformed/invalid/forged packets. I assume this game will communicate over TCP. Packet sniffing and forging are usually the first way games (online anyway) are automated. I know Blizzard would detect malformed packets, something I tried to stay away from doing in d2botnet.
So make sure you detect invalid packets. Encrypt them. Hash them. Do something to make sure they are valid. If you think about it, if someone can know exactly what every packet sent back and forth means, they don't even need to run the client software, which then makes any process-based detection a moot point. So you can also add in some sort of packet-based challenge/response that your client must know how to respond to.
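As one concrete, purely illustrative way to do the "hash them" part: append a keyed MAC to every packet so the server can drop anything forged or truncated. This sketch uses OpenSSL's HMAC; how the per-session key is agreed on is left out, and all names are made up:

```cpp
// Illustrative only: tag every packet with an HMAC-SHA256 computed over the
// payload using a per-session key, and reject anything whose tag does not
// verify. Uses OpenSSL; key exchange and packet framing are left out.
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <cstdint>
#include <cstring>
#include <vector>

static const size_t kTagLen = 32;   // SHA-256 output size

std::vector<uint8_t> tag_packet(const std::vector<uint8_t>& payload,
                                const std::vector<uint8_t>& session_key)
{
    uint8_t mac[EVP_MAX_MD_SIZE];
    unsigned int mac_len = 0;
    HMAC(EVP_sha256(), session_key.data(), static_cast<int>(session_key.size()),
         payload.data(), payload.size(), mac, &mac_len);

    std::vector<uint8_t> packet = payload;            // wire format: payload || tag
    packet.insert(packet.end(), mac, mac + mac_len);
    return packet;
}

bool verify_packet(const std::vector<uint8_t>& packet,
                   const std::vector<uint8_t>& session_key)
{
    if (packet.size() < kTagLen)
    {
        return false;                                 // malformed: too short
    }
    std::vector<uint8_t> payload(packet.begin(), packet.end() - kTagLen);

    uint8_t expected[EVP_MAX_MD_SIZE];
    unsigned int expected_len = 0;
    HMAC(EVP_sha256(), session_key.data(), static_cast<int>(session_key.size()),
         payload.data(), payload.size(), expected, &expected_len);

    // Use a constant-time comparison in real code; memcmp keeps the sketch short.
    return expected_len == kTagLen &&
           std::memcmp(packet.data() + payload.size(), expected, kTagLen) == 0;
}
```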
Just an idea: what if the 'cheater' runs your software in a virtual machine (like VMware) and takes screenshots of that window? I doubt you can defend against that.
You obviously can't defend against the 'analog gap', e.g. the cheater's system makes external screenshots with a high quality camera - I guess it's only a theoretical issue.
Maybe you should investigate chess sites. There is a lot of money in chess, they don't like bots either - maybe they have come up with a solution already.
The best protection against automation is to not have tasks that require grinding.
That being said, the best way to detect automation is to actively engage the user and require periodic CAPTCHA-like tests (except without the image and so forth). I'd recommend utilizing a database of several thousand simple one-off questions that get posed to the user every so often.
However, based on your question, I'd say your best bet is to not implement the anti-automation features in C#. You stand very little chance of detecting well-written hacks/bots from within managed code, especially when all the hacker has to do is simply go into ring0 to avoid detection via any standard method. I'd recommend a Warden-like approach (download-able module that you can update whenever you feel like) combined with a Kernel-Mode Driver that hooks all of the windows API functions and watches them for "inappropriate" calls. Note, however, that you're going to run into a lot of false positives, so you need to not base your banning system on your automated data. Always have a human look over it before banning.
A common method of listening to keyboard and mouse input in an application is setting a windows hook using SetWindowsHookEx.
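For reference, this is roughly what such a hook looks like from the listening side; a minimal Win32 sketch of a low-level keyboard hook (a mouse hook with WH_MOUSE_LL works the same way):

```cpp
// Minimal Win32 sketch: a low-level keyboard hook that sees every keystroke
// system-wide. This is the mechanism being referred to, shown from the
// listening side.
#include <windows.h>
#include <iostream>

static HHOOK g_hook = nullptr;

LRESULT CALLBACK KeyboardProc(int code, WPARAM wParam, LPARAM lParam)
{
    if (code == HC_ACTION && wParam == WM_KEYDOWN)
    {
        const KBDLLHOOKSTRUCT* info = reinterpret_cast<KBDLLHOOKSTRUCT*>(lParam);
        std::cout << "key down, virtual-key code: " << info->vkCode << "\n";
    }
    // Always pass the event on to the next hook in the chain.
    return CallNextHookEx(g_hook, code, wParam, lParam);
}

int main()
{
    // WH_KEYBOARD_LL does not require the hook procedure to live in a DLL.
    g_hook = SetWindowsHookExW(WH_KEYBOARD_LL, KeyboardProc,
                               GetModuleHandleW(nullptr), 0);

    // A message loop is required for the hook callback to run.
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    UnhookWindowsHookEx(g_hook);
    return 0;
}
```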
Vendors usually try to protect their software during installation so that hackers won't automate it and crack/find a serial for their application.
Google the term: "Key Loggers"...
Here's an article that describes the problem and methods to prevent it.
I have no deeper understanding of how PunkBuster and similar software works, but this is the way I'd go:
Intercept calls to the API functions that handle the memory stuff, like ReadProcessMemory, WriteProcessMemory, and so on.
You'd detect if your process is involved in the call, log it, and trampoline the call back to the original function.
This should work for the screenshot taking too, but you might want to intercept the BitBlt function.
Here's a basic tutorial concerning the function interception:
Intercepting System API Calls
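As a sketch of the intercept-and-trampoline idea, here is roughly how it looks with Microsoft Detours (the linked tutorial may use a different technique, and in practice the hooking module would have to be loaded into the process whose calls you want to observe):

```cpp
// Rough sketch using Microsoft Detours: route BitBlt through our own function,
// inspect/log the call, then trampoline back to the real implementation.
#include <windows.h>
#include <detours.h>
#include <iostream>

// Pointer to the original function; Detours rewrites it into a trampoline.
static BOOL (WINAPI* TrueBitBlt)(HDC, int, int, int, int, HDC, int, int, DWORD) = BitBlt;

BOOL WINAPI HookedBitBlt(HDC hdcDest, int x, int y, int cx, int cy,
                         HDC hdcSrc, int x1, int y1, DWORD rop)
{
    // Here you would check whether the source DC belongs to the protected
    // game window and flag/report the call.
    std::cout << "BitBlt intercepted (" << cx << "x" << cy << ")\n";
    return TrueBitBlt(hdcDest, x, y, cx, cy, hdcSrc, x1, y1, rop);
}

void install_hook()
{
    DetourTransactionBegin();
    DetourUpdateThread(GetCurrentThread());
    DetourAttach(&(PVOID&)TrueBitBlt, HookedBitBlt);
    DetourTransactionCommit();
}
```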
You should look into what goes into Punkbuster, Valve Anti-Cheat, and some other anti-cheat stuff for some pointers.
Edit: What I mean is, look into how they do it; how they detect that stuff.
I don't know the technical details, but the Internet Chess Club's BlitzIn program seems to have integrated program-switching detection. That's of course for detecting people running a chess engine on the side and not directly applicable to your case, but you may be able to extrapolate the approach to something like: if process X takes more than Z% CPU time over the next Y cycles, it's probably a bot running.
That, in addition to a "you must not run anything else while playing the game to be eligible for prizes" clause in the contest rules, might work.
A draconian "we might decide at any time, for any reason, that you have been using a bot and disqualify you" rule also helps with the heuristic approach above (this is used in prized ICC chess tournaments).
All of these questions are essentially addressed by keeping the only authoritative copy of the game state on your servers, as described below:
* how to detect if another application takes periodic screenshots?
* how to detect if another application scans our process memory?
* what are good ways to determine whether user input (mouse movement, keyboard input) is human-generated and not automated?
* is it possible to detect if another application requests information about controls in our application (position of controls, etc.)?
I think a good way to make the problem harder for crackers is to keep the only authoritative copies of the game state on your servers, only sending updates to and receiving updates from the clients. That way you can embed client validation in the communication protocol itself (confirming that the client hasn't been cracked and thus that the detection rules are still in place). That, plus actively monitoring for new, weird behaviour, might get you close to where you want to be.