I'm looking for design advice for a domain model scenario I have.
Let's say I have a squad of Robots, each controllable via a wireless network connection.
I have an IRobot domain object that represents a real-world robot. The interface looks like so:
public interface IRobot
{
    void MoveHeadUp(int toAngle);
    void MoveHeadDown(int toAngle);
    int GetHeadAngle();
}
Example scenario: A virtual robot is shown in a GUI. In offline mode, the GUI shows what would happen if we tell the domain object (IRobot) to raise its head 5 degrees.
In online mode, the GUI would show the robot move AND the command would be sent to the physical robot and it would move as well.
I'm trying to add remote capabilities around this domain object, i.e. getting/setting remote state via Ethernet, serial, etc. I don't want to pollute the domain object with network connectivity concerns.
What is the best approach to implement IRobot domain behaviour and keep remote connection implementation details separate?
My take on this is messaging.
All of your commands (and that's what you're giving your robots) could be efficiently communicated over whatever transport you choose, and your robot (virtual or physical) would receive the command and act on it.
You can think of these robots as endpoints. Each endpoint has a name/address. You dispatch a command to those robots, and it's no longer your concern how the command is executed on the robot (physical or emulated).
You mentioned "In online mode". Does that mean you also have an offline mode? If so, I'd look into messaging for sure.
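Here is a minimal sketch of that idea (all the type names below are illustrative, not from your code): commands are plain messages, and a dispatcher delivers each one to whichever endpoints are registered for the robot's address.
using System;
using System.Collections.Generic;

// A command is just a message; the transport underneath can be anything.
public class MoveHeadCommand
{
    public string RobotAddress { get; }
    public int ToAngle { get; }

    public MoveHeadCommand(string robotAddress, int toAngle)
    {
        RobotAddress = robotAddress;
        ToAngle = toAngle;
    }
}

// Both the GUI's virtual robot and the network-backed physical robot are
// endpoints that know how to act on a command.
public interface IRobotEndpoint
{
    void Handle(MoveHeadCommand command);
}

public class VirtualRobotEndpoint : IRobotEndpoint
{
    public void Handle(MoveHeadCommand command) =>
        Console.WriteLine($"[GUI] animate head to {command.ToAngle} degrees");
}

public class NetworkRobotEndpoint : IRobotEndpoint
{
    public void Handle(MoveHeadCommand command)
    {
        // serialize the command and send it over Ethernet/serial here
    }
}

public class CommandDispatcher
{
    private readonly Dictionary<string, List<IRobotEndpoint>> _endpoints =
        new Dictionary<string, List<IRobotEndpoint>>();

    public void Register(string address, IRobotEndpoint endpoint)
    {
        List<IRobotEndpoint> list;
        if (!_endpoints.TryGetValue(address, out list))
            _endpoints[address] = list = new List<IRobotEndpoint>();
        list.Add(endpoint);
    }

    public void Dispatch(MoveHeadCommand command)
    {
        List<IRobotEndpoint> targets;
        if (_endpoints.TryGetValue(command.RobotAddress, out targets))
            foreach (IRobotEndpoint endpoint in targets)
                endpoint.Handle(command);
    }
}

// Usage: dispatcher.Register("robot-1", new VirtualRobotEndpoint());
//        dispatcher.Dispatch(new MoveHeadCommand("robot-1", 5));
Offline mode then just means only the virtual endpoint is registered; going online is nothing more than registering the network endpoint as well, and the domain logic never learns about the transport.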
I am working on a small Electron application, and I have one small question.
I need to share a singleton class instance between two different windows of my app.
By "share" I mean that both windows hold the same instance, with the same variable values.
I used the affinity parameter in the BrowserWindow() constructor to run the two windows in the same renderer process. I assumed that if the two windows run in the same process, they would share the instance. But in fact, the instances and their values are different.
Is this the correct behavior?
1. If so, could you tell me another way to share an instance between two windows?
2. If not, is this a bug, or do I need to set another parameter?
The affinity option exposes control over Chromium's process model (https://github.com/electron/electron/issues/11484 / https://www.chromium.org/developers/design-documents/process-models), and none of Chromium's process models allows sharing a hosted page's context between pages. Running two sites in a single process doesn't necessarily mean the two hosted sites share a context; allowing that would expose a whole set of security concerns.
There is no out-of-the-box support for a shared singleton in Electron's API surface. Though it's not true sharing, synchronizing state via IPC is nearly the only way to go.
I use the window.open() method to create the new BrowserWindow; when you use this method, you can pass your JavaScript singleton class via window.
Renderer process:
// window.open returns a proxy to the child window, so the parent
// can hand its singleton over to the child directly.
const modal = window.open(filePath, 'xxx');
modal.window.singletonClass = window.singletonClass;
Main process:
// Intercept the new-window event here if you need to customize the child window.
mainWindow.webContents.on('new-window', () => {});
As I understand it, both the Adapter and Proxy patterns make two distinct/different classes/objects compatible with each other for communication, and both are structural patterns. To me they look pretty similar.
Can someone explain what exactly makes them different?
EDIT:
I went through this question, but I'd rather have a close comparison between Adapter and Proxy.
Adapter:
It allows two unrelated interfaces to work together through different objects, possibly playing the same role.
It modifies the original interface.
You can find more details about this pattern, with a working code example, in this SE post:
Difference between Bridge pattern and Adapter pattern
Proxy:
Proxy provides a surrogate or placeholder for another object to control access to it.
There are common situations in which the Proxy pattern is applicable.
A virtual proxy is a placeholder for "expensive to create" objects. The real object is only created when a client first requests/accesses it (see the sketch after this list).
A remote proxy provides a local representative for an object that resides in a different address space. This is what the "stub" code in RPC and CORBA provides.
A protective proxy controls access to a sensitive master object. The "surrogate" object checks that the caller has the access permissions required prior to forwarding the request.
A smart proxy provides sophisticated access to certain objects, such as tracking the number of references to an object and denying access once a certain count is reached, or loading an object from a database into memory on demand.
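To make the virtual proxy case concrete, here is a small hedged sketch (the image classes are made up for illustration):
using System;

public interface IImage
{
    void Display();
}

// The "expensive to create" real object.
public class HighResolutionImage : IImage
{
    public HighResolutionImage(string path)
    {
        Console.WriteLine($"Loading {path} from disk...");  // costly work done here
    }

    public void Display() => Console.WriteLine("Rendering image");
}

// Virtual proxy: same interface as the real object, but creation is
// deferred until the first call that actually needs it.
public class ImageProxy : IImage
{
    private readonly string _path;
    private HighResolutionImage _realImage;

    public ImageProxy(string path)
    {
        _path = path;
    }

    public void Display()
    {
        if (_realImage == null)                          // created only on first access
            _realImage = new HighResolutionImage(_path);
        _realImage.Display();
    }
}

// Usage - the client sees only IImage; nothing is loaded until Display():
//   IImage image = new ImageProxy("photo.png");
//   image.Display();
The same shape carries over to the remote and protective cases; only what the proxy does before forwarding changes.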
For working code, have a look at tutorialspoint article on Proxy.
Key differences:
Adapter provides a different interface to its subject; Proxy provides the same interface.
Adapter is meant to change the interface of an existing object.
You can find more details about these patterns in the sourcemaking articles on Proxy and Adapter.
Another useful article: Proxy by DZone.
From here:
Adapter provides a different interface to its subject. Proxy provides the same interface.
You might think of an Adapter as something that makes one thing fit another that would be incompatible if connected directly - for example, the electrical outlet adapter you need when traveling abroad.
Now a Proxy is an object of the same interface, and possibly the same base class (or a subclass). It only "pretends" to be (and behaves like) the actual object, but instead forwards the actual behavior (calculations, processing, data access, etc.) to an underlying, referenced object.
Extrapolating to the electrical analogy, it is fine for the use of an adapter to be visible to the client - that is, the client "knows" an adapter is being used - while the use of a proxy is more often hidden, or "transparent": the client thinks an actual object is being used, but it is only a proxy.
In practice the concepts wrapper, adapter and proxy are so closely related that the terms are used interchangeably.
As the name suggests, a wrapper is literally something that wraps around another object or function. e.g. a function that calls another function, or an object that manages the lifecycle of another object and forwards requests and responses.
An adapter literally adapts the contract. That commonly refers to changing the interface of an object, or changing a method signature. And in both cases that can only be accomplished by wrapping it with a different object or function.
The word proxy is used for exactly the same thing. However, some sources will use it more explicitly to refer to an adapter to access a remote resource. Basically, that means that local calls will be forwarded to a remote object. And it may appear natural to define a common interface which can then be shared/reused both locally and remotely for those objects.
Note: The latter interpretation of the proxy pattern isn't really a thing any more. This approach made sense in a time where technologies like CORBA were hot. If you have a remote service to access, it makes more sense to clearly define Request, Response and Context objects, and reach for technologies like OpenAPI or XSD.
Difference between Adapter pattern and Proxy Pattern
ADAPTER PATTERN
An Indian mobile charger (CLIENT) does not fit a USA switch board (SERVER).
You need to use an adapter so that the Indian mobile charger (CLIENT) can fit the USA switch board (SERVER).
From point 2, you can see that the CLIENT contacts the adapter directly; the adapter then contacts the SERVER.
PROXY PATTERN
In the Adapter pattern, the client contacts the adapter directly; it does not contact the server.
In the Proxy pattern, the proxy and the server implement the same interface, and the client calls through that interface.
UNDERSTANDING THROUGH CODE
interface IServer {
    void invoke();
}

interface IAdapter {
    void invoke();
}

class Client {
    public static void main(String[] args) {
        // Proxy pattern: the client uses the SAME interface the server implements
        IServer iserver = new Proxy();
        iserver.invoke();

        // Adapter pattern: the client uses a DIFFERENT interface
        IAdapter iadapter = new Adapter();
        iadapter.invoke();
    }
}

class Server implements IServer {
    public void invoke() { /* real work happens here */ }
}

class Proxy implements IServer {
    private final IServer server = new Server();
    public void invoke() { server.invoke(); }   // forward to the real server
}

class Adapter implements IAdapter {
    private final IServer server = new Server();
    public void invoke() { server.invoke(); }   // translate the IAdapter call into an IServer call
}
Reference: Difference between Adapter pattern and Proxy Pattern
From a little bit of reading around, my understanding is that the only way to detect that a client has connected to my service is by writing my own code. I am using a singleton service. I would like to display a message every time a client connects, saying that client X with IP xxx has connected. There is no built-in event that is generated? Am I correct?
No, I don't think there's any support in WCF for your requirement.
Not sure what you want to achieve with this, either. Your service class (in your case, just a single instance) really doesn't have any business putting up messages (on screen, I presume) - that's really not its job. The service class is there to handle a request and deliver a response - nothing more.
The ServiceHost class might be more of a candidate for this feature - but again, its job really is to host the service, spin up the WCF runtime, etc. - it's really not a UI component, either.
What you could possibly do is this:
have an admin UI (a WinForms, console, or WPF app) running on your server alongside your service, providing an admin service to call
define a fast connection between the two services (using e.g. the netNamedPipeBinding, which is perfect for same-machine, cross-process messaging)
when your "real" service gets a call, the first thing it does is send out a message to the admin UI, which can then pick up that message and handle it (see the sketch below)
That way, you cleanly separate your real service and its job (to provide that service) from the admin UI stuff, and you end up with a cleanly separated system.
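A rough sketch of that layout (the contract, address, and service names here are mine, purely illustrative):
using System;
using System.ServiceModel;

// Contract hosted by the Admin UI; the real service calls it on each request.
[ServiceContract]
public interface IAdminNotification
{
    [OperationContract(IsOneWay = true)]   // fire-and-forget: the real service never blocks
    void ClientConnected(string clientInfo);
}

// Stand-in for your actual service contract.
[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string DoWork(string input);
}

public class RealService : IMyService
{
    // In production you'd want error handling and channel re-creation on faults.
    private static readonly IAdminNotification Admin =
        new ChannelFactory<IAdminNotification>(
            new NetNamedPipeBinding(),
            new EndpointAddress("net.pipe://localhost/adminui")).CreateChannel();

    public string DoWork(string input)
    {
        // First thing: tell the Admin UI a call came in...
        Admin.ClientConnected($"call received at {DateTime.Now:T}");
        // ...then handle the request as usual.
        return input;
    }
}
The Admin UI hosts IAdminNotification over the same netNamedPipeBinding and displays whatever arrives.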
I have actually implemented my own connect, disconnect and ping service methods, which I manually call from my client once the channel has been created. By using them as a kind of header section in all of my ServiceContract interface definitions (and their implementations, of course), they form a makeshift "base service definition" that only requires a bit of cut-and-paste.
The string-based parameters of connect and disconnect are used to send client info to the server and to return server info (and perhaps a unique connection id) to the client. In addition, a set of timing reference points may make its way in as well.
Note how SessionMode is required and the individual OperationContract properties IsInitiating and IsTerminating are explicitly specified for each method, the end result being what I would call a "single-session" service in that it defines connect and disconnect as the sole session bookends.
Note also that the ping command will be used as the target of a timer-based "heartbeat" call that tests the service connection state and defeats ALL connection timeouts without a single config file :-)
Note also that I haven't determined my fault-handling structure yet which may very well add a method or more and/or require other kinds of changes.
[ServiceContract( SessionMode = SessionMode.Required )]
public interface IRePropDalSvr {
    [OperationContract( IsInitiating=true, IsTerminating=false )]
    string connect (string pClientInfo);

    [OperationContract( IsInitiating=false, IsTerminating=true, IsOneWay=true )]
    void disconnect (string pClientInfo);

    // ------------------------------------------------------------------------------------------
    [OperationContract( IsInitiating=false, IsTerminating=false )]
    string ping (string pInp);

    // ------------------------------------------------------------------------------------------
    // REST OF ServiceContract DEFINITION GOES HERE
}
One caveat: while I am currently using this code and its implementation in my service classes, I have not verified it yet.
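For what it's worth, the timer-based heartbeat mentioned above can be as simple as the following client-side sketch (same caveat: illustrative and unverified):
using System;
using System.ServiceModel;
using System.Timers;

public class SvrClient
{
    private readonly IRePropDalSvr _svc;
    private readonly Timer _heartbeat;

    public SvrClient(IRePropDalSvr svc)
    {
        _svc = svc;
        _svc.connect("client info goes here");     // open the single session

        // Fire ping well inside the shortest configured timeout.
        _heartbeat = new Timer(30_000);            // 30 seconds; adjust to taste
        _heartbeat.Elapsed += (s, e) => _svc.ping("heartbeat");
        _heartbeat.Start();
    }

    public void Close()
    {
        _heartbeat.Stop();
        _svc.disconnect("client info goes here");  // end the session
    }
}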
I have been trying to get up to speed on named pipes this week. The task I am trying to solve with them: I have an existing Windows service that acts as a device driver, funneling data from an external device into a database. Now I have to modify this service and add an optional user front end (on the same machine, using a form of IPC) that can monitor the data as it passes between the device and the DB, as well as send some commands back to the service.
My initial ideas for the IPC were either named pipes or memory-mapped files. So far I have been working through the named pipe idea using the WCF Tutorial Basic Interprocess Communication. My idea is to set up the Windows service with an additional thread that hosts the WCF named pipe service and use that as a conduit to the internals of my driver.
I have the sample code working; however, I cannot get my head around two issues that I am hoping someone here can help me with:
In the tutorial, the ServiceHost is instantiated with typeof(StringReverser) rather than with a concrete instance. Thus there seems to be no mechanism for the server to interact with the service itself (between the host.Open() and host.Close() lines). Is it possible to create a link between the server and the class that actually implements the service, and pass information between them? If so, how?
If I run a single instance of the server and then run multiple instances of the client, it seems that each client gets a separate instance of the service class. I tried adding some state information to the class implementing the service, and it was only retained within the instance tied to that named pipe. This is possibly related to the first question, but is there any way to force the named pipes to use the same instance of the class that is implementing the service?
Finally, any thoughts on MMF vs Named Pipes?
Edit - About the solution
As per Tomasr's answer, the solution lies in using the correct constructor to supply a concrete singleton class that implements the service (the ServiceHost(Object, Uri[]) constructor). What I did not appreciate at the time was his reference to ensuring the service class was thread safe. Naively just changing the constructor caused a crash in the server, and that ultimately led me down the path of understanding InstanceContextMode, via the blog entry InstanceContextMode And ConcurrencyMode. Setting the correct context nicely finished off the solution.
For (1) and (2) the answer is simple: You can ask WCF to use a singleton instance of your service to handle all requests. Mostly all you need to do is use the alternate ServiceHost constructor that takes an Object instance instead of a type.
Notice, however, that you'll be responsible for making your service class thread safe.
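In code, that looks roughly like this (a sketch built around the tutorial's StringReverser; the contract shape, address, and locking strategy are just one option):
using System;
using System.ServiceModel;

[ServiceContract]
public interface IStringReverser
{
    [OperationContract]
    string ReverseString(string value);
}

// InstanceContextMode.Single routes ALL clients to the one instance you pass in,
// so any shared state must be guarded against concurrent access.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class StringReverser : IStringReverser
{
    private int _callCount;                        // shared across every client
    private readonly object _gate = new object();

    public string ReverseString(string value)
    {
        lock (_gate) { _callCount++; }             // thread-safe update of shared state
        char[] chars = value.ToCharArray();
        Array.Reverse(chars);
        return new string(chars);
    }
}

public static class Program
{
    public static void Main()
    {
        // Pass the INSTANCE (not typeof(StringReverser)) and keep the reference,
        // so your server code can talk to the running service object directly.
        var service = new StringReverser();
        var host = new ServiceHost(service, new Uri("net.pipe://localhost"));
        host.AddServiceEndpoint(typeof(IStringReverser),
                                new NetNamedPipeBinding(), "reverse");
        host.Open();
        Console.ReadLine();
        host.Close();
    }
}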
As for 3, it really depends a lot on what you need to do, your performance needs, how many clients you expect at the same time, the amount of data you'll be moving and for how long it needs to be available, etc.
I'm creating a WCF service that may be used either locally or remotely. It processes files, sometimes using third-party components/applications that unfortunately require as input a path to an actual file on the filesystem, not a .NET Stream or anything like that. Is there a standard approach for this situation, in terms of what the parameters to contract operations should be, etc.? I suppose this can't be vital, since it ultimately has to perform acceptably in both the local and remote cases, but I'd prefer that in the local case it didn't have to read the whole file from the filesystem, include the contents in the message, and rematerialize it on the filesystem again; for remote use this is necessary. Is there a way to do this, e.g. by having an FSRefDoc type which serializes differently depending on whether it's used locally or remotely?
edit: To clarify: the problem is that I want to send entirely different pieces of information in the two cases. If I'm calling a local service, I can just send a path to the file on the local filesystem, but if it's a remote service, I have to send the file contents themselves. Of course, I can send the contents in both cases, but that means I lose performance in the local case. Maybe I shouldn't be worried about this.
OK,
Following your update, I would consider the following.
1) Create a method that takes a path. Expose this via a named pipe binding and use it locally.
2) Create a method that takes a file (stream/byte array, etc.). Expose this using an appropriate binding (on a different endpoint) for non-local computers (in a LAN scenario, TCP is usually your best bet).
Then all you need to do is make sure you don't duplicate the same business logic. So, in a nutshell: create two different service interfaces, two different endpoints, and two different bindings.
Well, you really touch on two separate issues:
local vs. remote service availability
"normal" vs. streamed service (for large files)
In general, if your service works behind a corporate firewall on a LAN, you should use the NetTcpBinding since it's the fastest and most efficient. It's fast and efficient because it uses binary message encoding (vs. text message encoding over the internet).
If you must provide a service for the "outside" world, you should try to use a binding that's as interoperable as possible, and here your choices are basicHttpBinding (totally interoperable - "old" SOAP 1.1 protocols), which cannot be secured very much, and wsHttpBinding, which offers a lot more flexibility and options but is less widely supported.
Since you can easily create a single service with three endpoints, you can create your service once and then define these three endpoints: one for local clients using NetTcpBinding, one with the widest availability using basicHttpBinding, and optionally another one with wsHttpBinding.
That's one side of the story.
The other is: for your "normal" service calls, exchanging a few items of information (up to a few KB in size), you should use the normal default behavior of "buffered transfer" - the message is prepared completely in a buffer and sent as a whole.
However, for handling large files, you're better off using a streaming transfer mode: either "StreamedResponse" if you want clients to be able to download files from your server, "StreamedRequest" if you want clients to be able to upload files, or just plain "Streamed" if you send files both ways.
So besides the three "regular" endpoints, you should have at least another endpoint for each binding that handles streaming exchange of data, i.e. upload/download of files.
This may seem like a lot of different endpoints - but that's really not a problem; your clients can connect to whichever endpoints are appropriate for them - regular vs. streamed, and internal/local (netTcpBinding) vs. external (basicHttpBinding) - and in the end, you write the code only once! A hosting sketch follows below.
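For example, hosting one service implementation on several endpoints might look like this (a sketch; the contract, addresses, and relative endpoint names are placeholders):
using System;
using System.IO;
using System.ServiceModel;

[ServiceContract]
public interface IMyFileService
{
    [OperationContract]
    Stream GetFile(string name);
}

public class MyFileService : IMyFileService
{
    public Stream GetFile(string name) => File.OpenRead(name);
}

public static class HostProgram
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(MyFileService),
                                   new Uri("http://localhost:8000/files"),
                                   new Uri("net.tcp://localhost:9000/files"));

        // LAN clients: fast binary encoding.
        host.AddServiceEndpoint(typeof(IMyFileService), new NetTcpBinding(), "");

        // Internet clients: widest interoperability.
        host.AddServiceEndpoint(typeof(IMyFileService), new BasicHttpBinding(), "soap");

        // Large downloads: same contract, but a streamed binding.
        var streamed = new BasicHttpBinding { TransferMode = TransferMode.StreamedResponse };
        host.AddServiceEndpoint(typeof(IMyFileService), streamed, "streamed");

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}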
Ah , the beauty of WCF! :-)
Marc
UPDATE:
OK, after your comment, this is what I would do:
create an ILocalService service contract with a single method GetFile that returns a path and file name
create an implementation for the service contract
host that service on an endpoint with netTcpBinding (since it's internal, local)
[ServiceContract]
interface ILocalService
{
    [OperationContract]
    string GetFile(......(whatever parameters you need here).....);
}

class LocalService : ILocalService
{
    public string GetFile(......(whatever parameters you need here).....)
    {
        // do stuff.....
        return fileName;
    }
}
and secondly:
create a second service contract IRemoteService with a single method GetFile which doesn't return a file name as a string, but instead returns a stream
create an implementation for the service contract
host that service on an endpoint with basicHttpBinding for internet use
make sure to have transferMode="StreamedResponse" in your binding configuration, to enable streaming back the file
[ServiceContract]
interface IRemoteService
{
    [OperationContract]
    Stream GetFile(......(whatever parameters you need here).....);
}

class RemoteService : IRemoteService
{
    public Stream GetFile(......(whatever parameters you need here).....)
    {
        // do stuff.....
        FileStream stream = new FileStream(....);
        return stream;
    }
}
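On the consuming side, the remote client just reads back the stream; a hedged sketch (the endpoint address is a placeholder, and the GetFile parameters are elided exactly as in the contract above):
using System;
using System.IO;
using System.ServiceModel;

public static class RemoteClient
{
    public static void Main()
    {
        // The binding must match the service: streamed response over basicHttpBinding.
        var binding = new BasicHttpBinding { TransferMode = TransferMode.StreamedResponse };
        var factory = new ChannelFactory<IRemoteService>(
            binding, new EndpointAddress("http://example.com/fileservice"));
        IRemoteService service = factory.CreateChannel();

        // The file is streamed as it is read, never fully buffered in memory.
        using (Stream remote = service.GetFile(/* ...your parameters... */))
        using (FileStream local = File.Create("download.bin"))
        {
            remote.CopyTo(local);
        }
    }
}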