How do we initialize objects in Workflow (i.e. .xamlx) - wcf

My WCF service is supposed to push a call to MSMQ (a queue), which will eventually call another WCF service to perform database operations.
I have created a new project for the MSMQ operations, and it has a .xamlx file for the workflow. How (or in which event) do I initialize objects so that they are accessible in the destination WCF service?

hugh makes a great point. Based on what you've told us, it doesn't seem that workflow is absolutely necessary here.
If it is needed for some other reason (e.g. flowing a distributed transaction), then in your workflow project you should be able to do Add Service Reference to your destination WCF service. This generates activities that match the signatures of your destination service's operations. The objects those activities expect can be initialized via expressions, e.g. directly on the activity in the expression text box, or using a variable that is set via an Assign activity, as sketched below.
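For illustration, here is a minimal code-based sketch of the same idea (the variable name and values are hypothetical); in a .xamlx you would do the equivalent in the designer with an Assign activity and an expression:

using System;
using System.Activities;
using System.Activities.Statements;

class AssignSketch
{
    static void Main()
    {
        // A workflow variable, equivalent to one declared in the .xamlx designer.
        var orderId = new Variable<string>("orderId");

        Activity workflow = new Sequence
        {
            Variables = { orderId },
            Activities =
            {
                // Assign initializes the variable, just as an Assign activity
                // would before the generated service-reference activity runs.
                new Assign<string>
                {
                    To = new OutArgument<string>(orderId),
                    Value = "order-42"
                },
                // WriteLine stands in for the generated activity that would
                // pass the value on to the destination WCF service.
                new WriteLine { Text = new InArgument<string>(orderId) }
            }
        };

        WorkflowInvoker.Invoke(workflow);
    }
}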
Hope that helps,
-- Dave, WF Team

Related

single WCF endpoint for all commands in Nservicebus

We are trying to build an NServiceBus service that can communicate with WinForms and WPF-based clients using WCF. I have read that you can inherit from WcfService, like:
public class ThirdPartyWebSvc : WcfService<ThirdPartyCmd, ThirdPartyCmdResponse>
You then simply create an endpoint in the app.config and you're done, as described here. But the problem is that I have to create an endpoint for every command.
I would like a single endpoint that accepts any command and returns its response.
public class ThirdPartyWebSvc : WcfService<ICommand, IMessage>
Can someone point me in the right direction? Using NServiceBus directly for client communication isn't an option for us, and I don't want to build a proxy-like server unless that's the only way to do it.
Thanks
So from what I can gather, you want to expose a WCF service operation to which consumers can polymorphically pass one of a number of possible commands, and have the service route each command to the correct NServiceBus endpoint, which then handles it.
Firstly, to achieve this you should forget about using the NServiceBus.WcfService base class, because to use it you must closely follow the guidance in the article you linked in your post.
Instead, you could:
design your service operation contract to accept polymorphic requests by using the ServiceKnownType attribute on your operation definition, adding all possible command types (see the sketch after this list),
host the service using a regular System.ServiceModel.ServiceHost(), and then configure an NServiceBus.IBus in the startup of your hosted WCF service, and
define your UnicastBusConfig section in your service config file, adding all the command types along with the recipient queue addresses.
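A minimal sketch of such a polymorphic contract (the command types and operation name here are hypothetical, not part of NServiceBus):

using System.ServiceModel;

// Hypothetical commands; in practice these would be your ICommand implementations.
public class CreateOrderCmd { public int OrderId { get; set; } }
public class CancelOrderCmd { public int OrderId { get; set; } }

[ServiceContract]
public interface ICommandService
{
    // ServiceKnownType tells the serializer which concrete types may arrive
    // for the polymorphic parameter; every new command must be added here.
    [OperationContract]
    [ServiceKnownType(typeof(CreateOrderCmd))]
    [ServiceKnownType(typeof(CancelOrderCmd))]
    object Submit(object command);
}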
However, you now have the following drawbacks:
Because of the requirement to pass implementations of ICommand into the service, you will need to recompile your operation contract each time you add a new command type.
You will need to manage a large quantity of routing information in the config file, and if any of the recipient endpoints change, you will need to change your service config.
If your service has availability problems, no messages can reach any of your NSB endpoints.
You will need to write code to handle what to do if you do not receive a response message from the NSB endpoints in a timely manner, and this logic may depend on the type of command sent.
I hope you are beginning to see how centralizing this functionality is not a great idea.
All the above problems would go away if you could get your clients to send commands to the bus in the standard way, but without MSMQ, how can you do that?
Well, for a start you could look at using one of the other supported transports.
If none of these work for you and you have to use WCF-hosted services, then you must follow the guidance in the linked article. That guidance is there to steer you in the right direction: multiple WCF services sound like a pain, until you try to centralize them into a single service - then the pain gets bigger, not smaller.

Communication between two WCF service libraries on the same Windows Service host

The project I'm currently working on includes a server that receives C# scripts (partial code) from clients, wraps each one to create a complete class, compiles it, and then loads it into a separate AppDomain for execution.
A task (a currently running script) can send feedback to the user at any point of its execution, as defined in the script by the user. The task might also wait for a response from the user (currently assumed to happen only right after sending feedback). And the user might, at any moment, decide to kill a task.
The server is implemented as a Windows Service hosting a WCF Service Library.
As I don't want to overcomplicate the client by making it communicate directly with the dynamically created AppDomains, the (partial) solution I considered after some research was hosting a second WCF service with named-pipe binding, which the dynamic AppDomains use as a relay between themselves and the client-facing WCF service.
My issue is that now I can't think of a clean way to have the two WCF services interact.
My ideas are:
Having them maintain direct references to each other:
Since both services are normally singletons, this shouldn't be hard to do.
But it would be a pain to maintain if one of them fails and needs to be restarted. (I'm still new to WCF, so I have no idea how common that is, but it's still an issue to consider. I think.)
Introducing some sort of "message queue" (or two, one for each direction) with properties that can be set and subscribed to, so that when one service sets a property, an event is raised in the other. That feels somewhat hacky to me, even though I can't really think of any clear issues with it.
I could really use some expert input on what I'm trying to accomplish, be it opinions on my thoughts or new ideas, even if that involves rethinking the architecture. This project is still at an early enough stage to afford some rework, as long as there is enough reason to do so, of course.
Since I've put lots of effort (read: 2 minutes in Paint) into preparing a quick (read: useless) schema of the system, I'll link it here since I don't have the reputation to post images:
Link to schema
Edit (now that I have the reputation, thanks to an upvote):
Still, after rereading my question, I feel that perhaps I have been looking at this issue from too narrow a perspective, thinking of the services as something more special than ordinary classes. The more I think about it, the more I feel that the observer pattern is probably the best approach to take.
Just for the record, and to avoid leaving my (silly) question unanswered, I've realised that I was looking at this too narrowly by trying to find a solution specific to WCF services.
And finally I ended up using a variation of the observer pattern (based on the IObservable<T> interface).
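For anyone curious, here is a minimal sketch of that variation (all type and member names are hypothetical): the relay exposes IObservable<T>, and the client-facing service subscribes rather than holding a direct reference to the relay:

using System;
using System.Collections.Generic;

// Hypothetical message flowing from a task's AppDomain to the client-facing service.
public class FeedbackMessage
{
    public Guid TaskId { get; set; }
    public string Text { get; set; }
}

// The relay: dynamic AppDomains push into it, and observers (the client-facing
// service) are notified without either service referencing the other directly.
public class FeedbackRelay : IObservable<FeedbackMessage>
{
    private readonly List<IObserver<FeedbackMessage>> observers =
        new List<IObserver<FeedbackMessage>>();

    public IDisposable Subscribe(IObserver<FeedbackMessage> observer)
    {
        observers.Add(observer);
        return new Unsubscriber(observers, observer);
    }

    // Called by the dynamic AppDomains, e.g. via the named-pipe endpoint.
    public void Push(FeedbackMessage message)
    {
        foreach (var observer in observers.ToArray())
            observer.OnNext(message);
    }

    private sealed class Unsubscriber : IDisposable
    {
        private readonly List<IObserver<FeedbackMessage>> observers;
        private readonly IObserver<FeedbackMessage> observer;

        public Unsubscriber(List<IObserver<FeedbackMessage>> observers,
                            IObserver<FeedbackMessage> observer)
        {
            this.observers = observers;
            this.observer = observer;
        }

        public void Dispose()
        {
            observers.Remove(observer);
        }
    }
}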
I came across the same issue. The way I handled duplex communication between the two services is as follows:
For each process (AppDomain-separated task), create a pair of WCF services. Both services have their instancing set to PerSession (no need for a singleton, which may cause problems in the long run, such as disconnects). This means that for each process the client will be communicating with two distinct service instances, i.e. a service pair (Service1 and Service2).
We want duplex communication between these two services, meaning each can call the other and pass data (in the form of a DataContract class object).
For this:
1- Declare the two services (e.g. in a separate class library) and host them (self-hosted or otherwise).
2- Create your DataContract class and add any properties, collections, enums, etc. as you like. Both services must have a get-set property for this class.
3- In the same class library (where the Service1 and Service2 classes reside), create another class. This class acts as a registry for the service pair instances: it holds a static List in which the pairs are registered (you can identify each service with a GUID). See the sketch after these steps.
4- Set up the client proxy using svcutil.exe (or in code). When the client makes a service request, a service instance (i.e. Service1) is created by WCF. In Service1, create or launch the process (AppDomain-separated task) as Client2, and in its constructor create the Service2 proxy in code.
5- Initialize the Service2 instance (i.e. by a call to Service2) and register the service pair in the registry's static list (so it can be retrieved later for duplex communication). Now we have both service instances, registered as a pair.
6- Start communication between the two services by making a call from the Client1 proxy.
7- In the Service1 call method, retrieve the service pair from the static list. Deep-copy the DataContract object from Service1 to Service2 using the get-set property mentioned in step 2. (Note that you can use one of the many deep-clone libraries on NuGet, such as DeepCloner.)
8- Make a callback from Service2. Client2 now has DataContract property values identical to Client1's.
9- Repeat steps 6-8 with the Client2 proxy for Service2-to-Service1 communication.
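A rough sketch of the static registry from step 3 (all type names are placeholders standing in for the real service classes):

using System;
using System.Collections.Generic;

// Placeholders for the actual Service1/Service2 WCF service classes.
public class Service1 { }
public class Service2 { }

public class ServicePair
{
    public Guid Id { get; set; }        // identifies the pair (step 3)
    public Service1 First { get; set; }
    public Service2 Second { get; set; }
}

// Static registry both services use to find their counterpart (steps 5 and 7).
public static class ServicePairRegistry
{
    private static readonly List<ServicePair> pairs = new List<ServicePair>();
    private static readonly object sync = new object();

    public static void Register(Guid id, Service1 first, Service2 second)
    {
        lock (sync) pairs.Add(new ServicePair { Id = id, First = first, Second = second });
    }

    public static ServicePair Find(Guid id)
    {
        lock (sync) return pairs.Find(p => p.Id == id);
    }
}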

Is a shared assembly the only way to create objects from a WCF REST service

I am writing an application that consumes an in-house WCF-based REST service, and I'll admit to being a REST newbie. Since I can't use "Add Service Reference", I don't have ready-made proxy objects representing the return types of the service methods. So far, the only way I've been able to work with the service is by sharing the assembly containing the data types it exposes.
My problem with this arrangement is that I see only two possibilities:
Implement DTOs (DataContracts) and expose those types from my service. I would still have to share an assembly, but this approach would limit the types in the assembly to the service contract and DTOs. I don't like using DTOs just for the sake of it, though, as they add another layer of abstraction and the processing time to convert from domain object to DTO and vice versa. Plus, if I want business rules, validation, etc. on the client, I'd have to share the domain objects anyway, so is the added complexity necessary?
Support serialization of my domain objects, expose those types and share that assembly. This would allow me to share business and validation logic with the client but it also exposes parts of my domain objects to the client that are meant only for the service app.
Perhaps an example would help the discussion...
My client application will display a list of documents that is obtained from the REST service (a GET operation). The service returns an array of DocumentInfo objects (lightweight, read-only representation of a Document).
When the user selects one of the items, the client retrieves the full Document object from the REST service (GET by id) and displays a data entry form so the user can modify the object. We would want validation rules for a rich user experience.
When the user commits the changes, the Document object is submitted to the REST service (a PUT operation) where it is persisted to the back-end data store.
If the state of the Document allows, the user may "Publish" the Document. In this case, the client POSTs a request to the REST service with the Document.ID value and the service performs the operation by retrieving the server-side Document domain object and calling the Publish method. The Publish method should not be available to the client application.
As I see it, my Document and DocumentInfo objects would have to be in a shared assembly. Doing this makes Document.Publish available to the client. One idea to hide it would be to make the method internal and add an InternalsVisibleTo attribute that allows my service app, but not the client, to call the method, but this seems "smelly."
Am I on the right track or completely missing something?
The classes you use on the server should not be the same classes you use on the client (apart from during the data transfer itself). The best approach is to create a package (assembly/project) containing DTOs and share it between the server and the client. You mentioned that you don't want to create DTOs for the sake of it, but it is best practice. The performance impact of the extra layer is negligible, and layering actually makes your application easier to develop and maintain (avoiding situations like yours, where the client has access to server code).
I suggest starting with the following packages (a minimal sketch follows the list):
Service: Resides on server only, exposes the service and contains server application logic.
DTO: Resides on both server and client. Contains simple classes holding the data that needs to be passed between server and client. The classes have no code apart from properties; they are short-lived objects that survive only long enough to transfer the data.
Repository: Resides on client only. Calls the server, and turns Model objects into DTOs (and vice versa).
Model: Resides on client only. Contains classes which represent business objects and relationships. Model objects stay in memory throughout the life of the application.
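A minimal sketch of how these packages fit together, using hypothetical types based on the question's Document example:

// DTO package (shared by server and client): data only, no behavior.
public class DocumentDto
{
    public int Id { get; set; }
    public string Title { get; set; }
}

// Hypothetical service contract exposed by the server.
public interface IDocumentService
{
    DocumentDto GetDocument(int id);
}

// Model package (client only): long-lived business objects with behavior.
public class DocumentModel
{
    public int Id { get; private set; }
    public string Title { get; set; }

    public DocumentModel(int id, string title)
    {
        Id = id;
        Title = title;
    }
}

// Repository package (client only): calls the server and maps DTOs to Models.
public class DocumentRepository
{
    private readonly IDocumentService service;

    public DocumentRepository(IDocumentService service)
    {
        this.service = service;
    }

    public DocumentModel Get(int id)
    {
        DocumentDto dto = service.GetDocument(id);
        return new DocumentModel(dto.Id, dto.Title);
    }
}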
Your client application code should call into the Repository to get Model objects (you might also consider looking into MVVM if you're not sure how to go about this).
If your service code is sufficiently complex that it needs access to Model classes, you should create a separate Model package (obviously give it a different name) - the only classes which should exist on both server and client are the DTO classes.
I thought that I'd post the approach I took while giving credit to both Greg and Jake for helping guide me down the path.
While Jake is correct that deserializing the data on the client can be done with any type as long as it implements the same data contract, enforcing this without WSDL can be a bit tricky. I'm in an environment where other developers will be working with my solution, both to support and maintain existing clients and to create new ones that consume my service. They are used to doing "Add Service Reference" and going.
Greg's points about using different objects on the client and the server were the most helpful. I was trying to minimize duplication by sharing my domain layer between the client and the server, and that was the root of my confusion. As soon as I separated these into two distinct applications and looked at them in isolation, each with its own use cases, the picture became clearer.
As a result, I am now sharing a Contracts assembly which contains my service contracts (so that a client can easily create a channel to the server using WCF on the client side) and the data contracts representing the DTOs passed between client and service.
On the client, I have ViewModel objects which wrap the Model objects (data contracts) for the UI and use a service agent class to communicate with the service using the service contracts from the shared assembly. So when the user clicks the "Publish" button in the UI, the controller (or command in WPF/SL) calls the Publish method on the service agent passing in the ID of the document to publish. The service agent relays the request to the REST API (Publish operation).
On the server, the REST API is implemented using the same service contracts. In this case, the service works with my domain services, repositories and domain objects to carry out the tasks. So when the Publish service operation is invoked, the service retrieves the Document domain object from the DocumentRepository, calls the Publish method on the object (which updates its internal state), and then passes the updated object to the repository's Update method to persist the changes (roughly sketched below).
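In code, that server-side flow might look roughly like this; the repository and domain type names here are hypothetical stand-ins, not the actual implementation:

// Server-side service implementation: orchestrates domain objects and repositories.
public class DocumentPublishingService
{
    private readonly IDocumentRepository repository;

    public DocumentPublishingService(IDocumentRepository repository)
    {
        this.repository = repository;
    }

    public void Publish(int documentId)
    {
        var document = repository.GetById(documentId); // server-side domain object
        document.Publish();                            // domain behavior stays on the server
        repository.Update(document);                   // persist the new state
    }
}

public interface IDocumentRepository
{
    ServerDocument GetById(int id);
    void Update(ServerDocument document);
}

// Domain object: Publish never leaves the server.
public class ServerDocument
{
    public int Id { get; set; }
    public bool IsPublished { get; private set; }

    public void Publish()
    {
        IsPublished = true;
    }
}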
I am pleased with the outcome, as I believe it gives me a more robust and extensible architecture to work with. I can change the ViewModels as needed to support the UI with no concern about polluting the service(s) and, likewise, change the internal implementation of the service operations (domain layer) without affecting the client application(s). All that binds the two are the contracts they share. Pretty clean.
You can serialize your domain objects and then de-serialize them into different types on the client. Both types need to implement the same data contract. All serializable types have at least a default data contract that includes all public read/write properties and fields.
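For example, these two hypothetical types are distinct classes, yet they declare the same data contract, so XML serialized from one deserializes cleanly into the other:

using System.Runtime.Serialization;

// Server-side type: carries behavior that must not reach the client.
[DataContract(Name = "Document", Namespace = "http://example.com/docs")]
public class DomainDocument
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }

    public void Publish() { /* server-only behavior */ }
}

// Client-side type: same contract (Name and Namespace), different class.
[DataContract(Name = "Document", Namespace = "http://example.com/docs")]
public class ClientDocument
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Title { get; set; }
}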

WCF customizing metadata publishing

I have a universal service hosted on IIS7 that accepts a Message and returns a Message (with Action="*"). This service still publishes metadata for its clients.
The metadata is explicitly specified using the LocationUrl property of ServiceMetadataBehavior.
We have a requirement that the metadata can change during the lifetime of the service; in essence, the metadata has a lifetime.
I tried adding an IWsdlExportExtension to the service endpoint behavior, but the ExportEndpoint method only gets called once (when the service is first loaded). Is there a way to invalidate the loaded metadata so that the behavior is called on every HTTP GET request for the WSDL?
What you are asking for (changing the published service definition at runtime) is not possible - you need to remove the requirement which specifies that the metadata can change over time.
Once you've published a service, the only reason the service specification should change is because the service has been upgraded.
You should look closer at the business requirement which is making this technical requirement necessary, and try to find another way to satisfy it (perhaps post in programmers.stackexchange). Perhaps you can have multiple services available, and switch between the services over time - but this is a bit of a stab in the dark without knowing the business requirement.
No, there is no way. Moreover, if you really need this, you are looking at a fully custom solution, because it is outside the scope of web services. Changing the metadata means changing the service itself, i.e. its internal logic, which always results in restarting the hosting process and publishing new metadata.

WCF Workflow Service single instance correlation

Using Visual Studio 2010 RC / .NET 4.0
I have a wcf workflow service with three receive activities defined, basically StartProcessing, StopProcessing, and GetProcessingStatus. This is a long running service that continues to poll an external service for data once StartProcessing is called, until StopProcessing is called.
My problem is figuring out how to use correlation to ensure that all calls into the service reach the same instance of the workflow. I am trying to avoid requiring an instance id to be passed back in on subsequent calls. In a nutshell, I would like the executing workflow to be a singleton, with all receive activities operating on the same instance. How do I go about doing this?
You can correlate on a constant. For example, edit the XPath in the query correlation so that it always returns the number 1.
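A rough code-equivalent sketch of that idea (in the designer you would simply edit the XPath of the correlation query instead; the key name and helper here are hypothetical):

using System.Activities;
using System.ServiceModel.Activities;
using System.ServiceModel.Dispatcher;

static class ConstantCorrelation
{
    // Builds a correlation initializer whose XPath is the constant "1", so
    // every incoming message yields the same key and therefore routes to the
    // same workflow instance. Attach it to each Receive activity, sharing
    // the same CorrelationHandle variable.
    public static QueryCorrelationInitializer Create(Variable<CorrelationHandle> handle)
    {
        return new QueryCorrelationInitializer
        {
            CorrelationHandle = handle,
            MessageQuerySet = new MessageQuerySet
            {
                { "InstanceKey", new XPathMessageQuery("1") }
            }
        };
    }
}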
I think what you want is impossible: you need to correlate, otherwise WF does not know which instance should execute the call. If two parallel calls are received, they will use the same object, with unexpected results.
In plain WCF it might be possible - you can use a session on the client, or manage WCF object creation yourself - but in WF I don't think you even have those options.