I'm working on creating a REST service that contacts an existing SOAP service. I'm trying to figure out what a certain SOAP response is sending back, but when I log the raw XML it says the body is simply "...stream...". Does this mean a stream is being passed back, or the actual string "...stream..."?
The actual stream is being sent, not the string "...stream...". Since many streams can only be read once, WCF won't consume the stream just to log the message; otherwise it would not be able to send it to the other party (or, in the incoming case, to deliver it to the application).
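If you do need to see the body of a streamed message, one common approach is to buffer a copy of it first. Below is a minimal sketch, assuming you have the Message instance in hand (for example inside a message inspector); DumpBody is just an illustrative helper name:

    using System.ServiceModel.Channels;

    static class MessageDumper
    {
        // Buffers a (possibly streamed) message so its body can be logged and
        // the message can still be processed afterwards. Diagnostics only:
        // this pulls the whole body into memory, which defeats streaming.
        public static string DumpBody(ref Message message)
        {
            MessageBuffer buffer = message.CreateBufferedCopy(int.MaxValue);

            // ToString() on a buffered copy shows the real body
            // instead of "...stream...".
            string bodyXml = buffer.CreateMessage().ToString();

            // Hand a fresh copy back so the original can still be sent/consumed.
            message = buffer.CreateMessage();
            return bodyXml;
        }
    }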
I have a long-running operation which is called via Web API. Status code 102 says:
An interim response used to inform the client that the server has accepted the complete request, but has not yet completed it. This status code SHOULD only be sent when the server has a reasonable expectation that the request will take significant time to complete. As guidance, if a method is taking longer than 20 seconds (a reasonable, but arbitrary value) to process the server SHOULD return a 102 (Processing) response. The server MUST send a final response after the request has been completed.
So I want to return a 102 status code to the client, and then have the client wait for the result of the operation. How can I implement this in .NET?
I read this thread: How To Return Http 102 Processing in Asp.Net Web Api?
That thread has a good explanation of what is needed, but no answer. I don't understand how to implement it in .NET; I'm not looking for theory.
Using HTTP 102 requires that the server send two responses for one request. ASP.NET (Core or not) does not support sending a response to the client without completely ending the request. Any attempt to send two responses ends up throwing an exception and simply not working. (I tried a couple of different ways.)
There's a good discussion here about how it's not actually in the HTTP spec, so implementing it isn't really required.
There are a couple alternatives I can think of:
Use web sockets (a persistent connection that allows data to be sent back and forth), like with SignalR, for example.
If your request takes a long time because it's getting data from elsewhere, you can pull that data in as a stream and send it to the client as a stream. That way the data is sent as it comes in, rather than all being loaded into memory before sending. Here's an example of streaming data from a database to the response: https://stackoverflow.com/a/45682190/1202807
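For the streaming alternative, here is a rough ASP.NET Core flavoured sketch of an action that copies an upstream stream straight into the response; GetUpstreamStreamAsync is a placeholder for whatever slow data source you are reading from:

    using System;
    using System.IO;
    using System.Threading;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Mvc;

    [ApiController]
    [Route("api/[controller]")]
    public class ReportsController : ControllerBase
    {
        [HttpGet]
        public async Task Get(CancellationToken ct)
        {
            Response.ContentType = "application/json";

            // Placeholder for whatever slow data source you are reading from
            // (database, remote service, file share...).
            await using Stream source = await GetUpstreamStreamAsync(ct);

            // Forward the bytes to the client as they arrive instead of
            // buffering the whole payload in memory first.
            await source.CopyToAsync(Response.Body, ct);
        }

        private Task<Stream> GetUpstreamStreamAsync(CancellationToken ct) =>
            throw new NotImplementedException("stand-in for your data source");
    }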
I agree that the command pattern is an excellent way of building loosely coupled applications. My concern is how to respond to the client instantly and gracefully about the status of the request. For example, a client makes a request to place an order. In the typical approach, the order is created and the order id is then sent back to the browser as a JSON response. With the command pattern, particularly with NServiceBus, how is it possible to send the response?
Isn't this what Return and Reply are for?
Full duplex
Replying to a message
Handling responses
I have an NServiceBus endpoint that handles saving documents to a document management system. After the document is saved, I call Bus.Reply(new DocumentSaved{}).
This works fine when I am sending SaveDocument from a Saga (which cares deeply about the reply), but it fails when I am sending it from my web client endpoint (i.e. an MVC project, which doesn't care at all about the reply). The failure is because my web client endpoint doesn't have a queue to process the reply.
What am I doing wrong here? (I really don't want to have to create a queue for my MVC project to hold a bunch of replies that will never ever get processed.)
Replies are just normal messages. The only things that link the original message and the reply are the correlation id, which is stored in the message header, and the originator address, which is where the reply is sent.
This means that all rules that apply to normal messages also apply to replies. There are no special "reply queues". Replies go to normal queues like any other message.
I suspect that you have no message-endpoint mapping configuration in your web endpoint. I am not sure whether a send-only endpoint has any effect here, since I assume you have already received a message there that you want to reply to.
I would start by checking the message-assembly-to-endpoint mapping and enabling debug-level logging.
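For reference, here is roughly what the handler side looks like and where the reply is routed (a sketch assuming the NServiceBus 5.x style API; SaveDocument and DocumentSaved are the message types from the question):

    using NServiceBus;

    public class SaveDocument : ICommand { }
    public class DocumentSaved : IMessage { }

    public class SaveDocumentHandler : IHandleMessages<SaveDocument>
    {
        public IBus Bus { get; set; }   // injected by NServiceBus

        public void Handle(SaveDocument message)
        {
            // ... save the document to the document management system ...

            // The reply is routed to the input queue of whichever endpoint
            // sent SaveDocument. A send-only endpoint (e.g. an MVC site) has
            // no input queue, which is exactly where the failure shows up.
            Bus.Reply(new DocumentSaved());
        }
    }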
I built a RESTful API based on express.js which communicates with a remote server through a TCP socket using JSON. Requested URLs are converted into the appropriate JSON messages, a new TCP socket is opened, and the message is sent. Then, when a message is received on the same connection, an event is fired, the JSON reply is evaluated, and a new JSON message is returned as the result of the GET request.
Possible paths:
Async (currently in use) - Open a new connection to the server for each request.
Sync - Create a queue with all the requests and wait for each response; blocking code.
Track - Send all the requests at once and receive the answers asynchronously. Use a tracker id on each request to relate it to its answer.
Which would be the best direction to take? Is there a common pattern for this kind of application?
1 (async, a new connection for each request) is probably the easiest to implement.
If you want to reuse the socket for efficiency, you should come up with your own "keep-alive" mechanism - essentially streaming multiple requests and answers over the same socket.
I'd probably use a double CRLF ('\r\n\r\n') as the delimiter between JSON requests, fire a 'request' event for each one, and simply write back the answer asynchronously. Delimiter-less streaming is possible, but it requires extra parsing when you receive a partial JSON string from the socket.
My WCF service (hosted as a Windows service) has some 'SendEmail' methods, which send out emails after doing some processing.
Now I have another requirement: the client wants to preview emails before they are sent out, so my WCF service needs to return the whole email object to the calling web app.
If the client is happy with the email object, they can simply click 'Send out', which will call the WCF service again to send the emails.
Because processing the email object can sometimes take a while, I do not want the calling application to wait until it is ready.
Can anyone please advise what changes I need to make to my WCF service (which currently has all one-way operations)?
Also, should I go for async operations, message queuing, or maybe a duplex contract?
Thank you!
Based on your description I think you will have to:
Change the current operation from sending the email to storing it (probably in a database).
Add an operation for retrieving the prepared emails for the current user.
Add an operation to confirm sending one or more emails and remove them from storage.
The process will be:
The user triggers an HTTP request, which results in calling your WCF service for processing (first operation).
The WCF service initiates the processing (asynchronously, or the first operation is one-way so that the client doesn't have to wait).
The processing saves the email somewhere.
Depending on the duration of the processing, you can either use AJAX to poll the web app, which in turn polls the WCF service for prepared emails, or create a separate page which the user has to visit to see the prepared emails. Both approaches use the second operation.
The user reviews the prepared email(s) and triggers an HTTP request, which results in calling the third operation to send those emails.
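Putting the three operations together, a rough sketch of what the service contract could look like (all names here are illustrative, not from your existing service):

    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [ServiceContract]
    public interface IEmailService
    {
        // 1. Fire-and-forget: start processing and store the prepared emails.
        [OperationContract(IsOneWay = true)]
        void PrepareEmails(EmailRequest request);

        // 2. Polled by the web app until the prepared emails are ready.
        [OperationContract]
        IList<EmailPreview> GetPreparedEmails(string userId);

        // 3. Called once the user has reviewed the previews and clicked 'Send out'.
        [OperationContract(IsOneWay = true)]
        void SendEmails(IList<int> emailIds);
    }

    [DataContract]
    public class EmailRequest { /* recipient, template, data... */ }

    [DataContract]
    public class EmailPreview { /* id, subject, body... */ }

The first and third operations can stay one-way; only the polling operation needs a return value.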
You have multiple options:
Use Ladislav's approach. I would only add that the service should return a token, and the client then uses that token to poll until a timeout or a successful response (see the sketch after this list). The server also keeps these temporary emails for a while and purges them after a timeout.
Use duplex communication so that the server also gets a way to call back the client, and does so when it has finished processing. But don't do this - here is my view on why not.
Use an asynchronous approach. You can find good info here.
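For the first option, here is a rough client-side sketch of the token-then-poll idea (IEmailClient, StartPreparing and CheckStatus are hypothetical, and EmailPreview/EmailRequest stand in for whatever your service actually returns):

    using System;
    using System.Threading.Tasks;

    // Hypothetical client proxy: StartPreparing returns a token immediately,
    // CheckStatus returns null until the server has finished preparing emails.
    public interface IEmailClient
    {
        string StartPreparing(EmailRequest request);
        EmailPreview[] CheckStatus(string token);
    }

    public static class EmailPolling
    {
        public static async Task<EmailPreview[]> WaitForPreviewsAsync(
            IEmailClient client, EmailRequest request)
        {
            string token = client.StartPreparing(request);

            var deadline = DateTime.UtcNow.AddMinutes(2);   // stop polling eventually
            while (DateTime.UtcNow < deadline)
            {
                EmailPreview[] previews = client.CheckStatus(token);
                if (previews != null)
                    return previews;

                await Task.Delay(TimeSpan.FromSeconds(2));  // don't hammer the service
            }

            throw new TimeoutException("Email preparation did not finish in time.");
        }
    }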