I am writing an add-in that requires the email content in MIME format. Currently I use the Office.context.mailbox.ewsUrl property, getCallbackTokenAsync(), and Office.context.mailbox.item.itemId to get the EWS URL, an access token, and the message id, and I send them to my back-end via Ajax. My back-end sends a SOAP request to EWS and downloads the email message. Is this approach correct?
My main concern is throttling: will the EWS endpoint throttle if it sees too many requests coming from the same machine? For example, several hundred users in an organisation could use the add-in concurrently, in which case the add-in back-end could send many concurrent requests to EWS. Would EWS apply throttling if it sees too many requests coming from the same add-in or the same back-end machine?
Are there any alternative approaches?
This is the only approach at the moment. Neither Office.js nor Graph supports getting the raw email message. Given that add-ins run within the context of a single email, I can't imagine this would result in more than a couple of calls per second at most so I wouldn't be too concerned with throttling.
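To make the back-end half concrete, here is a minimal sketch of the SOAP GetItem request it would send, assuming the add-in has already shipped over the EWS URL, callback token, and item id. The helper names are illustrative; the envelope shape follows the EWS schema, with IncludeMimeContent asking Exchange to generate the MIME body:

```typescript
// Sketch: build the EWS GetItem SOAP request that asks Exchange to
// generate MIME content for one message. Helper names are illustrative.
function escapeXml(value: string): string {
  return value.replace(/&/g, "&amp;").replace(/</g, "&lt;")
              .replace(/>/g, "&gt;").replace(/"/g, "&quot;");
}

function buildGetItemMimeRequest(itemId: string): string {
  return `<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:t="http://schemas.microsoft.com/exchange/services/2006/types"
               xmlns:m="http://schemas.microsoft.com/exchange/services/2006/messages">
  <soap:Header><t:RequestServerVersion Version="Exchange2013" /></soap:Header>
  <soap:Body>
    <m:GetItem>
      <m:ItemShape>
        <t:BaseShape>IdOnly</t:BaseShape>
        <!-- Ask Exchange to generate the MIME content on the fly -->
        <t:IncludeMimeContent>true</t:IncludeMimeContent>
      </m:ItemShape>
      <m:ItemIds><t:ItemId Id="${escapeXml(itemId)}" /></m:ItemIds>
    </m:GetItem>
  </soap:Body>
</soap:Envelope>`;
}

// The back-end would POST this envelope to the EWS URL with the callback
// token in the Authorization header (Content-Type: text/xml).
```

The MimeContent element in the response is base64-encoded, so the back-end still needs one decode step before it has the raw message.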
I am writing an add-in that requires the email content in MIME format.
You will not be able to get the original MIME message: Exchange doesn't store MIME content. You can get MIME content that Exchange generates on the fly, but this conversion is expensive. The regenerated MIME message has nothing to do with the original MIME received by Exchange, so why bother? The most you can get with EWS requests is the original message headers (PR_TRANSPORT_MESSAGE_HEADERS), not the entire original MIME message.
Limits: if you issue EWS requests from the client (makeEwsRequestAsync) you are limited to 3 asynchronous calls and a 1 MB response. If you do this from your server you can bypass those limits. More on this: Limits for activation and JavaScript API for Outlook add-ins
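If the regenerated MIME isn't worth the conversion cost, the transport headers alone can be requested via that extended property and then unfolded client-side. A small sketch of that unfolding (RFC 5322 folding continues a header onto lines that start with whitespace; the function name is illustrative):

```typescript
// Sketch: parse the raw header block returned for
// PR_TRANSPORT_MESSAGE_HEADERS (requested in EWS with
//   <t:ExtendedFieldURI PropertyTag="0x007D" PropertyType="String" />
// inside AdditionalProperties). Folded continuation lines, which start
// with whitespace, are joined back onto the preceding header first.
function parseTransportHeaders(raw: string): Array<[string, string]> {
  const unfolded = raw.replace(/\r?\n[ \t]+/g, " "); // undo RFC 5322 folding
  const headers: Array<[string, string]> = [];
  for (const line of unfolded.split(/\r?\n/)) {
    const colon = line.indexOf(":");
    if (colon > 0) {
      headers.push([line.slice(0, colon).trim(), line.slice(colon + 1).trim()]);
    }
  }
  return headers;
}
```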
Related
I have a service that posts JSON messages to a given URL. The requests cannot be modified, and because of that I cannot add all the headers I need to post to a queue in Azure.
I have been researching this, but it seems the only way to post without the authorization headers is to allow a range of IPs to post to the queue; something I cannot do either, because the sender can change IPs and I could lose data, which is not good.
What I'm trying to find out is whether Microsoft has a way (or some service other than queues) I can use to prevent data loss in case my app is down (this is where the queue comes in), or whether there's some way I can allow this external provider to post to my queue without all the security, or with a minimum of it.
Thanks in advance.
I'm going to use Amazon SES for sending emails from the website I'm currently building. Following the sample Java code provided in their API documentation I implemented the functionality and was able to send emails. But when it comes to handling a huge number of emails in a very short period of time, what is the best mechanism to follow? Do they provide any queueing mechanism for emails? I couldn't find this in their API documentation, and their technical support is available only to users who have purchased a paid account.
Has anyone come across a solution to this problem?
Generally I use a custom SQS solution for a batch mailing process like this.
Sending more than a few emails from a web server isn't ideal, so I usually have the website submit the request for the emails to a back-end process in a single call, then create an SQS message for each recipient and (in my case) use a Windows service that pulls messages from SQS and sends the emails at the pace I want them to go out. If an error is encountered, the message stays in the queue and gets retried automatically.
With an architecture like this, depending on your volumes you can spin up new instances automatically if the SQS queue size gets too large for a single instance to process in a timely manner.
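A sketch of the fan-out step described above, assuming one queue message per recipient. The 10-entry batch size is SQS's limit for SendMessageBatch; the types and function names are illustrative:

```typescript
// Sketch: one mailing request from the website becomes one queue
// message per recipient. In practice each batch would be submitted to
// SQS via SendMessageBatch, which accepts at most 10 entries per call.
interface MailRequest { subject: string; body: string; recipients: string[]; }
interface QueueMessage { to: string; subject: string; body: string; attempt: number; }

// Fan a single request out into per-recipient messages.
function fanOut(req: MailRequest): QueueMessage[] {
  return req.recipients.map(to => ({ to, subject: req.subject, body: req.body, attempt: 0 }));
}

// Chunk messages into SQS-sized batches (max 10 entries per call).
function chunk<T>(items: T[], size = 10): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}
```

Because each recipient is its own message, a failed send only re-queues that one recipient rather than the whole mailing.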
I have an NServiceBus endpoint that handles saving documents to a document management system. After the document is saved, I call Bus.Reply(new DocumentSaved{}).
This works fine when I am sending SaveDocument from a Saga (which cares deeply about the reply), but it fails when I am sending it from my web client endpoint (i.e. an MVC project, which doesn't care at all about the reply). The failure is because my web client endpoint doesn't have a queue to process the reply.
What am I doing wrong here? (I really don't want to have to create a queue for my MVC project to hold a bunch of replies that will never ever get processed.)
Replies are just normal messages. The only things that link original messages and replies are the correlation id, which is stored in the message header, and the originator address, where the reply is sent.
This means that all rules that apply to normal messages are also applicable to replies. There are no special "reply queues". Replies go to normal queues as any other message.
I suspect you have no message-to-endpoint mapping configuration in your web endpoint. I am not sure whether a SendOnly endpoint has any effect here, since I assume you already received a message there to which you want to send a reply.
I would start by checking the message assembly to endpoint mapping and enabling debug level logging.
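As an illustration of that bookkeeping: a reply is routed to the stored originator address with the original message id copied into the correlation header, and it fails exactly when the originator has no input queue. The header names below are simplified, not NServiceBus's actual keys:

```typescript
// Sketch: conceptual reply routing. The original message carries the
// originator's queue address and a message id; Reply() sends to that
// address with the id copied into the correlation header. Header names
// are simplified, not NServiceBus's actual keys.
interface MessageHeaders { messageId: string; replyTo?: string; }

function routeReply(original: MessageHeaders): { destination: string; correlationId: string } {
  if (!original.replyTo) {
    // A send-only endpoint has no input queue, so there is nowhere for
    // the reply to go -- the failure described in the question.
    throw new Error("originator has no input queue to receive replies");
  }
  return { destination: original.replyTo, correlationId: original.messageId };
}
```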
My WCF service (hosted as a Windows service) has some 'SendEmail' methods which send out emails after doing some processing.
Now I have another requirement: the client wants to preview emails before they are sent out, so my WCF service needs to return the whole email object to the calling web app.
If the client is happy with the email object, they can simply click 'Send out', which will call the WCF service again to send the emails.
Because processing the email object can sometimes take a while, I do not want the calling application to wait until it is ready.
Can anyone please advise what changes I need to make to my WCF service (which currently has all one-way operations)?
Also, should I go for an async operation, message queuing, or maybe a duplex contract?
Thank you!
Based on your description I think you will have to:
1. Change the current operation from sending the email to storing it (probably in a database).
2. Add an operation for retrieving the prepared emails for the current user.
3. Add a method to confirm sending one or more emails and remove them from storage.
The process will be:
1. The user triggers an HTTP request which results in a call to your WCF service for processing (the first operation).
2. The WCF service initiates the processing (asynchronously, or the first operation is one-way, so that the client doesn't have to wait).
3. The processing saves the email somehow.
4. Depending on the duration of the processing, you can either use AJAX to poll the web app, which in turn polls the WCF service for prepared emails, or create a separate page which the user has to visit to see the prepared emails. Both methods use the second operation.
5. The user checks the prepared email(s) and triggers an HTTP request which results in a call to the third operation to send those emails.
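The store/retrieve/confirm operations described above can be sketched as a staging store. This in-memory version is only an illustration; a real service would persist to a database, and all names here are made up:

```typescript
// Sketch: staging store behind the three operations described above.
// A real service would persist to a database instead of a Map.
interface Email { to: string; subject: string; body: string; }

class EmailStaging {
  private prepared = new Map<string, Email[]>(); // keyed by user

  // Operation 1: processing saves the email instead of sending it.
  store(user: string, email: Email): void {
    const list = this.prepared.get(user) ?? [];
    list.push(email);
    this.prepared.set(user, list);
  }

  // Operation 2: the web app polls for prepared emails.
  retrieve(user: string): Email[] {
    return this.prepared.get(user) ?? [];
  }

  // Operation 3: the user confirms; send each email and clear storage.
  confirmSend(user: string, send: (e: Email) => void): number {
    const list = this.prepared.get(user) ?? [];
    list.forEach(send);
    this.prepared.delete(user);
    return list.length;
  }
}
```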
You have multiple options:
1. Use Ladislav's approach, with the addition that the service returns a token and the client then uses the token to poll until a timeout or a successful response. The server keeps these temporary emails for a while and purges them after a timeout.
2. Use duplex communication so that the server gets a way to call back the client and does so when it has finished processing. But don't do this; here is my view on why not.
3. Use an asynchronous approach. You can find good info here.
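The token-and-poll option can be sketched like this. The `check` function stands in for the real WCF call and is injected so the loop stays testable; all names and the default interval are illustrative:

```typescript
// Sketch: client polls with a token until the server reports the email
// is ready, giving up after a fixed number of attempts. `check` stands
// in for the real service call; it returns null while not ready.
async function pollForEmail<T>(
  token: string,
  check: (token: string) => Promise<T | null>,
  { intervalMs = 500, maxAttempts = 10 } = {}
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await check(token);
    if (result !== null) return result;
    await new Promise(r => setTimeout(r, intervalMs)); // wait before retrying
  }
  throw new Error(`gave up waiting for ${token}`);
}
```

The server-side half of this option is simply timestamping each stored email and purging entries older than the timeout.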
I have a vb.net 2.0 winforms project that is full of all kinds of business reports (generated with Excel interop calls) that can be run "on-demand". Some of these reports filter through lots of data and take a long time to run - especially on our older machines around the office.
I'd like to have a system where a report request can be made from the client machines, some listener sees it, locates a server with low-load, runs the report on that server, and emails the result to the user that requested it.
How can I design such a change? All our reports take different parameters, and I can't seem to figure out how to deal with this. Does each generator need to inherit from a "RemoteReport" class that does this work? Do I need to use a service on one of our servers to listen for these requests?
One approach you could take is to create a database that the clients can connect to, and have the client add a record representing a report request, including the necessary parameters, which could be passed in an XML field.
You could then have a service that periodically checks this database for new requests and, depending on how many other requests are currently processing, submits the request to the least busy server.
The server would then be able to run the report and email the file to the user.
This is by no means a quick solution and will likely take some time to design the various elements and get them working together, but it's not impossible, especially considering that it has the potential to scale rather well (by adding more available or more powerful servers).
I developed a similar system where a user can submit a request for data from a web interface, that would get picked up by a request manager service that would delegate the request to the appropriate server based on the type of request, while providing progress indication to the client.
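The "least busy server" selection is the only non-obvious piece of logic in the dispatcher. A sketch, using the count of currently assigned requests as the load measure (a real dispatcher might weigh CPU or report cost as well; names are illustrative):

```typescript
// Sketch: pick the least-busy report server. Load here is just the
// count of requests currently assigned to each server.
interface ReportServer { name: string; activeRequests: number; }

function pickLeastBusy(servers: ReportServer[]): ReportServer {
  if (servers.length === 0) throw new Error("no report servers available");
  return servers.reduce((best, s) => (s.activeRequests < best.activeRequests ? s : best));
}
```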
How about writing a web service that accepts report requests? On completion, the reports could be emailed to the users. The web service could provide a Status method that allows your WinForms app to query the current status of the report requests.