After doing a fair amount of research, I still cannot find the best way to achieve this:
I have a Node Express server, providing services to various users
I would like any user to be able to send an email to user-id@mydomain.com
Upon reception, I would like to trigger a specific task in the server, to process the email body.
Option 1
I was first hoping to find some kind of SMTP node package that I could simply embed in the server, and configure it with the various email addresses to be accepted, and with a callback function to trigger the task whenever an email arrives. Does this exist?
Option 2
Another option would be to install an SMTP server (ideally in a Docker container, and in any case on Linux) to handle the storage of each user's mailbox. My Node server would then periodically check each user's mailbox via POP3 or IMAP, and trigger the task whenever an email is found. But this seems a bit overkill to me:
I don't need to store the emails once the task is performed
This would be less responsive than having a callback like in the first option, and would require a periodic check of all users, whereas in practice, such emails will arrive very sporadically.
In this approach, what would you recommend as a dockerized SMTP server, and as a POP3/IMAP node package to retrieve and process emails?
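For reference, if you do go this route, the polling side could be a small loop like the sketch below, using the imapflow package against a dockerized mail server such as docker-mailserver (the connection details and the triggerTask dispatcher are placeholders of mine):

import { ImapFlow } from "imapflow";

// Connection details are placeholders for a local dockerized mail server.
async function checkMailbox(user: string) {
  const client = new ImapFlow({
    host: "localhost",
    port: 993,
    secure: true,
    auth: { user, pass: "secret" },
  });

  await client.connect();
  const lock = await client.getMailboxLock("INBOX");
  try {
    // Fetch every message, trigger the task, then delete the messages,
    // since the emails don't need to be kept once the task has run.
    for await (const msg of client.fetch("1:*", { envelope: true })) {
      await triggerTask(user, msg.envelope?.subject);
    }
    await client.messageDelete("1:*");
  } finally {
    lock.release();
  }
  await client.logout();
}

// Placeholder standing in for the real server-side task.
async function triggerTask(user: string, subject?: string) {
  console.log(`Task for ${user}: ${subject}`);
}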
Option 3?
Would there be any other approach?
Any recommendation welcome!!
Many thanks!
It looks like the Node smtp-server package can do the job. Combined with mailparser, I can retrieve a JavaScript object with the email structure fully parsed.
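For anyone interested, a minimal sketch of what this looks like (the @mydomain.com check and the handleTask callback are my own placeholders):

import { SMTPServer } from "smtp-server";
import { simpleParser } from "mailparser";

const server = new SMTPServer({
  authOptional: true, // accepting mail for local users, not relaying

  // Accept only addresses of the form <user-id>@mydomain.com.
  onRcptTo(address, session, callback) {
    if (address.address.endsWith("@mydomain.com")) return callback();
    return callback(new Error("Mailbox not handled here"));
  },

  // Called once per incoming message with the raw message stream.
  onData(stream, session, callback) {
    simpleParser(stream)
      .then((mail) => {
        for (const rcpt of session.envelope.rcptTo) {
          const userId = rcpt.address.split("@")[0];
          handleTask(userId, mail.subject, mail.text); // trigger the task
        }
        callback();
      })
      .catch(callback);
  },
});

// Placeholder for the server-side task triggered per user.
function handleTask(userId: string, subject?: string, body?: string) {
  console.log(`Task for ${userId}: ${subject}`);
}

server.listen(25); // port 25 usually needs elevated privileges or a port redirect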
I have an extremely basic BPMN2 diagram that is being served by a local Kogito instance. I can move through the various tasks without issue.
A single user task has a notification configured to send an email. This notification configuration was created with the VS Code Tools provided by Kogito. Below is the XML generated for the notification in the bpmn2 file.
<bpmn2:dataInputAssociation>
  <bpmn2:targetRef>_26A2A9B8-5A6F-4D0B-A388-66795F520516_NotStartedNotifyInputX</bpmn2:targetRef>
  <bpmn2:assignment>
    <bpmn2:from xsi:type="bpmn2:tFormalExpression"><![CDATA[[from:|tousers:|togroups:|toemails:person@place.com|replyTo:|subject:Hello world|body:I wish I wish this email would fire.]@[PT1M]]]></bpmn2:from>
    <bpmn2:to xsi:type="bpmn2:tFormalExpression"><![CDATA[_26A2A9B8-5A6F-4D0B-A388-66795F520516_NotStartedNotifyInputX]]></bpmn2:to>
  </bpmn2:assignment>
</bpmn2:dataInputAssociation>
I've dug through the Kogito documentation and examples and could not find a way to configure notifications. Is this something that is supported and just needs to be configured? At the very least is there an event that I can write a listener for to send the email myself?
It's an old question, but have you tried a service task to execute a notification of your choice?
That may not have been available at the time of the question, but now you can run this based on your flow.
User tasks now have notifications too.
For editor-based notifications there is now an API for different channels; see this link:
https://blog.kie.org/2021/02/kogito-notifications-api.html
I can configure Zabbix to send me mail warning notifications only if a certain amount of time has passed and the trigger problem is still active on the dashboard.
Now, Zabbix doesn't have a delay option under "Recovery operations" like "Operations" has, but is there a way to configure something so that I receive a "RESOLVED" mail only if a "PROBLEM" mail was sent for that trigger in the first place?
The way it works now is: if I set up 'recovery operations' to send me a 'resolved' mail, it will do so regardless of whether it sent me a 'problem' mail.
I want to solve this because getting all the notifications is very annoying, but I still need some of them. For example, when a problem is active for more than 20 minutes, I want to see only the problem and resolved notifications for that trigger.
Unfortunately, there's no out-of-the-box way to manage the recovery operation.
You can find more details in the documentation:
Recovery operations do not support escalating - all operations are assigned to a single step.
If this is an important issue for you, there are ways to mitigate it, but every workaround that comes to mind is time-consuming.
You can implement multiple triggers with tags and tag-bound actions (i.e., duplicate triggers with different actions and recovery actions), manage the issue with an agent in your mailbox (horrible!), or write a custom script to be used as the default recovery action.
This script should receive the problem ID as a parameter and use it to check, via the API, whether it needs to silently close the issue, send an email, or set a trigger with a specific tag to be used by another Zabbix action, and so on.
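As an illustration only, such a script might look like the sketch below (TypeScript on Node 18+, which has a built-in fetch; the URL, credentials, and the 20-minute threshold are assumptions, and the user.login parameter name varies by Zabbix version):

const ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php"; // placeholder

// Thin JSON-RPC helper for the Zabbix API.
async function rpc(method: string, params: unknown, auth?: string) {
  const res = await fetch(ZABBIX_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json-rpc" },
    body: JSON.stringify({ jsonrpc: "2.0", method, params, auth, id: 1 }),
  });
  const body: any = await res.json();
  return body.result;
}

async function main() {
  const problemEventId = process.argv[2]; // passed in by the recovery action

  // "username" on current Zabbix versions; older versions use "user".
  const auth = await rpc("user.login", { username: "api-user", password: "secret" });

  const [event] = await rpc("event.get", { eventids: [problemEventId] }, auth);

  // Send the RESOLVED mail only if the problem lived long enough for a
  // PROBLEM mail to have been sent (20 minutes in the question's example).
  const ageSeconds = Math.floor(Date.now() / 1000) - Number(event.clock);
  if (ageSeconds >= 20 * 60) {
    await sendResolvedMail(problemEventId); // placeholder mailer
  } // otherwise close silently, without mail
}

// Placeholder - wire this up to your actual mail transport.
async function sendResolvedMail(eventId: string) {
  console.log(`RESOLVED mail for event ${eventId}`);
}

main();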
I want to develop an Emailer microservice which provides a SendEmail command. Inside the microservice I have an aggregate in mind which represents the whole email process with the following events:
Aggregate Email:
(EmailCreated)
EmailDeliveryStarted
EmailDeliveryFailed
EmailRecipientDelivered when one of the recipients received the email
EmailRecipientDeliveryFailed when one of the recipients could not receive the email
etc.
In the background the email delivery service SendGrid is used; my microservice works like a facade for that with my own events. The incoming webhooks from SendGrid are translated to proper domain events.
The process would look like this (a concrete sketch follows the list):
Command SendEmail ==> EmailCreated
EmailCreatedHandler ==> Email.Send (to SendGrid)
Incoming webhook ==> EmailDeliveryStarted
Further webhooks ==> EmailRecipientDelivered, EmailRecipientDeliveryFailed, etc.
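To make the facade concrete, here is a minimal TypeScript sketch of how those events and the webhook translation might be shaped (all names and fields are illustrative, not a prescribed schema):

// Illustrative shapes for the domain events listed above.
type EmailEvent =
  | { type: "EmailCreated"; emailId: string; to: string[]; subject: string }
  | { type: "EmailDeliveryStarted"; emailId: string }
  | { type: "EmailDeliveryFailed"; emailId: string; reason: string }
  | { type: "EmailRecipientDelivered"; emailId: string; recipient: string }
  | { type: "EmailRecipientDeliveryFailed"; emailId: string; recipient: string; reason: string };

// Translate one SendGrid event-webhook item into a domain event.
// "processed", "delivered" and "bounce" are real SendGrid event names;
// the lookup of my email ID by provider message ID is assumed infrastructure.
function translate(evt: { event: string; email: string; sg_message_id: string }): EmailEvent | null {
  const emailId = lookupByMessageId(evt.sg_message_id);
  switch (evt.event) {
    case "processed":
      return { type: "EmailDeliveryStarted", emailId };
    case "delivered":
      return { type: "EmailRecipientDelivered", emailId, recipient: evt.email };
    case "bounce":
      return { type: "EmailRecipientDeliveryFailed", emailId, recipient: evt.email, reason: "bounced" };
    default:
      return null; // ignore provider events the domain doesn't care about
  }
}

declare function lookupByMessageId(id: string): string; // assumed mapping store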
Of course, if I ever wanted to replace the external web service with one that applies other messaging strategies, I would adapt to that but keep my domain model with its events. I don't want the client to worry about the concrete email delivery strategy.
Now the crucial problem I face: I want to accept SendEmail commands even if SendGrid is not available at that very moment, which entails storing the whole email data (with attachments) and then, with an event handler, starting the sending process. On the other hand, I don't want to bloat my initial EmailCreated event with this BLOB data. And I want to be able to clean up this data after SendGrid has accepted my send request.
I could also try both sending the email to SendGrid and storing an initial EmailDeliveryStarted event in the SendEmail command. But this feels like a two-phase commit: if SendGrid accepted my call but my repository somehow failed to store the EmailDeliveryStarted event, the client would be informed that something went wrong and would try again, which would be a disaster.
So I don't know how to design my aggregate and, more importantly, my EmailCreated event, since it should not contain BLOB data like attachments.
I found this question interesting and it took me a little while to reflect on it.
First things first: I see no obligation to store the email attachments in the event. You can simply store the fully qualified names of the attached files. That keeps the event log smaller and perhaps rules out the need for "deleting" the event (and you know that, in an event-sourced model, you should not do that).
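For instance (a sketch only; the storage scheme and field names are assumptions of mine):

// The event stores pointers into a blob store, never the bytes themselves.
interface EmailCreated {
  type: "EmailCreated";
  emailId: string;
  to: string[];
  subject: string;
  bodyRef: string;          // e.g. "s3://mail-staging/<emailId>/body.html"
  attachmentRefs: string[]; // fully qualified names of the attached files
}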
Secondly, assuming the project is not building an e-mail client, I don't see a need to model an e-mail as an aggregate root. Aggregate roots should represent business-relevant concepts, not a utility task like sending an e-mail. You could model this far more easily using a database table or document that keeps track of what has been sent and what hasn't yet. I see sending e-mails through SendGrid as a reaction to a business event, certainly to be tracked, but not an aggregate root in its own right.
Lastly, if you want to accept SendEmail commands even when SendGrid is offline, have the aggregate emit an EmailQueued event. The EmailQueuedHandler will then produce a row in the read model of the process in charge of taking all the emails in the queued state and batching them for sending. If the communication with SendGrid fails, you can either (see the sketch after this list):
Do nothing; the sender process will pick the email up at the next attempt
Emit an EmailSendFailed event, intercepted by a handler that will increase the retry count (if you want to stop after a number of retries).
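A rough sketch of that sender process (all names are assumptions; the declared functions stand in for real infrastructure):

interface QueuedEmailRow { emailId: string; retryCount: number; }

declare function sendViaSendGrid(row: QueuedEmailRow): Promise<void>; // wraps the SendGrid call
declare function emit(event: { type: string; emailId: string; reason?: string }): void;

const MAX_RETRIES = 5; // arbitrary cut-off, for illustration

async function processQueued(rows: QueuedEmailRow[]) {
  for (const row of rows) {
    try {
      await sendViaSendGrid(row);
      emit({ type: "EmailDeliveryStarted", emailId: row.emailId });
    } catch {
      if (row.retryCount + 1 >= MAX_RETRIES) {
        emit({ type: "EmailDeliveryFailed", emailId: row.emailId, reason: "retries exhausted" });
      } else {
        // A handler for this event bumps retryCount on the read model.
        emit({ type: "EmailSendFailed", emailId: row.emailId });
      }
    }
  }
}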
Hope that is sufficiently clear and best of luck with your project.
Wondering if there is a better option than a WCF callback.
When processing invoices and printing them, I need to continually show the user in a WinForms app: "Invoice 1 printed", "Invoice 2 printed", etc.
I have put together a callback mechanism and it all works, but I'm wondering if there is a better way of doing this.
I was thinking along the lines of whether 2 services would be better than a callback:
One that loops server-side through the invoices and saves the status = "Printed" to the database, and another that queries it, checks whether each invoice has printed, and returns the result to the user.
Would that be better than a callback (faster, avoiding timeouts, etc.)?
I'm just thinking of alternatives, as a colleague who used callbacks extensively said "don't use callbacks, use 2 services".
What would you do if you had to process 2000 invoices and notify the user for each one?
Any suggestions?
On one project we have done the following (a rough sketch follows the list):
All windows clients also host a WCF service
When the Windows client starts, it registers itself with the server, so the server knows this user is logged on at this IP address
The server stores info on who is logged in where
Then we can send a message to the user whenever we want
When the client receives the message we fire an event; then whatever part of the UI is affected can update itself or show a message.
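The shape of that pattern, sketched here in TypeScript over plain HTTP purely for illustration (the project in question used WCF services hosted in the Windows clients; all names are assumptions):

// Server side: remember which user is reachable at which callback URL.
const registrations = new Map<string, string>();

// Called when a client starts up and registers itself.
function register(user: string, callbackUrl: string) {
  registrations.set(user, callbackUrl);
}

// Push a message to a logged-on user; the client-side service that
// receives it fires an event so the affected parts of the UI can react.
async function notify(user: string, message: string) {
  const url = registrations.get(user);
  if (!url) return; // user is not logged on anywhere
  await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message }),
  });
}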
I have a VB.NET 2.0 WinForms project that is full of all kinds of business reports (generated with Excel interop calls) that can be run on demand. Some of these reports filter through lots of data and take a long time to run, especially on the older machines around the office.
I'd like to have a system where a report request can be made from the client machines, some listener sees it, locates a server with a low load, runs the report on that server, and emails the result to the user who requested it.
How can I design such a system? All our reports take different parameters, and I can't seem to figure out how to deal with this. Does each generator need to inherit from a "RemoteReport" class that does this work? Do I need to use a service on one of our servers to listen for these requests?
One approach you could take is to create a database that the clients can connect to, and have each client add a record that represents a report request, including the necessary parameters, which could be passed in an XML field.
You can then have a service that periodically checks this database for new requests and, depending on how many other requests are currently processing, submits the request to the least busy server.
The server would then be able to run the report and email the file to the user.
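In outline, the service side might look like this sketch (the table shape and the declared helpers are assumptions standing in for the real database and report servers):

interface ReportRequest { id: number; reportType: string; paramsXml: string; status: string; requestedBy: string; }

declare function fetchPending(): Promise<ReportRequest[]>;   // e.g. SELECT ... WHERE status = 'Pending'
declare function leastBusyServer(): Promise<string>;         // tracks in-flight report counts per server
declare function runOn(server: string, req: ReportRequest): Promise<Buffer>;
declare function emailTo(user: string, report: Buffer): Promise<void>;

async function pollOnce() {
  for (const req of await fetchPending()) {
    const server = await leastBusyServer();
    const file = await runOn(server, req); // the report runs on the chosen server
    await emailTo(req.requestedBy, file);  // result goes back to the requester
  }
}

setInterval(pollOnce, 30_000); // the periodic check described above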
This is by no means a quick solution, and it will likely take some time to design the various elements and get them working together, but it's not impossible, especially considering that it has the potential to scale rather well (by adding more available or more powerful servers).
I developed a similar system where a user can submit a request for data from a web interface, that would get picked up by a request manager service that would delegate the request to the appropriate server based on the type of request, while providing progress indication to the client.
How about writing a web service that accepts reporting requests? On completion, the reports could be emailed to the users. The web service could provide a Status method that allows your WinForms app to interrogate the current status of report requests.
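For illustration, such a service might be sketched like this (TypeScript with Express; the endpoint names and the in-memory job store are assumptions):

import express from "express";

const app = express();
app.use(express.json());

const jobs = new Map<string, string>(); // jobId -> status

// Accept a reporting request and return an ID the client can poll with.
app.post("/reports", (req, res) => {
  const id = Date.now().toString();
  jobs.set(id, "Queued");
  runReportAsync(id, req.body); // generate and email the report in the background
  res.json({ id });
});

// The Status method the WinForms app can interrogate.
app.get("/reports/:id/status", (req, res) => {
  res.json({ status: jobs.get(req.params.id) ?? "Unknown" });
});

function runReportAsync(id: string, params: unknown) {
  jobs.set(id, "Running");
  // ... run the report, email it to the user, then:
  jobs.set(id, "Emailed");
}

app.listen(8080);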