Internally, our organization limits which servers and applications can send email. I would like scripts that run on any server to be able to send an email when they finish. Is it possible to install IIS SMTP on a single server and have it relay all mail the other servers send to our standard relay servers? All the advice I find online covers configuring relays for outbound connectivity, but this would be for internal use only. The flow would be something like this, I believe:
[any server] --> My SMTP relay --> corporate SMTP relay --> Internal Mail system
Is this doable? If so, are there any links on how to configure it? I have nearly zero SMTP knowledge.
It is doable; search for "SMTP smart host" on Google. If your mail server limits relaying to specific hosts/IP addresses, you'll still need to add the new server to the relay list. Setup will differ a little depending on your mail server and version (Exchange, IIS SMTP).
Smart host setup for Exchange:
http://www.dnsexit.com/support/mailrelay/exchange/setup.htm
I have a setup similar to what you have described. You might also want to check whether your SMTP server allows relaying for authenticated users, since that might let your current scripts send email using a domain/email user account.
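For illustration, once the single relay is in place, a script on any server just points at it; here is a minimal sketch in Python (the relay hostname and addresses are placeholders, not anything from your environment):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "scripts@example.internal"    # placeholder sender
    msg["To"] = "admin@example.internal"        # placeholder recipient
    msg["Subject"] = "Nightly job finished"
    msg.set_content("The job completed successfully.")

    # Every script talks only to the one server allowed to relay; that server
    # forwards to the corporate relay, which hands off to the internal mail system.
    with smtplib.SMTP("my-smtp-relay.example.internal", 25) as smtp:
        # If the relay requires authenticated senders (as mentioned above), add:
        # smtp.starttls(); smtp.login("user", "password")
        smtp.send_message(msg)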
I am new to the OPC UA world and need to get started with it. At my company there is a new machine that acts as an OPC UA server. This machine is currently connected to the Internet via DHCP. In particular, I need to understand:
For remote control on the same network, do I only need the machine's IP address (possibly a static one) in order to monitor and write values on the server? Is that right?
An OPC UA server provides different endpoints, typically in the form opc.tcp://myOPCUAServer:12345/path. Those endpoints can be discovered using the local IP address or DNS name. Your OPC UA stack typically provides functionality to list all the endpoints, such as DiscoveryClient.GetEndpoints(), and to then select one for you, such as CoreClientUtils.SelectEndpoint().
Endpoints often support different connection settings, such as Security Policy (e.g. Basic256Sha256), Message Security Mode (e.g. SignAndEncrypt) and User Authentication (Anonymous, Username/Password, Certificate). Your client connection needs to match these settings in order to connect.
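If it helps to see those pieces together, here is a minimal sketch using the FreeOpcUa python-opcua client (a different stack from the .NET one named above); the endpoint URL, NodeId and credentials are placeholders:

    from opcua import Client

    # Endpoint as exposed by the machine's OPC UA server (placeholder URL)
    client = Client("opc.tcp://myOPCUAServer:12345/path")

    # Match the endpoint's security settings, for example:
    # client.set_security_string("Basic256Sha256,SignAndEncrypt,cert.der,key.pem")
    # client.set_user("operator"); client.set_password("secret")

    client.connect()
    try:
        node = client.get_node("ns=2;s=Machine.Speed")  # placeholder NodeId
        print(node.get_value())   # monitor a value
        node.set_value(42.0)      # write a value, if the server allows it
    finally:
        client.disconnect()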
Very new to James, so please bear with the question.
James 2.3.2.1, Ubuntu 14.04.
Configured as both POP3 and SMTP. SSL enabled and certificate store successfully connected.
The problem is this: once SSL is enabled, the SMTPS listener is on port 465, and there is no longer a listener on the standard port 25 to receive email from external senders (e.g., from Gmail). So mail sent to local accounts is delivered when it comes from other local accounts, but fails when it is sent from external servers.
Is it possible to configure James to listen both on the standard port 25 for external senders and on the secured port 465 for authenticated senders? If so, how is it done, and how do I make sure it doesn't become an open relay (i.e., only receives mail sent to local user accounts)? With the SSL configuration, I just set both authRequired and verifyIdentity to true, which ensures only authenticated users can send mail. With standard SMTP, I'm not sure:
a) how to configure it while also having the secured connection; and
b) how to avoid becoming an open relay.
Thanks in advance for any help.
So I didn't find a way to do this in James, but my goals were:
a) secured SMTP for authenticated (domain) user accounts;
b) regular SMTP for receiving email from external servers;
c) not becoming an open relay.
I achieved this by using the nifty OpenSMTPD server relaying to the secured James port. Took a while to get the configuration right on both servers, but the setup is working like a charm now.
Postfix looked too complicated to set up, and Sendmail does not support client-side SSL connections (to secured SMTP servers). OpenSMTPD is a lifesaver.
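For anyone trying to reproduce this, the OpenSMTPD side boils down to a few rules along these lines. This is only a rough sketch in the newer (6.4+) smtpd.conf syntax; the domain, credentials table and James address are placeholders, not my actual configuration:

    # /etc/smtpd.conf (sketch)
    table secrets file:/etc/mail/secrets          # contains: james  user:password

    # Goal (b): plain SMTP on port 25 for mail arriving from external servers
    listen on 0.0.0.0 port 25

    # Goal (c): only mail for the local domain matches a rule, so the box will
    # not relay to arbitrary destinations. Accepted mail is handed to the
    # secured (SMTPS/465) James listener, authenticating as a domain user;
    # goal (a) is still handled by James itself on port 465.
    action "to_james" relay host smtps://james@127.0.0.1:465 auth <secrets>
    match from any for domain "example.com" action "to_james"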
I'm facing a problem when trying to access GCM from a controlled environment that restricts me to a few websites. In this environment I need to specify which websites I am allowed to access. At first I allowed free access to https://android.googleapis.com/gcm/send, but it did not work. Only when I allowed the whole of http://google* (note the asterisk) did it work, but I can't leave it that way.
Does anybody know the full list of websites accessed by GCM, so I can register them in my firewall whitelist?
The GCM HTTP connection server documentation states:
Note: If your organization has a firewall that restricts the traffic to or from the Internet, you need to configure it to allow connectivity with GCM in order for your GCM client apps to receive messages. The ports to open are: 5228, 5229, and 5230. GCM typically only uses 5228, but it sometimes uses 5229 and 5230. GCM doesn't provide specific IPs, so you should allow your firewall to accept outgoing connections to all IP addresses contained in the IP blocks listed in Google's ASN of 15169.
So you need to open ports 5228, 5229, and 5230 in your firewall.
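If you want to verify the change from inside the restricted environment, a quick reachability check could look like this; mtalk.google.com is an assumption (a host commonly seen in GCM client logs), since Google publishes no fixed endpoints:

    import socket

    # mtalk.google.com is an assumption; substitute whatever host your client
    # logs show GCM connecting to, since Google does not publish fixed IPs.
    for port in (5228, 5229, 5230):
        try:
            socket.create_connection(("mtalk.google.com", port), timeout=5).close()
            print(port, "reachable")
        except OSError as exc:
            print(port, "blocked:", exc)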
I am publicly distributing an application that can be installed on users' PCs. The client will periodically communicate with the server to send information, and the server has to acknowledge successful receipt. Occasionally, the server will initiate one-way communication with the client. My question is: what is the best/most fail-proof/recommended way to do client-server communication when the client is massively distributed? I am currently focusing on a self-hosted service to do the communication. What precautions should I take if the clients' IP addresses change frequently?
My suggestions are:
Use HTTP or HTTPS on the default ports. By "massively distributed" I understand that you will have no control over network restrictions, firewalls, NAT traversal, etc. Using HTTP(S) and initiating the connections from the clients with simple web requests will save you a lot of trouble.
Use polling at regular/smart intervals to handle the occasional server-initiated data transfer (see the sketch below). Clients running on workstations won't have a public IP address, let alone a fixed one.
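As a rough sketch of that polling pattern (the URL, payload shape and handler are placeholders, not any particular framework's API):

    import time
    import requests

    SERVER = "https://api.example.com"   # placeholder endpoint

    def handle(command):
        print("server asked us to:", command)   # placeholder handler

    def run_client(client_id, interval=60):
        while True:
            try:
                # The client always initiates, so changing IPs, NAT and firewalls
                # don't matter; the server acknowledges receipt in the response.
                resp = requests.post(SERVER + "/checkin",
                                     json={"client": client_id, "data": {}},
                                     timeout=10)
                resp.raise_for_status()
                # Any pending one-way messages from the server ride back on the reply.
                for command in resp.json().get("commands", []):
                    handle(command)
            except requests.RequestException:
                pass   # transient network error: just try again next interval
            time.sleep(interval)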
We have an IIS 6.0 server on AWS EC2 that receives emails and forwards them on to another IIS box. We are inadvertently sending NDR emails via the SMTP service to the forged From: address, with the spam attached.
A few quick questions regarding IIS 6.0 SMTP
From our reading we don't see a way to stop NDRs (this is by design, to meet RFC requirements).
Since we accept all emails sent to our address and process them offline on a separate machine, can someone advise why NDRs are being generated in the first place? Is there some other loophole being used to force the SMTP server to generate delayed and non-delivery reports?
Also, can anyone recommend software that can stop this type of attack, e.g. Toriss or ORF from Vamsoft?
You have to use SPF on the receiving machine so that it does not accept mail with forged reverse-paths. There is no way to really fix the issue later in the mail-server chain. (Note that the SMTP reverse-path is not necessarily the same as the address in the From: header; for example, they always differ in mailing-list mail. If IIS sends bounce mails to the From: address instead of to the reverse-path, then it is horribly broken.) If IIS does not support SPF, then you have to use a different mail server or an SMTP proxy.
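For context, the check belongs at SMTP time, before the message is accepted, so that a forged reverse-path is rejected and no NDR is ever generated. Here is a rough sketch of such a check using the pyspf library with placeholder values; it is not something IIS 6.0 itself can run, which is why a different server or proxy in front of it is needed:

    # pip install pyspf   (assumption: using the pyspf library for the lookup)
    import spf

    # These values come from the SMTP session, *not* from the From: header.
    result, explanation = spf.check2(
        i="203.0.113.5",            # connecting client's IP address
        s="someone@example.org",    # MAIL FROM (the reverse-path)
        h="mail.example.org",       # HELO/EHLO name
    )

    if result == "fail":
        # Reject with a 5xx at SMTP time, so no bounce/NDR is ever created.
        print("550 SPF check failed:", explanation)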