Change SPF records of multiple domain names in WHM - cPanel

I recently changed the IP address Exim uses, so that email is sent from a different IP address than the main shared IP of WHM.
Now I realize that I need to change the SPF value for hundreds of domains in WHM
From
"v=spf1 +a +mx +ip4:xxx.xxx.xxx.xxx ~all"
to
"v=spf1 +a +mx +ip4:xxx.xxx.xxx.xxx +ip4:xxx.xxx.xxx.yyy ~all"
Is there a fast way to do this aside from manually editing each domain?

You can change all the domain zones through the command line using the replace command.
Please check: http://www.computerhope.com/unix/replace.htm
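As a rough sketch of the same idea with sed (the zone file paths and IP addresses below are placeholders; back up the zone files, bump their serials, and reload the nameserver afterwards):

```shell
# Demonstrate the substitution on a sample SPF string first:
old='v=spf1 +a +mx +ip4:192.0.2.1 ~all'
new=$(printf '%s\n' "$old" | sed 's/ ~all/ +ip4:192.0.2.2 ~all/')
echo "$new"

# Applied to real zone files (hypothetical paths; back up first,
# bump the zone serials, and reload BIND afterwards):
#   sed -i.bak 's/ ~all/ +ip4:192.0.2.2 ~all/' /var/named/*.db
#   rndc reload
```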

WHM v78 has a new feature that allows adding new hosts for all domains in the WHM system.
On the WHM 78 release notes page, search for "New settings for smarthost route lists" to learn how to do it.
Also, even on older WHM versions you can use the SPF installer script. For example, to make sure some domain has the additional values 1.2.3.4 and host.name.tld:
/usr/local/cpanel/bin/spf_installer cpanelusername '+ip4:1.2.3.4,+include:host.name.tld' 0 1 0
Note the + signs and the comma between the two values. The command makes sure the SPF record has these two values besides the server IP value that WHM adds by default.
To apply it to all users, I did:
for user in $(ls -A1 /var/cpanel/users/ | grep -Ev "system|\."); do /usr/local/cpanel/bin/spf_installer "$user" '+ip4:1.2.3.4,+include:host.name.tld' 0 1 0; done
To check the result, run:
dig txt hosteddomain.com
(If the domain is behind a proxy like Cloudflare, the result will not be visible immediately.)

Related

CanonicalHostname and ssh connections through a jumphost

Say I have an internal subdomain named internal.sub.domain.net with hosts like foo.internal.sub.domain.net and bar.internal.sub.domain.net. Furthermore, these hosts are only accessible from the outside world through the jumphost internal.sub.domain.net. Thus, the following ~/.ssh/config allows me to directly connect to either of the internal hosts from outside:
Host foo bar
HostName %h.internal.sub.domain.net
ProxyJump internal.sub.domain.net
This is fine for a subdomain with only a couple of hosts, but it quickly becomes rather unmaintainable if the number of hosts is large and/or changes occasionally. But wildcards may come to the rescue:
Host *.internal.sub.domain.net
ProxyJump internal.sub.domain.net
This avoids the maintenance issue, but forces you to always specify the fully qualified hostname to connect, which is rather tedious. From looking at the ssh_config man page, the CanonicalizeHostname and CanonicalDomains options seem to fix that:
CanonicalizeHostname always
CanonicalDomains internal.sub.domain.net
Host *.internal.sub.domain.net
ProxyJump internal.sub.domain.net
But this only works if the name lookup for the host to be connected to succeeds. As these hosts are internal by definition, it is no surprise that name resolution fails.
A not really helpful but very illustrative hack is to fake successful name resolutions by just adding all the internal hosts as aliases for e.g. 127.0.0.1 to /etc/hosts, i.e. adding the following line to /etc/hosts:
127.0.0.1 foo.internal.sub.domain.net bar.internal.sub.domain.net
With that line in place, the last ~/.ssh/config works like a charm. But apart from the fact that this would be quite a hack, it only shifts the maintenance issue from ~/.ssh/config to /etc/hosts.
As the described scenario should not be so uncommon, is there a way to make it work? To phrase it in one sentence again:
I want to be able to ssh to all internal hosts that live in internal.sub.domain.net (i.e. that are only accessible through the internal.sub.domain.net jumphost) without having to list each of these hosts somewhere, as they may frequently be added or removed from the internal domain, and without being forced to always type their fully qualified hostnames.
If you are invoking ssh from a shell, you could define a short variable for the internal domain and append that to the relevant hostnames:
e.g. in your ~/.bashrc or similar:
i=".internal.sub.domain.net"
Then, on the command line:
ssh foo$i
ssh bar$i
At least until a better solution comes along. It's not perfect but it's only 2 extra characters on the command line.
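The same idea can also be wrapped in a small shell function (a sketch; the name sshi and the domain suffix are my own choices), so the suffix never has to be typed at all:

```shell
# Wrapper: "sshi foo" becomes "ssh foo.internal.sub.domain.net".
# Any further arguments are passed through to ssh unchanged.
sshi() {
  host="$1"; shift
  ssh "${host}.internal.sub.domain.net" "$@"
}
```

Usage: `sshi foo uptime`.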
Update: A better solution has come along.
A better solution, by Peter Stuge on the openssh mailing list, is to have something like this in your config file:
Host *.i
ProxyCommand ssh -W %hnternal.sub.domain.net:%p jumphost
This enables the use of abbreviated hostnames like foo.i (rather than foo$i in my first answer). Being able to use . rather than $ is better because it doesn't require the use of the shift key.
However, this is limited to a single remote jumphost whose internal domain starts with i. I think that internal domains that start with internal. are common. If you need to connect to multiple such internal domains, the above needs a slight (but very ugly) tweak. I apologize in advance if anyone's eyes start to bleed.
For example, if you need to connect to these internal domains:
internal.a.com
internal.b.net
Then the above needs to be changed to something like this:
Host *.a
ProxyCommand sh -c "H=%h; exec sh -c \"exec ssh -W \${H%%.a}.internal.a.com:%p jumphosta\""
Host *.b
ProxyCommand sh -c "H=%h; exec sh -c \"exec ssh -W \${H%%.b}.internal.b.net:%p jumphostb\""
Believe me, I wish it could be simpler but that's the shortest thing that I could get to work (without using a separate shell script or environment variables).
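To see why the `${H%%.a}` dance works, here is the parameter expansion on its own (the hostname is a sample value):

```shell
# %h would expand to whatever abbreviated host the user typed, e.g. "foo.a".
H=foo.a
# ${H%%.a} strips the ".a" suffix, leaving "foo"; the real internal
# domain is then appended to form the full target hostname.
target="${H%%.a}.internal.a.com"
echo "$target"
```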
If a username is supplied on the command line, it only applies to the jumped-to host. Your local username will be used by default for the jumphost.
If you need to use a username for the jumphost that isn't the same as your local username, add it in the ProxyCommand (e.g. jumpuser@jumphosta).

openshift create-app asks for password

Hi all. When I try 'rhc create-app demo python-2.7', I hit an issue: it is not able to check out the git repo. The system asks for the password of the cartridge or something, but in fact I have uploaded the default key from the OpenShift console.
here is what I have done:
install openshift from puppet
oo-diagnostics check pass
create app
Then I removed the default files in /root/.ssh, removed the key file from the OpenShift console, recreated the SSH key, and ran rhc setup again to upload the key. Then I created the app again, but it failed again.
In the Broker virtual machine, while running oo-register-dns -h node -d domainX.example.com XXX.XXX.XXX.XXX -k /var/named/domainX.example.com.key,
the address XXX.XXX.XXX.XXX should be your Node virtual machine's IP address (I think most probably you have used the Broker's IP address). Change it accordingly and run the command again.
It will work.
Can you try with a different (main) domain name instead of example.com? I think that might be the issue, as per the Wikipedia explanation:
Example.com, example.net, example.org, and example.edu are second-level domain names reserved for documentation purposes and examples of the use of domain names.
Even if you've masked it with your hosts file or local DNS, it still might be confusing OpenShift's DNS.

Apache Module - block an IP address

Can I block an IP address in an Apache module?
I searched for it, but it comes up with no answer. Is that right?
You can block it using iptables with a simple rule.
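For example (a sketch; 203.0.113.5 is a placeholder address, and these firewall commands must be run as root):

```shell
# Drop all traffic from a single address:
iptables -A INPUT -s 203.0.113.5 -j DROP
# Remove the rule again later:
iptables -D INPUT -s 203.0.113.5 -j DROP
```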
But if you are worried about some IP address causing DoS attacks or the like, you can add it to your blacklist automatically by using the DDoS Deflate program. I describe it below for your convenience.
This script helps protect our servers from DoS attacks, using the DDoS Deflate method.
It uses iptables to block DoS traffic, and creates a cron job that runs every minute to execute the script; the script then monitors for threats and stops them using iptables.
1. Install the deflate script
wget http://www.mattzuba.com/wordpress/wp-content/uploads/2011/02/ddos_deflate-0.7.tar.gz
tar -xf ddos_deflate-0.7.tar.gz
cd ddos_deflate-0.7
sudo ./install.sh
2. The script is now installed in the directory /usr/local/ddos
cd /usr/local/ddos
you will see a file called ddos.conf, edit the file
vi ddos.conf
Here you will see various options; look for ##### How many connections define a bad IP? Indicate that below.
NO_OF_CONNECTIONS=xyz
Change the number of connections to 200:
NO_OF_CONNECTIONS=200
Here 200 is the total number of connections allowed from one IP address in one second. The number can be increased or decreased as needed. Connections over the defined number will be considered a DoS attack.
3. Setting the IP address block time
cd /usr/local/ddos
you will see a file called ddos.conf, edit the file
vi ddos.conf
and look for ##### Number of seconds the banned ip should remain in blacklist.
BAN_PERIOD=60
Change the ban period to:
BAN_PERIOD=600
Here 600 is the number of seconds the IP address will be blocked for; after that it will be released. This number can be changed as needed.
4. Allow selected IP addresses to access the system without getting banned or restricted.
cd /usr/local/ddos
you will see a file called ignore.ip.list, edit the file
vi ignore.ip.list
Enter the IP addresses that should never be blocked:
127.0.0.1 ----> the localhost IP address
92.235.247.186 ----> the public IP address of the server (change as required)
Hope this helps a little.

How can I get my home network's IP address from a shell script?

I have an account on a server at school, and a home computer that I need to work with sometimes. I have exchanged keys, and now have only one problem. While my school account has a name associated with it, "account_name@school", my home network does not. My plan is to have a script that, every hour, retrieves my home network's IP address, sshes into my school account, and updates the ssh config file there with my home network's IP address.
How can I retrieve my home computer's IP address from a shell script?
P.S. Is this a sensible plan?
You have two basic choices:
A dynamic DNS address (e.g. dyndns.org). Setting that up is outside the scope of Stack Overflow, but it's probably The Right Thing™ for you.
Use a tool like http://checkip.dyndns.org/ to report your external IP address, then parse the result.
lynx -dump http://checkip.dyndns.org/ | awk '{print $NF}'
Either way, you'll need to configure your router to allow inbound access for SSH, so further information needs to be asked on Super User or Unix & Linux.
My http://ipcalf.com server supports header-based or explicit content negotiation, which is a fancy way of saying that if you curl -H Accept:text/plain ipcalf.com or curl -s http://ipcalf.com/?format=text you should get your public IP address in raw form.
("Full" documentation and source code are on github.)
You can make a request to http://automation.whatismyip.com/n09230945.asp to fetch your public IP from WIMI. This command:
ssh account_name@school "echo -n '$(wget -q -O - http://automation.whatismyip.com/n09230945.asp)' > some_remote_file"
will write your home public IP to a file called "some_remote_file" in your home folder at school (the command substitution runs locally, before ssh connects). Set it up with cron and you will have access to your home router's public IP from your school.
If you've already set up some sort of port forwarding on your router so that you can ssh to your home computer with the routers IP, you should be good to go :)
I figured out a better way to get the IP address than using an external service. I am SSHing into the server; therefore, the server must know who is SSHing into it, so I can use a tool like who or pinky to find out.
Here is the command to get the IP address of the connecting computer:
echo $SSH_CLIENT | awk '{ print $1 }'
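SSH_CLIENT holds three space-separated fields: the client IP, the client port, and the server port. A quick demonstration with a sample value (the address is made up):

```shell
# Simulate what the server-side SSH session provides:
SSH_CLIENT="203.0.113.7 54321 22"
# The first field is the connecting machine's IP address:
client_ip=$(echo "$SSH_CLIENT" | awk '{ print $1 }')
echo "$client_ip"
```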

gitolite with non-default port

To clone a repository managed by gitolite one usually uses following syntax
git clone gitolite@server:repository
This tells the SSH client to connect to port 22 of server, using gitolite as the user name. When I try it with a port number:
git clone gitolite@server:22:repository
Git complains that the repository 22:repository is not available. What syntax should be used if the SSH server uses a different port?
The “SCP style” Git URL syntax (user#server:path) does not support including a port. To include a port, you must use an ssh:// “Git URL”. For example:
ssh://gitolite#server:2222/repository
Note: As compared to gitolite#server:repository, this presents a slightly different repository path to the remote end (the absolute /repository instead of the relative path repository); Gitolite accepts both types of paths, other systems may vary.
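In practice that looks like the following (port 2222 and the repository name are assumptions; substitute your own). For an existing checkout you can simply re-point the origin remote at the ssh:// form instead of re-cloning:

```shell
# Fresh clone over the non-default port:
#   git clone ssh://gitolite@server:2222/repository
# Re-pointing an existing clone, demonstrated here in a scratch repo:
scratch=$(mktemp -d)
cd "$scratch"
git init -q .
git remote add origin ssh://gitolite@server:2222/repository
git remote get-url origin
```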
An alternative is to use a Host entry in your ~/.ssh/config (see your ssh_config(5) manpage). With such an entry, you can create an “SSH host nickname” that incorporates the server name/address, the remote user name, and the non-default port number (as well as any other SSH options you might like):
Host gitolite
User gitolite
HostName server
Port 2222
Then you can use very simple Git URLs like gitolite:repository.
If you have to document (and/or configure) this for multiple people, I would go with ssh:// URLs, since there is no extra configuration involved.
If this is just for you (especially if you might end up accessing multiple repositories from the same server), it might be nice to have the SSH host nickname to save some typing.
It is explained in great detail here: https://github.com/sitaramc/gitolite/blob/pu/doc/ssh-troubleshooting.mkd#_appendix_4_host_aliases
Using a "Host" stanza in ~/.ssh/config lets you nicely encapsulate all this within ssh and give it a short, easy-to-remember name. Example:
host gitolite
user git
hostname a.long.server.name.or.annoying.IP.address
port 22
identityfile ~/.ssh/id_rsa
Now you can simply use the one word gitolite (the host alias we defined here) and ssh will infer all the details defined under it -- just say ssh gitolite or git clone gitolite:reponame and things will work.