Applying ipset in LEDE - iptables

I have been having trouble getting an ipset populated based on host addresses. Here are the steps I followed recently, with little success:
1. Added the list of hosts to /etc/dnsmasq.conf like this:
ipset=/somehost.com/myipset
2. Created the ipset using
ipset -N myipset hash:ip (also tried with hash:net)
3. Maybe the ipsets are getting created; I do not know how to validate this. However, when I run
ipset list myipset
Name: myipset
Type: hash:ip
Revision: 4
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 88
References: 0
Members:
Members being blank and References being '0' seem to indicate that no entries are being added to the set.
What I have done already:
Saved the ipset creation in a startup script to ensure the ipset is created on boot-up
Tried restarting dnsmasq followed by the firewall
Tried rebooting the router
Additionally installed the dnsmasq-full and ipset packages in LEDE
I do remember the References count going up on LEDE v17.01.3; now I use 17.01.4. I do not recollect what I did differently to make that count reflect correctly.

Indeed, the default dnsmasq package on LEDE doesn't provide the ipset extension. You have to use the dnsmasq-full package instead.
You also have to create your ipset before a query is made, ideally before dnsmasq is started:
# ipset -N myipset hash:ip
dnsmasq will fill myipset when you make a DNS query for facebook.com (or one of its subdomains). To enable that, add this line to your /etc/dnsmasq.conf:
ipset=/.facebook.com/myipset
You need to restart dnsmasq at this point; then this simple command is enough to fill your ipset:
# ping facebook.com
on either the router (provided that you keep 127.0.0.1 as a nameserver in the router's /etc/resolv.conf) or on a client of the router (one that uses the router as its DNS server, of course).
Alternatively, you can run this command on the router; it will show you the address(es) returned by dnsmasq:
# nslookup facebook.com 127.0.0.1
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: facebook.com
Address 1: 157.240.1.35
Address 2: 2a03:2880:f129:83:face:b00c::25de
After that, dnsmasq will have filled your ipset:
# ipset -L myipset
Name: myipset
Type: hash:ip
Revision: 4
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 100
References: 2
Members:
157.240.1.35
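The References count in this output reflects how many firewall rules use the set. As a hedged illustration (this rule is my assumption, not part of the original answer), referencing the set from iptables looks roughly like this, which ties the ipset back to the iptables half of the title:
# Drop forwarded traffic whose destination is any address dnsmasq
# has collected in myipset; each such rule bumps References by one.
iptables -I FORWARD -m set --match-set myipset dst -j DROP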

Related

Mapping hostnames in --hosts (-s) to addresses

Guest: VirtualBox (Linux Mint with 2 network adapters)
When I try to create Vertica's DB like this:
echo "NETWORKING=yes" >> /etc/sysconfig/network
export SHORT_HOSTNAME=$(hostname -s)
expect install_image/vertica.expect
I get this error:
Mapping hostnames in --hosts (-s) to addresses...
hplaptop resolved to multiple (2) addresses: (IPv4...) 10.0.2.15, 192.11.12.102 (IPv6...) <none>
Error: Unable to resolve 'hplaptop'
Installation FAILED with errors.
Try:
echo "NETWORKING=yes" >> /etc/sysconfig/network
export SHORT_HOSTNAME=10.0.2.15 # I assume this is the internal IP address
expect install_image/vertica.expect
If one hostname resolves to two IP addresses, you have to expect that behaviour. And IP addresses in a Vertica cluster config are less complicated to work with than DNS names or host names anyway ...
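Alternatively, if the installer insists on a hostname, a workaround I'd sketch (my own assumption, not from the original answer) is to pin hplaptop to exactly one address in /etc/hosts before running the installer:
# Force 'hplaptop' to resolve to a single address; adjust the IP to
# whichever interface the Vertica cluster should communicate over.
echo "10.0.2.15 hplaptop" | sudo tee -a /etc/hosts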

How do I SSH to molecule instance without molecule login

I'm using molecule and vagrant to deploy a CentOS 7 instance. For various reasons, I need to access the molecule instance with a plain ssh command instead of molecule login. The SSH connection details will then be pasted into one of my VS Code extensions.
Molecule.yml
---
dependency:
  name: gilt
driver:
  name: vagrant
  provider:
    name: virtualbox
lint:
  name: yamllint
platforms:
  - name: openresty-instance
    box: centos/7
    instance_raw_config_args:
      - "ssh.insert_key = false"
      - "vm.network 'forwarded_port', guest: 22, host: 22"
      - "vm.network 'forwarded_port', guest: 80, host: 8080"
    interfaces:
      - auto_config: true
        network_name: private_network
        ip: '192.168.33.111'
provisioner:
  name: ansible
  log: true
  lint:
    name: ansible-lint
verifier:
  name: testinfra
  lint:
    name: flake8
The IP above lets me access port 80 from outside Vagrant.
But running ssh against the molecule instance's IP is not working.
Error
###########################################################
#    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:wVk4Da5pWWNHLiypvEKAJuwzG/2FLOMgwPkrO4oFBZQ.
Please contact your system administrator.
Add correct host key in /Users/abel/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/abel/.ssh/known_hosts:32
ECDSA host key for 192.168.33.111 has changed and you have requested strict checking.
Host key verification failed.
This message can mean exactly what it says, "that there is something nasty going on", if you see it in an environment with static servers.
But if you have, say, a testing environment where you create and destroy virtual machines as a daily procedure, it is a "normal" security warning.
It just means "hey, I know this guy, but his fingerprint doesn't match the one in my document archive". If this is intended (like I said, in a test environment), then just go into the "document archive", delete "this guy's fingerprint" and "take a new fingerprint of him".
So in your case ("/Users/abel/.ssh/known_hosts:32"), just open your known_hosts file and delete line 32.
Or use the command:
ssh-keygen -R 192.168.33.111 -f /Users/abel/.ssh/known_hosts
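As for connecting without molecule login in the first place: because your molecule.yml sets ssh.insert_key = false, Vagrant keeps using its well-known insecure keypair, so a plain ssh invocation along these lines should work (a sketch under that assumption; the key path is Vagrant's default):
# SSH directly to the instance with Vagrant's default insecure key,
# which stays in place because ssh.insert_key = false.
ssh -i ~/.vagrant.d/insecure_private_key vagrant@192.168.33.111
If these defaults don't match your setup, running vagrant ssh-config against the underlying Vagrant environment prints the exact host, port, user and key file to paste into your VS Code extension.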

cannot ssh to Google VM Engine

On the first day, when I created the instance, I was able to SSH with no problem, but since yesterday I just couldn't connect to my instance. When I checked the console I got something like this:
Nov 5 15:30:49 my-app kernel: [79738.555434] [UFW BLOCK] IN=ens4 OUT= MAC=42:01:0a:94:00:02:42:01:0a:94:00:01:08:00 SRC=71.15.27.115 DST=10.121.0.7 LEN=60 TOS=0x00 PREC=0x00 TTL=50 ID=38049 PROTO=TCP SPT=37344 DPT=22 WINDOW=60720 RES=0x00 SYN URGP=0
I figured it's a firewall issue, but my firewall rules seem okay (assuming I did not change anything since I first created the instance). I wonder what else could be the problem? Here's my firewall config:
Name                    Targets        IP ranges     Protocols / ports                 Action  Priority  Network
default-allow-http      http-server    0.0.0.0/0     tcp:80                            Allow   1000      default
default-allow-https     https-server   0.0.0.0/0     tcp:443                           Allow   1000      default
default-beego-http      http-server    0.0.0.0/0     tcp:8080                          Allow   1000      default
default-jenkins-app     http-server    0.0.0.0/0     tcp:8989                          Allow   1000      default
default-allow-icmp      Apply to all   0.0.0.0/0     icmp                              Allow   65534     default
default-allow-internal  Apply to all   10.128.0.0/9  tcp:0-65535, udp:0-65535, 1 more  Allow   65534     default
default-allow-rdp       Apply to all   0.0.0.0/0     tcp:3389                          Allow   65534     default
default-allow-ssh       Apply to all   0.0.0.0/0     tcp:22                            Allow   65534     default
Looking at the output you’ve provided following your attempt to SSH into your instance, it looks like you’re being blocked by UFW (Uncomplicated Firewall) which is installed/enabled on the actual instance, rather than the GCP project wide firewall rules you have set (these look okay).
In order to SSH into your VM you will need to open port 22 in UFW on the instance. There are a couple of possible methods that will allow you to do this.
Firstly, see Google Compute Engine - alternative log in to VM instance if ssh port is disabled, specifically the answer by Adrián, which explains how to open port 22 using a startup script. This method requires you to reboot your instance before the firewall rule is applied.
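A sketch of that startup-script approach (the exact script here is my assumption; startup-script is the standard GCE metadata key and runs on each boot):
# Attach a startup script that opens SSH in UFW, then restart the
# instance so the script runs.
gcloud compute instances add-metadata [INSTANCE_NAME] \
  --metadata startup-script='#! /bin/bash
/usr/sbin/ufw allow 22/tcp'
gcloud compute instances reset [INSTANCE_NAME]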
Another method which doesn’t require a reboot of the machine makes use of the Serial Console. However, in order to use this method a password for the VM is required. This method is therefore only possible if you previously set a password on the VM (before losing access).
To connect via the Serial Console the following metadata must be added, either to the instance you are trying to connect to, or to the entire project:
serial-port-enable=1
You can apply the metadata to a specific instance like so:
gcloud compute instances add-metadata [INSTANCE_NAME] \
--metadata=serial-port-enable=1
Or alternatively, to the entire project by running:
gcloud compute project-info add-metadata --metadata=serial-port-enable=1
After setting this metadata you can attempt to connect to the instance via the Serial Console by running the following command from the Cloud Shell:
gcloud compute connect-to-serial-port [INSTANCE_NAME]
When you have accessed the instance you will be able to manage the UFW rules. To open port 22 you can run:
sudo /usr/sbin/ufw allow 22/tcp
Once UFW port 22 is open, you should then be able to SSH into your instance from Cloud Shell or from the Console.
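To double-check before disconnecting from the serial console, UFW's own status listing will confirm the rule is active:
# List active UFW rules and confirm 22/tcp is now allowed.
sudo /usr/sbin/ufw status verbose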
There is some additional info about connecting to instances via the Serial Console here:
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console

"Could not resolve hostname" Ansible

I have created my first ansible playbook according to this tutorial, so it looks like this:
---
- hosts: hostb
  tasks:
    - name: Create file
      file:
        path: /tmp/yallo
        state: touch

- hosts: my_hosts
  sudo: yes
  tasks:
    - name: Create user
      user:
        name: mario
        shell: /bin/zsh
    - name: Install zlib
      yum:
        name: zlib
        state: latest
However, I cannot figure out which hosts I should put into my hosts file. I have something like this for now:
[my_hosts]
hostA
hostB
Obviously, it is not working and I get this:
ssh: Could not resolve hostname hostb: Name or service not known
So how should I change my hosts file? I am new to ansible so I would be very grateful for some help!
OK, so an Ansible inventory can be built from the following kinds of entries:
Hostname => IP address
Hostname => a DHCP or hosts-file hostname reference, e.g. localhost or cassie.local
Your own alias => hostname ansible_host=IP address
Group of hosts => [group_name]
That is the most basic structure you can use.
Example
# Grouping
[test-group]
# IP reference
192.168.1.3
# Local hosts file reference
localhost
# Create your own alias
test ansible_host=192.168.1.4
# Create your alias with port and user to login as
test-2 ansible_host=192.168.1.5 ansible_port=1234 ansible_user=ubuntu
A group only ends at the end of the file or when another group is defined. So if you wish to have hosts that don't belong to any group, make sure they're defined above the first group definition.
I.e. everything in the above example belongs to test-group, and if you run the following, it will execute on all of those hosts:
ansible test-group -u ubuntu -m ping
Ansible host names are case sensitive: the name in your inventory file is hostB while in your playbook it is hostb. I think that is why it's showing the "Name or service not known" error.
Change the host name in your playbook to hostB.
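Once the names match, a quick ad-hoc ping confirms the inventory resolves before you rerun the playbook (the inventory filename and the ansible_host mapping below are assumptions for illustration):
# If hostA/hostB aren't DNS-resolvable, map them to real IPs in the
# inventory first, e.g.  hostB ansible_host=192.168.1.11
# Then test connectivity against the corrected inventory file:
ansible hostB -i hosts -m ping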

Yosemite localhost resolver and dnsmasq fails offline

I set up my local dev environment similar to this post and everything was working fine, but recently I am unable to access my local dev domains when I am offline. When I am connected to the internet, it works fine. I'm wondering if something changed with how the resolver is used in Yosemite. It seems as if resolver rules are ignored when I'm offline.
dnsmasq.conf:
address=/.dev/127.0.0.1
listen-address=127.0.0.1
/etc/resolver/dev:
nameserver 127.0.0.1
When online:
ping -c 1 mydomain.dev
PING mydomain.dev (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.038 ms
--- mydomain.dev ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.038/0.038/0.038/0.000 ms
scutil --dns
resolver #1
search domain[0] : nomadix.com
nameserver[0] : 203.124.230.12
nameserver[1] : 202.54.157.36
if_index : 4 (en0)
flags : Request A records
reach : Reachable
resolver #2
domain : dev
nameserver[0] : 127.0.0.1
flags : Request A records, Request AAAA records
reach : Reachable,Local Address
When offline:
ping -c 1 mydomain.dev
ping: cannot resolve mydomain.dev: Unknown host
scutil --dns
No DNS configuration available
OS X Yosemite + resolver + dnsmasq offline === resolved!!
When you're offline, every interface on your computer except 127.0.0.1 goes down.
So if you want DNS resolution, your DNS server has to listen on 127.0.0.1. In my case I chose dnsmasq, because you don't have to be a sysadmin to make it work, and it does!
Following these simple steps I got it working:
1) brew install dnsmasq
2) cp /usr/local/opt/dnsmasq/dnsmasq.conf.example /usr/local/etc/dnsmasq.conf
if, like me, it's not properly installed in /usr/local/opt, you should be able to read something like this in the brew installation debug lines:
make install PREFIX=/usr/local/Cellar/dnsmasq/2.72
in that case run the following command:
ln -s /usr/local/Cellar/dnsmasq/2.72 /usr/local/opt/dnsmasq
and then go back to step 2
3) vi /usr/local/etc/dnsmasq.conf
and add your domains, like this for example:
address=/foo.dev/192.168.56.101
In that case every URL ending in foo.dev (http://www.foo.dev, http://foo.dev, http://what.ever.you.want.foo.dev, etc...) will be resolved to 192.168.56.101 (this is the kind of IP you get when using VirtualBox, 192.168.56.*).
4) sudo launchctl load /Library/LaunchDaemons/homebrew.mxcl.dnsmasq.plist
5) try it before putting it into the resolver
nslookup foo.dev 127.0.0.1
and expect this:
Server: 127.0.0.1
Address: 127.0.0.1#53
Name: foo.dev
Address: 192.168.56.101
6) mkdir -p /etc/resolver
vi /etc/resolver/dev
add these two lines:
nameserver 127.0.0.1
search_order 1
7) ping foo.dev or hit http://foo.dev or http://so.cool.foo.dev in your browser address bar, and you're good to go!
8) Be happy! You can work offline AGAIN!
I've been checking this question for months hoping for an answer. I believe this will help when 10.10.4 drops: http://arstechnica.com/apple/2015/05/new-os-x-beta-dumps-discoveryd-restores-mdnsresponder-to-fix-dns-bugs/
Apple is replacing discoveryd with mDNSResponder (like it used to be).
The problem is that when you are offline you should specify a resolver for the root domain '.'.
When we search for www.google.com,
a "." (the root domain) is automatically added at the end, like: www.google.com.
So all you have to do is:
Set all your network interfaces' DNS servers to 127.0.0.1:
networksetup -setdnsservers Ethernet 127.0.0.1
networksetup -setdnsservers Wi-Fi 127.0.0.1
...
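The "..." stands for whatever other network services your Mac has. As a convenience, a small loop (my own sketch, not part of the original answer) applies the same DNS server to every service networksetup knows about:
# Set 127.0.0.1 as the DNS server for every network service.
# The first output line of -listallnetworkservices is an explanatory
# note, so it is skipped; disabled services (marked *) may be rejected.
networksetup -listallnetworkservices | tail -n +2 | while read -r svc; do
  networksetup -setdnsservers "$svc" 127.0.0.1
done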
Create a file /etc/resolver/whatever:
nameserver 127.0.0.1
domain .
See this question for more details