AWS | Traffic mirroring using iptables

I am trying to achieve network traffic mirroring with iptables. In my scenario, I am mirroring the traffic on Server 1 to Server 2's IP address. The configuration appears to be straightforward:
Server 1
echo "1" > /proc/sys/net/ipv4/ip_forward
iptables -t mangle -I POSTROUTING -j TEE --gateway 172.31.34.228   # Server 2's IP
iptables -t mangle -I PREROUTING -j TEE --gateway 172.31.34.228
But when I run tcpdump on Server 2's interface (172.31.34.228), it shows no mirrored traffic.
Both servers are on AWS in the same subnet; the OS is the latest Amazon Linux 2 AMI.
[root@ip-172-31-37-29 ~]# iptables --version
iptables v1.4.21
Kernel Modules
[root@ip-172-31-37-29 ~]# ls /lib/modules/`uname -r`/kernel/net/netfilter/
ipset nf_log_common.ko nf_tables_inet.ko nft_objref.ko xt_cluster.ko xt_hashlimit.ko xt_nat.ko xt_sctp.ko
ipvs nf_log_netdev.ko nf_tables.ko nft_queue.ko xt_comment.ko xt_helper.ko xt_NETMAP.ko xt_SECMARK.ko
nf_conntrack_amanda.ko nf_nat_amanda.ko nf_tables_netdev.ko nft_redir.ko xt_connbytes.ko xt_hl.ko xt_nfacct.ko xt_set.ko
nf_conntrack_broadcast.ko nf_nat_ftp.ko nft_compat.ko nft_reject_inet.ko xt_connlabel.ko xt_HL.ko xt_NFLOG.ko xt_socket.ko
nf_conntrack_ftp.ko nf_nat_irc.ko nft_counter.ko nft_reject.ko xt_connlimit.ko xt_HMARK.ko xt_NFQUEUE.ko xt_state.ko
nf_conntrack_h323.ko nf_nat.ko nft_ct.ko nft_rt.ko xt_connmark.ko xt_IDLETIMER.ko xt_osf.ko xt_statistic.ko
nf_conntrack_irc.ko nf_nat_redirect.ko nft_exthdr.ko nft_set_bitmap.ko xt_CONNSECMARK.ko xt_ipcomp.ko xt_owner.ko xt_string.ko
nf_conntrack.ko nf_nat_sip.ko nft_fib_inet.ko nft_set_hash.ko xt_conntrack.ko xt_iprange.ko xt_physdev.ko xt_tcpmss.ko
nf_conntrack_netbios_ns.ko nf_nat_tftp.ko nft_fib.ko nft_set_rbtree.ko xt_cpu.ko xt_ipvs.ko xt_pkttype.ko xt_TCPMSS.ko
nf_conntrack_netlink.ko nfnetlink_acct.ko nft_fib_netdev.ko x_tables.ko xt_CT.ko xt_l2tp.ko xt_policy.ko xt_TCPOPTSTRIP.ko
nf_conntrack_pptp.ko nfnetlink_cthelper.ko nft_hash.ko xt_addrtype.ko xt_dccp.ko xt_length.ko xt_quota.ko xt_tcpudp.ko
nf_conntrack_proto_gre.ko nfnetlink_cttimeout.ko nft_limit.ko xt_AUDIT.ko xt_devgroup.ko xt_limit.ko xt_rateest.ko xt_TEE.ko
nf_conntrack_sane.ko nfnetlink.ko nft_log.ko xt_bpf.ko xt_dscp.ko xt_LOG.ko xt_RATEEST.ko xt_time.ko
nf_conntrack_sip.ko nfnetlink_log.ko nft_masq.ko xt_cgroup.ko xt_DSCP.ko xt_mac.ko xt_realm.ko xt_TPROXY.ko
nf_conntrack_snmp.ko nfnetlink_queue.ko nft_meta.ko xt_CHECKSUM.ko xt_ecn.ko xt_mark.ko xt_recent.ko xt_TRACE.ko
nf_conntrack_tftp.ko nf_synproxy_core.ko nft_nat.ko xt_CLASSIFY.ko xt_esp.ko xt_multiport.ko xt_REDIRECT.ko xt_u32.ko
[root@ip-172-31-37-29 ~]# rpm -ql kernel | grep xt_TEE
/lib/modules/4.14.62-70.117.amzn2.x86_64/kernel/net/netfilter/xt_TEE.ko
/lib/modules/4.14.70-72.55.amzn2.x86_64/kernel/net/netfilter/xt_TEE.ko
I am really stuck, and any help will be really appreciated.

According to an SO thread, the TEE target must be in the same network as your computer. On EC2 there is likely an extra obstacle: the VPC fabric forwards packets based on their IP headers rather than the Ethernet destination MAC, so TEE'd copies (which still carry the original destination IP) may never be delivered to Server 2. https://code.google.com/archive/p/port-mirroring/ is hinted at as an alternative way of achieving this, but it seems to be OpenWrt-specific.

Related

neutron-linuxbridge-agent oslo_service.service amqp.exceptions.InternalError: Connection.open: (541) INTERNAL_ERROR

The OpenStack Train neutron-linuxbridge-agent component's log shows this error:
2022-03-17 14:38:36.727 6 ERROR oslo_service.service File "/var/lib/kolla/venv/lib/python3.6/site-packages/amqp/connection.py", line 648, in _on_close
2022-03-17 14:38:36.727 6 ERROR oslo_service.service (class_id, method_id), ConnectionError)
2022-03-17 14:38:36.727 6 ERROR oslo_service.service amqp.exceptions.InternalError: Connection.open: (541) INTERNAL_ERROR - access to vhost '/' refused for user 'openstack': vhost '/' is down
2022-03-17 14:38:36.727 6 ERROR oslo_service.service
2022-03-17 14:38:36.729 6 INFO neutron.plugins.ml2.drivers.agent._common_agent [-] Stopping Linux bridge agent agent.
docker logs neutron_linuxbridge_agent shows:
++ /usr/bin/update-alternatives --query iptables
update-alternatives: error: no alternatives for iptables
++ . /usr/local/bin/kolla_neutron_extend_start
+ echo 'Running command: '\''neutron-linuxbridge-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini'\'''
+ exec neutron-linuxbridge-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini
Running command: 'neutron-linuxbridge-agent --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini'
openstack network agent list shows State UP for all agents, but Alive is XXX.
What's the problem with my cluster, and how can I fix it? Thanks a lot.
The key server is RabbitMQ (referenced by amqp.exceptions.InternalError), and rabbit@node-3.log shows:
2022-03-18 06:50:35.270 [error] <0.21119.0> Error on AMQP connection <0.21119.0> (1.1.1.2:12345 -> 1.1.1.3:55672 - neutron-linuxbridge-agent:7:11111111-1111-1111-1111-111111111111, vhost: 'none', user: 'openstack', state: opening), channel 0:
{handshake_error,opening,
{amqp_error,internal_error,
"access to vhost '/' refused for user 'openstack': vhost '/' is down",
'connection.open'}}
When I check and log in to the RabbitMQ management site (http://1.1.1.3:15672/), I get this error tip:
rabbitmq virtual host experienced an error on node and may be inaccessible
I solved it by:
1. Entering the rabbitmq container and removing (or moving out) the recovery.dets file in the directory /var/lib/rabbitmq/mnesia/rabbit@node-3/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L.
2. Restarting the rabbitmq container.
Because of:
In RabbitMQ versions starting with 3.7.0 all messages data is combined in the msg_stores/vhosts directory and stored in a subdirectory per vhost. Each vhost directory is named with a hash and contains a .vhost file with the vhost name, so a specific vhost's message set can be backed up separately.
In RabbitMQ versions prior to 3.7.0 messages are stored in several directories under the node data directory: queues, msg_store_persistent and msg_store_transient. Also there is a recovery.dets file which contains recovery metadata if the node was stopped gracefully.
My whole cluster was rebooted by accident; it was recovered by this method.
If you want to fix your problem easily, redeploy RabbitMQ with kolla-ansible:
kolla-ansible -i <INVENTORY> deploy -t rabbitmq -vvvv
In my experience, the easiest and lowest-cost way to fix RabbitMQ or oslo problems in OpenStack is to redeploy RabbitMQ rather than invest time in debugging.

Why won't chrony sync to GPS when serial data flows through socat?

I'm using gpsd to sync time to a GPS. When I connect my GPS to /dev/ttyUSB0, and tell gpsd to listen on that port, chrony is happy to use it as a time source.
gpsd -D 5 -N -n /dev/ttyUSB0
However, as soon as I try and pipe that data through socat (which is how it needs to work in our production system), chrony won't use it as a source. This is the case even though gpsd, cgps, and gpsmon all seem perfectly happy with the GPS data they are getting.
Here's my socat:
socat -d -d pty,rawer,echo=0,link=/tmp/ttyVSP0 /dev/ttyUSB0,b4800
(my gpsd command is the same as above but with /tmp/ttyVSP0 as the port to listen to in this case).
I'm using chronyc sources to confirm when GPS is a chrony source.
My refclock line in my /etc/chrony/chrony.conf looks like this:
refclock SHM 0 refid GPS
pty ports are prevented from talking to ntp (and thus, chrony) by an early return meant to prevent code from being executed during testing.
void ntpshm_link_activate(struct gps_device_t *session)
/* set up ntpshm storage for a session */
{
    /* don't talk to NTP when we're running inside the test harness */
    if (session->sourcetype == source_pty)
        return;

    if (session->sourcetype != source_pps) {
        /* allocate a shared-memory segment for "NMEA" time data */
        session->shm_clock = ntpshm_alloc(session->context);
        if (session->shm_clock == NULL) {
            gpsd_log(&session->context->errout, LOG_WARN,
                     "NTP: ntpshm_alloc() failed\n");
            return;
        }
    }
    /* ... */
}
Discovered thanks to this bug report

resolv.conf (generated) wrong order? (2 routers)

I have 2 routers in my network.
A) The one issued by my ISP (limited settings; I even had to ask to get port-forwarding settings), which is also my modem.
B) My own router (where I set my DHCP etc.)
Now the generated resolv.conf on Raspbian and Arch Linux lists:
domain local
nameserver <IP of A>
nameserver <IP of B>
As I understand it, this is the order it will try when resolving names, but here it should try my internal router B before trying to resolve via A.
PS: Both subnetmasks are 255.255.255.0
Router A has 192.168.0.1
Router B has 192.168.1.1
All devices are in the 192.168.1.### range.
PPS: Arch Linux is set up to use NetworkManager, not a manually configured dhcpcd.
NetworkManager may use dnsmasq for DHCP and to handle DNS lookups.
I noticed that dnsmasq reverses the order of the nameservers. Look at your logs. This shows up better in the log if we also set dnsmasq to query DNS servers in parallel:
#/etc/dnsmasq.conf
#all-servers
#/etc/dnsmasq.d/laptop.conf
all-servers
log-queries=extra
log-async=100
log-dhcp
#/etc/dnsmasq.d/servers.conf
server=66.187.76.168
server=162.248.241.94
server=165.227.22.116
/var/log/dnsmasq.log:
Mar 14 02:14:20 dnsmasq[3216]: 71700 127.0.0.1/38951 cached firefox.settings.services.mozilla.com is <CNAME>
Mar 14 02:14:20 dnsmasq[3216]: 71700 127.0.0.1/38951 forwarded firefox.settings.services.mozilla.com to 165.227.22.116
Mar 14 02:14:20 dnsmasq[3216]: 71700 127.0.0.1/38951 forwarded firefox.settings.services.mozilla.com to 162.248.241.94
Mar 14 02:14:20 dnsmasq[3216]: 71700 127.0.0.1/38951 forwarded firefox.settings.services.mozilla.com to 66.187.76.168
Note that the order of the calls is reversed in the log lines!
I got rid of systemd-resolved to rely on dnsmasq.
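If you instead want dnsmasq to query upstream servers in the order they are listed (rather than racing them or preferring the last known good server), dnsmasq has a strict-order option. A minimal fragment (the drop-in path is my assumption; any dnsmasq config file works):

```
# /etc/dnsmasq.d/order.conf  (path assumed)
# Query upstream servers strictly in the order they appear in resolv.conf,
# instead of the default / all-servers behaviour.
strict-order
```

Restart dnsmasq (or NetworkManager, if it spawns dnsmasq) after adding it.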

java.net.ConnectException: JBAS012144: Could not connect to remote://nnn.nn.nn.88:9999. The connection timed out

I am trying to run a JBoss instance in domain mode. While doing that, I get the following issue:
[Host Controller] 12:45:56,535 WARN [org.jboss.as.host.controller] (Controller Boot Thread) JBAS010900: Could not connect to remote domain controller at remote://nnn.nn.nn.88:9999 -- java.net.ConnectException: JBAS012144: Could not connect to remote://nnn.nn.nn.88:9999. The connection timed out
I ran two JBoss instances in domain mode after configuring them as follows.
First JBoss instance->
./domain.sh -b nnn.nn.nn.88 -Djboss.bind.address.management=nnn.nn.nn.88
Second JBoss Instance ->
./domain.sh -b nnn.nn.nn.89 -Djboss.domain.master.address=nnn.nn.nn.88 --host-config=host-slave.xml
nnn.nn.nn.88 host.xml configuration is as follows...
<domain-controller>
<local/>
</domain-controller>
nnn.nn.nn.89 host-slave.xml configuration is as follows...
<domain-controller>
<remote host="${jboss.domain.master.address}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
I am able to telnet to port 9999 on host nnn.nn.nn.88 from .89, as I configured it by removing the loopback IP for the public and management interfaces. Or is the implication that <domain-controller> contains <local/>?
Please help me solve this issue. The JDK version is JDK 7 Update 80, on EAP 6.3.
On the HC, host.xml (or, when using --host-config=host-slave.xml, that particular XML file) has to reference the DC under its <domain-controller> node.
jboss.domain.master.address should be the domain controller address, nnn.nn.nn.88:
<domain-controller>
<remote host="${jboss.domain.master.address:nnn.nn.nn.88}" port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm"/>
</domain-controller>
As per the solution article from Red Hat:
https://access.redhat.com/solutions/218053#
I ran the following commands with the same configuration I had when posting this question, and it succeeded.
DC->
./domain.sh -b my-host-ip1 -bmanagement my-host-ip1
HC->
./domain.sh -Djboss.domain.master.address=my-host-ip1 -b my-host-ip2 -bmanagement my-host-ip2
Does this way of configuring give clustering capability to the DC and HCs? I raised the same question with Red Hat on the same solution article; I hope the answer is yes.
https://access.redhat.com/solutions/218053#comment-975683

Sony Camera Remote API, How can I show/use liveview-stream data with VB.net (use of Sony QX1)

I'm programming a small software for the remote use of a Sony camera (I use QX1 but the model should be irrelevant) in VB.net. I could make pictures by sending the JSON-commands to the camera and also could start the liveview-stream with the method "startLiveview" wrapped in a JSON-command. In return I get the address to download the livestream, like http://192.168.122.1:8080/liveview/liveviewstream (wrapped in a JSON-answer).
According to the Sony Camera Remote API reference, this is a stream that contains some header data plus the individual JPEG data, but it does not seem to be an MJPEG stream. I can paste the livestream link into my browser and it starts an endless download, but I could not show the stream with an MJPEG player like VLC.
My question is: how can I filter out the JPEG data with VB.net, or how can I show the livestream?
A similar question was posted earlier but received no reply, so I'm trying again.
This is my way: I use ffserver to make the video streamable.
This is my ffserver config (server.conf):
Port 8090
BindAddress 0.0.0.0
MaxHTTPConnections 2000
MaxClients 1000
MaxBandwidth 10000
CustomLog -
<Feed feed1.ffm>
File /tmp/feed1.ffm
FileMaxSize 1G
ACL allow 127.0.0.1
</Feed>
<Stream cam.webm>
Feed feed1.ffm
Format webm
VideoCodec libvpx
VideoSize vga
VideoFrameRate 25
AVOptionVideo flags +global_header
StartSendOnKey
NoAudio
preroll 5
VideoBitRate 400
</Stream>
<Stream status.html>
Format status
ACL allow localhost
ACL allow 192.168.0.0 192.168.255.255
</Stream>
And then I run the ffserver with that config:
ffserver -f server.conf
Then encode the video from the Sony liveview and broadcast it via ffserver:
ffmpeg -i http://192.168.122.1:8080/liveview/liveviewstream -vcodec libvpx -fflags nobuffer -an http://127.0.0.1:8090/feed1.ffm
After that you can stream liveview from the address
localhost:8090/cam.webm
(I use my laptop with linux in a terminal)
Install GSTREAMER:
sudo apt-get install libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav gstreamer1.0-doc gstreamer1.0-tools gstreamer1.0-x gstreamer1.0-alsa gstreamer1.0-gl gstreamer1.0-gtk3 gstreamer1.0-qt5 gstreamer1.0-pulseaudio
Set your camera's parameters to enable control via smartphone; for example, the SSID of my camera on my network is DIRECT-dpC3:DSC-RX100M5A.
Use Wi-Fi to connect your computer directly to your camera.
Tell your camera to begin liveView with this command:
curl http://192.168.122.1:10000/sony/camera -X POST -H 'Content-Type: application/json' --data '{ "method": "startLiveview", "params": [], "id": 1, "version": "1.0"}'
Note that the camera's response is a URL; mine is:
{"id":1,"result":["http://192.168.122.1:60152/liveviewstream?%211234%21%2a%3a%2a%3aimage%2fjpeg%3a%2a%21%21%21%21%21"]}
Tell gstreamer to use this URL (quoted, so the shell does not interpret the ? and % characters):
gst-launch-1.0 souphttpsrc location="http://192.168.122.1:60152/liveviewstream?%211234%21%2a%3a%2a%3aimage%2fjpeg%3a%2a%21%21%21%21%21" ! jpegdec ! autovideosink
Enjoy ;-)
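The curl call above can also be scripted. A minimal Python sketch of the same JSON-RPC request (the endpoint and method name are taken from this answer; other methods and parameters should be checked against the API reference for your model):

```python
import json
import urllib.request

# Endpoint taken from the answer above; adjust for your camera model.
ENDPOINT = "http://192.168.122.1:10000/sony/camera"

def build_request(method, params=None, req_id=1):
    """Build a Camera Remote API JSON-RPC request body."""
    return {"method": method, "params": params or [], "id": req_id, "version": "1.0"}

def call(method, params=None):
    """POST a JSON-RPC call to the camera and return the parsed reply."""
    body = json.dumps(build_request(method, params)).encode()
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())

# Example (requires a camera on the network):
# liveview_url = call("startLiveview")["result"][0]
```

The same request body works for the other API methods (actTakePicture, stopLiveview, ...); only "method" and "params" change.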
I tried using ffmpeg to process the stream, and succeeded in saving it as an FLV file.
I ran this command in a terminal (on UNIX) and successfully saved the file as FLV:
ffmpeg -i http://192.168.122.1:8080/liveview/liveviewstream -vcodec flv -qscale 1 -an output.flv
Maybe you can modify or optimize it as needed.
VLC works for me after adding .mjpg to the URL; try this, wait a second, and it should play: http://192.168.122.1:8080/liveview/liveviewstream.mjpg
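To answer the "filter out the jpeg-data" part directly: per my reading of the Camera Remote API reference, each liveview packet consists of an 8-byte common header (start byte 0xFF; payload type 0x01 = liveview image), a 128-byte payload header (bytes 4-6: JPEG data size, big-endian; byte 7: padding size), then the JPEG bytes plus padding. A sketch of the demultiplexing in Python (the VB.net translation is mechanical; verify the offsets against your camera's firmware):

```python
def read_exact(stream, n):
    """Read exactly n bytes from a binary stream, or raise EOFError."""
    buf = b""
    while len(buf) < n:
        chunk = stream.read(n - len(buf))
        if not chunk:
            raise EOFError("liveview stream closed")
        buf += chunk
    return buf

def next_jpeg(stream):
    """Return the next JPEG frame from a Sony liveview byte stream.

    Offsets follow my reading of the Camera Remote API reference:
    8-byte common header + 128-byte payload header + payload + padding.
    """
    while True:
        common = read_exact(stream, 8)             # [0]=0xFF, [1]=payload type
        header = read_exact(stream, 128)           # payload header
        size = int.from_bytes(header[4:7], "big")  # 3-byte payload data size
        padding = header[7]                        # 1-byte padding size
        data = read_exact(stream, size)
        read_exact(stream, padding)                # discard padding bytes
        if common[1] == 0x01:                      # keep only liveview images
            return data

# Usage (requires the camera; URL comes from startLiveview):
# import urllib.request
# with urllib.request.urlopen("http://192.168.122.1:8080/liveview/liveviewstream") as s:
#     open("frame.jpg", "wb").write(next_jpeg(s))
```

Each returned byte string is a complete JPEG, so it can be fed straight to an image decoder for display.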