Unknown connection on my SSH - ssh

I'd like to understand something on my SSH server.
When I type
netstat -an | grep -i ':22'
I get this output:
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 192.168.0.1:22 192.168.0.3:49236 ESTABLISHED
tcp 0 0 192.168.0.1:22 43.229.53.72:16866 ESTABLISHED
My local IP is actually 192.168.0.3 and my server is at 192.168.0.1
How should I interpret 43.229.53.72:16866? It appears to be a Chinese address.
who -a
gives me:
2015-09-09 02:05 62 id=si term=0 exit=0
system boot 2015-09-09 02:05
run-level 2 2015-09-09 02:05 last=S
2015-09-09 02:06 1890 id=l2 term=0 exit=0
LOGIN tty1 2015-09-09 02:06 2987 id=1
LOGIN tty5 2015-09-09 02:06 2991 id=5
LOGIN tty2 2015-09-09 02:06 2988 id=2
LOGIN tty4 2015-09-09 02:06 2990 id=4
LOGIN tty3 2015-09-09 02:06 2989 id=3
LOGIN ttyAMA0 2015-09-09 02:06 2993 id=T0
LOGIN tty6 2015-09-09 02:06 2992 id=6
pi + pts/0 2015-09-12 19:17 . 4965 (192.168.0.3)
pts/1 2015-09-12 18:59 3529 id=ts/1 term=0 exit=0
cat /var/log/auth.log | grep '43.229.53.72'
It appears that 43.229.53.72 tried many times to connect to my SSH server:
Sep 8 21:55:21 raspberrypi sshd[30282]: Failed password for root from 43.229.53.72 port 39483 ssh2
Sep 8 21:55:23 raspberrypi sshd[30282]: Failed password for root from 43.229.53.72 port 39483 ssh2
Sep 8 21:55:25 raspberrypi sshd[30282]: Failed password for root from 43.229.53.72 port 39483 ssh2
Sep 8 21:55:25 raspberrypi sshd[30282]: Received disconnect from 43.229.53.72: 11: [preauth]
Sep 8 21:55:25 raspberrypi sshd[30282]: PAM 2 more authentication failures; logname= uid=0 euid=0 tty=ssh ruser= rhost=43.229.53.72 user=root
Clearly this host is trying to brute-force access, and it looks like it succeeded.
How can I kick and blacklist this address, and how can I prevent this in the future?

First, note that an established TCP connection doesn't mean that the authentication succeeded.
On a public IP it is very common for bots to try to connect with common passwords and known usernames. You don't have to worry too much about this, but you can mitigate it in several ways (see the sketch after this list):
Install and set up fail2ban as proposed in the other answer
Disable password authentication -- bots don't try public keys or other methods
Disable root login -- most bots only try to log in as root
Move the service to a port other than 22 -- this is only obscurity, but it also eliminates most of these connections
Install a "port-knocking" tool such as fwknop that hides your service from unauthorized access


Error when using TLS server with pgBackRest: [113] No route to host

I'm trying to implement the TLS server feature available with pgBackRest to use a secure connection between the DB server and the repo server, replacing the previous SSH passwordless setup (which was working fine).
After following the online documentation, I'm having the following error when issuing the stanza-create command:
pgbackrest@pgb-repo$ pgbackrest --stanza=training --log-level-console=info stanza-create
2022-06-13 12:56:55.677 P00 INFO: stanza-create command begin 2.39: --buffer-size=16MB --exec-id=8994-62e5ecac --log-level-console=info --log-level-file=info --pg1-host=pg1-primary --pg1-host-ca-file=/etc/pgbackrest/cert/ca.crt --pg1-host-cert-file=/etc/pgbackrest/cert/pg1-primary.crt --pg1-host-key-file=/etc/pgbackrest/cert/pg1-primary.key --pg1-host-type=tls --pg1-host-user=postgres --pg1-path=/data/postgres/13/pg_data --repo1-path=/backup/pgbackrest --stanza=training
WARN: unable to check pg1: [HostConnectError] unable to connect to 'pg1-primary:8432': [113] No route to host
ERROR: [056]: unable to find primary cluster - cannot proceed
HINT: are all available clusters in recovery?
2022-06-13 12:58:55.835 P00 INFO: stanza-create command end: aborted with exception [056]
The PostgreSQL server is up and running on the DB host:
[postgres@pg1-primary ~]$ psql -c "SELECT pg_is_in_recovery();"
pg_is_in_recovery
-------------------
f
(1 row)
Question
Why am I getting this [113] No route to host error?
Configuration for each server:
pg1-primary
[postgres@pg1-primary ~]$ cat /etc/pgbackrest/pgbackrest.conf
[global]
repo1-path=/backup/pgbackrest
repo1-host-ca-file=/etc/pgbackrest/cert/ca.crt
repo1-host-cert-file=/etc/pgbackrest/cert/pgb-repo.crt
repo1-host-key-file=/etc/pgbackrest/cert/pgb-repo.key
repo1-host-type=tls
tls-server-address=*
tls-server-auth=pgb-repo=training
tls-server-ca-file=/etc/pgbackrest/cert/ca.crt
tls-server-cert-file=/etc/pgbackrest/cert/pg1-primary.crt
tls-server-key-file=/etc/pgbackrest/cert/pg1-primary.key
[postgres@pg1-primary ~]$ cat /etc/pgbackrest/conf.d/training.conf
[training]
pg1-path=/data/postgres/13/pg_data
pg1-socket-path=/tmp
repo1-host=pgb-repo
repo1-host-user=pgbackrest
[postgres@pg1-primary ~]$ ll /etc/pgbackrest/cert/
total 20
-rw-------. 1 postgres postgres 1090 Jun 13 12:12 ca.crt
-rw-------. 1 postgres postgres 977 Jun 13 12:12 pg1-primary.crt
-rw-------. 1 postgres postgres 1708 Jun 13 12:12 pg1-primary.key
-rw-------. 1 postgres postgres 977 Jun 13 12:23 pgb-repo.crt
-rw-------. 1 postgres postgres 1704 Jun 13 12:23 pgb-repo.key
pgb-repo
pgbackrest@pgb-repo$ cat /etc/pgbackrest/pgbackrest.conf
[global]
repo1-path=/backup/pgbackrest
tls-server-address=*
tls-server-auth=pg1-primary=training
tls-server-ca-file=/etc/pgbackrest/cert/ca.crt
tls-server-cert-file=/etc/pgbackrest/cert/pgb-repo.crt
tls-server-key-file=/etc/pgbackrest/cert/pgb-repo.key
pgbackrest@pgb-repo$ cat /etc/pgbackrest/conf.d/training.conf
[training]
pg1-host=pg1-primary
pg1-host-user=postgres
pg1-path=/data/postgres/13/pg_data
pg1-host-ca-file=/etc/pgbackrest/cert/ca.crt
pg1-host-cert-file=/etc/pgbackrest/cert/pg1-primary.crt
pg1-host-key-file=/etc/pgbackrest/cert/pg1-primary.key
pg1-host-type=tls
pgbackrest@pgb-repo$ ll /etc/pgbackrest/cert/
total 20
-rw-------. 1 pgbackrest pgbackrest 1090 Jun 13 12:27 ca.crt
-rw-------. 1 pgbackrest pgbackrest 977 Jun 13 12:27 pg1-primary.crt
-rw-------. 1 pgbackrest pgbackrest 1708 Jun 13 12:27 pg1-primary.key
-rw-------. 1 pgbackrest pgbackrest 977 Jun 13 12:27 pgb-repo.crt
-rw-------. 1 pgbackrest pgbackrest 1704 Jun 13 12:27 pgb-repo.key
The servers are reachable from one another:
[postgres@pg1-primary ~]$ ping pgb-repo
PING pgb-repo.xxxx.com (XXX.XX.XXX.117) 56(84) bytes of data.
64 bytes from pgb-repo.xxxx.com (XXX.XX.XXX.117): icmp_seq=1 ttl=64 time=0.365 ms
64 bytes from pgb-repo.xxxx.com (XXX.XX.XXX.117): icmp_seq=2 ttl=64 time=0.421 ms
pgbackrest@pgb-repo$ ping pg1-primary
PING pg1-primary.xxxx.com (XXX.XX.XXX.116) 56(84) bytes of data.
64 bytes from pg1-primary.xxxx.com (XXX.XX.XXX.116): icmp_seq=1 ttl=64 time=0.325 ms
64 bytes from pg1-primary.xxxx.com (XXX.XX.XXX.116): icmp_seq=2 ttl=64 time=0.298 ms
It turned out the issue was the firewall preventing access to the default TLS port (8432) used by pgBackRest.
[root@pgb-server ~]# firewall-cmd --zone=public --add-port=8432/tcp --permanent
[root@pgb-server ~]# firewall-cmd --reload
Once the port was open in the firewall, I could run a telnet command successfully (to test access) and, of course, my pgBackRest commands too.
[pgbackrest@pgb-server]$ telnet pg1-server 8432
Trying 172.XX.XXX.XXX...
Connected to pg1-server.
Escape character is '^]'.
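If the error persists after opening the port, it is also worth confirming that the pgBackRest TLS server is actually listening on 8432 on the database host (a minimal check; the ss invocation and the default port are assumptions based on the configuration above):
# On pg1-primary: start the TLS server defined by the tls-server-* options
# (in the foreground here for testing; normally it runs as a service)
pgbackrest server
# In another shell, verify that something is listening on the TLS port
ss -ltnp | grep 8432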

slapd starts when called directly but won't start from systemctl

Running Fedora 27 here. I'm attempting to run slapd from a fresh OpenLDAP install. When I try to start it with systemctl start openldap, the daemon fails to start. journalctl gives the following output:
Jun 19 00:30:25 slapd[1325]: @(#) $OpenLDAP: slapd 2.4.45 (Dec 6 2017 14:25:36) $
mockbuild@buildhw-08.phx2.fedoraproject.org:/builddir/build/BUILD/openldap-2.4.45/openldap-2.4.45/servers/slapd
Jun 19 00:30:25 slapd[1326]: mdb_db_open: database "dc=my-domain,dc=com" cannot be opened: Permission denied (13). Restore from backup!
Jun 19 00:30:25 slapd[1326]: backend_startup_one (type=mdb, suffix="dc=my-domain,dc=com"): bi_db_open failed! (13)
Jun 19 00:30:25 slapd[1326]: slapd stopped.
Jun 19 00:30:25 audit[1326]: AVC avc: denied { map } for pid=1326 comm="slapd" path="/var/lib/ldap/lock.mdb" dev="xvda1" ino=1716389 scontext=system_u:system_r:slapd_t:s0 tcontext=system_u:object_r:slapd_db_t:s0 tclass=file permissive=0
However, if I run the daemon directly with /usr/sbin/slapd -u ldap -d -1 -h "ldap:/// ldaps:/// ldapi:///", it starts with no issue.
My systemd unit file is below:
[Unit]
Description=OpenLDAP Server Daemon
After=syslog.target network-online.target
Documentation=man:slapd
Documentation=man:slapd-config
Documentation=man:slapd-hdb
Documentation=man:slapd-mdb
Documentation=file:///usr/share/doc/openldap-servers/guide.html
[Service]
Type=forking
ExecStartPre=/usr/libexec/openldap/check-config.sh
ExecStart=/usr/sbin/slapd -u ldap -h "ldap:/// ldaps:/// ldapi:///"
[Install]
WantedBy=multi-user.target
Alias=openldap.service
I've checked permissions on the ldap config directory and db directory and they seem correct for the ldap user:
[root@localhost operations]# ll /etc/openldap/slapd.d/cn\=config
total 24
drwxr-x---. 2 ldap ldap 4096 Jun 15 23:00 'cn=schema'
-rw-------. 1 ldap ldap 378 Jun 15 23:00 'cn=schema.ldif'
-rw-------. 1 ldap ldap 513 Jun 15 23:00 'olcDatabase={0}config.ldif'
-rw-------. 1 ldap ldap 412 Jun 15 23:00 'olcDatabase={-1}frontend.ldif'
-rw-------. 1 ldap ldap 562 Jun 15 23:00 'olcDatabase={1}monitor.ldif'
-rw-------. 1 ldap ldap 609 Jun 15 23:00 'olcDatabase={2}mdb.ldif'
[root@localhost operations]# ll /var/lib/ | grep ldap
drwx------. 2 ldap ldap 4096 Jun 19 00:30 ldap
[root@localhost operations]# ll /var/lib/ldap/
total 0
-rw-------. 1 ldap ldap 8192 Jun 19 00:30 lock.mdb
Any advice would be much appreciated.
It seems you're using back-mdb. Good.
Does your DB directory /var/lib/ldap/ really contain only the file lock.mdb?
There should also be a bigger file called data.mdb with the actual data.
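A quick way to check both points (paths taken from the question; the second command is only a suggestion, since the AVC denial above also involves the SELinux label on that file):
ls -l /var/lib/ldap/     # a populated MDB database has data.mdb next to lock.mdb
ls -lZ /var/lib/ldap/    # show the SELinux context the audit message refers to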

CumulocityLongPollingTransport - canceling the long poll request because of inactivity

I am using the Cumulocity Java agent (7.38.0), and it apparently lost communication with the server somehow and never recovered. The admin interface says:
LAST COMMUNICATION
November 22, 2016 2:25 AM
and the last Cumulocity record in the device syslog was:
Nov 22 01:25:47 localhost root: 01:25:47.166 [CumulocityLongPollingTransport-scheduler-2] WARN c.c.s.c.n.ConnectionHeartBeatWatcher - canceling the long poll request because of inactivity
(there was a 1-hour time difference due to a device configuration problem)
The process looks like it's running anyway:
ps -ef | grep -i c8y
root 1341 1257 0 Nov19 ? 00:00:00 /bin/sh ./c8y-agent.sh
root 1342 1341 0 Nov19 ? 00:00:00 /bin/sh ./c8y-agent.sh
root 1344 1342 0 Nov19 ? 00:25:39 java -cp cfg/*:lib/* -Dlogback.configurationFile=cfg/logback.xml c8y.lx.agent.Agent
Has anyone seen this problem before?
We had it once or twice when people were connecting to Cumulocity via a firewall or VPN. The result was exactly as you described: the polling gets stuck after some time, as if connections were blocked. In other words, I would suspect that it's a proxy that's blocking the reconnect.

CPanel/WHM Unknown License File Error

My issue is as the title suggests. However, I have tried the following suggestions from this page (https://documentation.cpanel.net/display/ALD/Installation+Guide+-+Troubleshoot+Your+Installation#InstallationGuide-TroubleshootYourInstallation-Licenseerrors) with no results.
1.) curl -L http://cpanel.net/showip.cgi (shows my IP address on the server for use with the verify.cpanel.net script); this can also be verified here... (http://verify.cpanel.net/index.cgi?ip=xxx.xxx.xxx.xx) (I don't like showing my IP, but trust me, it was verified.)
2.) /usr/local/cpanel/cpkeyclt
Updating cPanel license...Done. Update Failed!
Error message:
A License check appears to already be running.
Building global cache for cpanel...Done
So the above didn't work.
I then tried these commands.
3.) /usr/local/cpanel/etc/init/stopcpsrvd and then /usr/local/cpanel/scripts/upcp --sync to attempt to resynchronize.
This appears to run successfully, but I still get the same error. Attached below is the error message I get when I attempt to log in to WHM.
4.) I then tried running rdate -s rdate.cpanel.net, as suggested in some other posts, to make the times match up; but when I then run /usr/local/cpanel/cpkeyclt, it seems to time out and nothing ever happens.
Looking at the cPanel license log (/usr/local/cpanel/logs/license_log), I see this:
Tue Jul 26 16:23:30 2016: Trying server 208.74.125.22
Tue Jul 26 16:23:45 2016: Timed out while connecting to port 2089
Tue Jul 26 16:24:00 2016: Timed out while connecting to port 80
Tue Jul 26 16:24:15 2016: Timed out while connecting to port 110
Tue Jul 26 16:24:30 2016: Timed out while connecting to port 143
Tue Jul 26 16:24:45 2016: Timed out while connecting to port 25
Tue Jul 26 16:25:00 2016: Timed out while connecting to port 23
Tue Jul 26 16:25:15 2016: Timed out while connecting to port 993
Tue Jul 26 16:25:30 2016: Timed out while connecting to port 995
Tue Jul 26 16:30:14 2016: License Update Request
Tue Jul 26 16:30:14 2016: Using full manual DNS resolution
Tue Jul 26 16:30:14 2016: Trying server 208.74.121.85
Tue Jul 26 16:30:29 2016: Timed out while connecting to port 2089
Any help is appreciated!
Notes
Results of running /usr/local/cpanel/etc/init/stopcpsrvd
/usr/local/cpanel/etc/init/stopcpsrvd
Waiting for “cpsrvd” to stop ……Gracefully Terminating processes: cpsrvd: with pids 20842 and owner root.......waited 1 second(s) for 1 process(es) to terminate....Done
…finished.
Startup Log
Starting PID 20839: /usr/local/cpanel/libexec/cpsrvd-dormant
Results of running /usr/local/cpanel/scripts/upcp --sync (couldn't show everything because of text character limitations)
[2016-07-26 15:39:39 -0400] Detected cron=0 (Terminal detected)
----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------
=> Log opened from cPanel Update (upcp) - Slave (21620) at Tue Jul 26 15:41:53 2016
[2016-07-26 15:41:53 -0400] Maintenance completed successfully
[2016-07-26 15:41:54 -0400] 95% complete
[2016-07-26 15:41:54 -0400] Running Standardized hooks
[2016-07-26 15:41:54 -0400] 100% complete
[2016-07-26 15:41:54 -0400]
[2016-07-26 15:41:54 -0400] cPanel update completed
[2016-07-26 15:41:54 -0400] A log of this update is available at /var/cpanel/updatelogs/update.1469561979.log
[2016-07-26 15:41:54 -0400] Removing upcp pidfile
[2016-07-26 15:41:54 -0400]
[2016-07-26 15:41:54 -0400] Completed all updates
=> Log closed Tue Jul 26 15:41:54 2016
It turns out the answer was iptables. The rdate command was also necessary to fix the time, but iptables was blocking the connections.
To temporarily disable your firewall, do this:
iptables-save > /root/current.ipt
iptables -P INPUT ACCEPT; iptables -P OUTPUT ACCEPT
iptables -F INPUT; iptables -F OUTPUT
ping -c 3 google.com
iptables-restore < /root/current.ipt
rm -f /root/current.ipt
The first command saves a copy of your firewall settings.
The next two lines set the default policies to ACCEPT and flush the INPUT/OUTPUT chains, so all incoming and outgoing traffic is allowed.
Then test by pinging the IP address that was giving cPanel trouble in your log file.
If that works, the license update command will work.
Simply run:
/usr/local/cpanel/cpkeyclt
and you are good to go.
You can restore your rules using the last two commands if you want:
iptables-restore < /root/current.ipt
rm -f /root/current.ipt
Be warned that you will be blocked again, unless you fix the firewall.
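A minimal sketch of fixing the firewall permanently (the exact rule and the save mechanism are assumptions; adapt them to however your iptables rules are managed) is to allow outbound connections to the ports the license check tries, such as 2089, instead of flushing everything:
# Allow outbound connections to the cPanel license service port seen in the log,
# inserted at the top so it takes effect before any blocking rule
iptables -I OUTPUT -p tcp --dport 2089 -j ACCEPT
# Persist the rules (CentOS 6 style; use iptables-save or your distro's tool otherwise)
service iptables save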

HiveServer2: Thrift SASL related exception when using custom PasswdAuthenticationProvider

I've created a custom implementation of the PasswdAuthenticationProvider interface, based on OAuth2. I think the code is irrelevant to the problem I'm experiencing; nevertheless, it can be found here.
I've configured hive-site.xml with the following properties:
<property>
<name>hive.server2.authentication</name>
<value>CUSTOM</value>
</property>
<property>
<name>hive.server2.custom.authentication.class</name>
<value>com.telefonica.iot.cosmos.hive.authprovider.OAuth2AuthenticationProviderImpl</value>
</property>
Then I restarted the Hive service and successfully connected a JDBC-based remote client. This is an example of a successful run, found in /var/log/hive/hiveserver2.log:
2016-02-01 11:52:44,515 INFO [pool-5-thread-5]: authprovider.HttpClientFactory (HttpClientFactory.java:<init>(66)) - Setting max total connections (500)
2016-02-01 11:52:44,515 INFO [pool-5-thread-5]: authprovider.HttpClientFactory (HttpClientFactory.java:<init>(67)) - Setting default max connections per route (100)
2016-02-01 11:52:44,799 INFO [pool-5-thread-5]: authprovider.HttpClientFactory (OAuth2AuthenticationProviderImpl.java:Authenticate(65)) - Doing request: GET https://account.lab.fiware.org/user?access_token=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx HTTP/1.1
2016-02-01 11:52:44,800 INFO [pool-5-thread-5]: authprovider.HttpClientFactory (OAuth2AuthenticationProviderImpl.java:Authenticate(76)) - Response received: {"organizations": [], "displayName": "frb", "roles": [{"name": "provider", "id": "106"}], "app_id": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", "email": "frb@tid.es", "id": "frb"}
2016-02-01 11:52:44,801 INFO [pool-5-thread-5]: authprovider.HttpClientFactory (OAuth2AuthenticationProviderImpl.java:Authenticate(104)) - User frb authenticated
2016-02-01 11:52:44,868 INFO [pool-5-thread-5]: thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(188)) - Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V6
2016-02-01 11:52:44,871 INFO [pool-5-thread-5]: session.SessionState (SessionState.java:start(358)) - No Tez session required at this point. hive.execution.engine=mr.
2016-02-01 11:52:44,873 INFO [pool-5-thread-5]: session.SessionState (SessionState.java:start(358)) - No Tez session required at this point. hive.execution.engine=mr.
The problem is that, after that, the following error appears recurrently:
2016-02-01 11:52:48,227 ERROR [pool-5-thread-4]: server.TThreadPoolServer (TThreadPoolServer.java:run(215)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:189)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 4 more
2016-02-01 11:53:18,323 ERROR [pool-5-thread-5]: server.TThreadPoolServer (TThreadPoolServer.java:run(215)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:189)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 4 more
Why? I've seen in several other questions that this occurs when using the default value of hive.server2.authentication, i.e. SASL, and the client is not doing the handshake. But in my case the value of that property is CUSTOM. I cannot understand it, and any help would be really appreciated.
EDIT 1
I've found there are periodic requests to HiveServer2... from the HiveServer2 machine itself! These are the requests that result in the Thrift SASL errors:
$ sudo tcpdump -i lo port 10000
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
...
...
10:18:48.183469 IP dev-fiwr-bignode-11.hi.inet.ndmp > dev-fiwr-bignode-11.hi.inet.55758: Flags [.], ack 7, win 512, options [nop,nop,TS val 1034162147 ecr 1034162107], length 0
^C
21 packets captured
42 packets received by filter
0 packets dropped by kernel
[fiware-portal@dev-fiwr-bignode-11 ~]$ sudo netstat -nap | grep 55758
tcp 0 0 10.95.76.91:10000 10.95.76.91:55758 CLOSE_WAIT 7190/java
tcp 0 0 10.95.76.91:55758 10.95.76.91:10000 FIN_WAIT2 -
[fiware-portal@dev-fiwr-bignode-11 ~]$ ps -ef | grep 7190
hive 7190 1 1 10:10 ? 00:00:10 /usr/java/jdk1.7.0_71//bin/java -Xmx1024m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -Xmx4096m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/lib/hive/lib/hive-service-0.13.0.2.1.7.0-784.jar org.apache.hive.service.server.HiveServer2 -hiveconf hive.metastore.uris=" " -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive
1011 14158 12305 0 10:19 pts/1 00:00:00 grep 7190
Any idea?
EDIT 2
More research into the connections sent from HiveServer2 to HiveServer2: the data packets always carry the same 5 bytes (hexadecimal): 22 41 30 30 31
Any idea about these connections?
I finally "fixed" this. Since the message was sent by the Ambari agent running in the HiveServer2 machine (some king of weird ping), I simply added an iptables rule blocking all the connections to TCP/10000 port on the loopback interface:
iptables -A INPUT -i lo -p tcp --dport 10000 -j DROP
Of course, Ambari now warns that HiveServer2 is not alive (the pings are dropped), and the above rule must be removed if I want to restart the server from Ambari (there is another alive check in the startup script); after the restart I can enable the rule again. Well, I can live with that.
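For reference, the same rule can be removed before an Ambari-initiated restart and re-added afterwards (a small convenience sketch; -D deletes the first rule matching the same specification):
# Before restarting HiveServer2 from Ambari: let the alive check through again
iptables -D INPUT -i lo -p tcp --dport 10000 -j DROP
# After the restart: re-enable the block
iptables -A INPUT -i lo -p tcp --dport 10000 -j DROP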