OpenVPN Raspberry Pi login loop - authentication

Suddenly, NordVPN through OpenVPN on my Raspberry Pi isn't working anymore. Now I get the following error:
Sun Sep 13 12:25:14 2020 Outgoing Control Channel Authentication: Using 512 bit message hash 'SHA512' for HMAC authentication
Sun Sep 13 12:25:14 2020 Incoming Control Channel Authentication: Using 512 bit message hash 'SHA512' for HMAC authentication
Sun Sep 13 12:25:14 2020 TCP/UDP: Preserving recently used remote address: [AF_INET]62.112.11.159:443
Sun Sep 13 12:25:14 2020 Socket Buffers: R=[87380->87380] S=[16384->16384]
Sun Sep 13 12:25:14 2020 Attempting to establish TCP connection with [AF_INET]62.112.11.159:443 [nonblock]
Sun Sep 13 12:25:15 2020 TCP connection established with [AF_INET]62.112.11.159:443
Sun Sep 13 12:25:15 2020 TCP_CLIENT link local: (not bound)
Sun Sep 13 12:25:15 2020 TCP_CLIENT link remote: [AF_INET]62.112.11.159:443
Sun Sep 13 12:25:15 2020 Connection reset, restarting [0]
Sun Sep 13 12:25:15 2020 SIGUSR1[soft,connection-reset] received, process restarting
Sun Sep 13 12:25:15 2020 Restart pause, 5 second(s)
No idea what to do. I can't find any server log. I've tried removing and reinstalling, and tried updating. I can connect to the internet; it's just that when I try to connect using an .ovpn file it does this in a loop. I can even enter wrong login information and it won't complain. Can anyone shed some light on this? Thanks.
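One way to get more detail on why the server resets the connection (a sketch; the config path is a placeholder for whichever NordVPN .ovpn file is in use) is to run the client in the foreground with higher verbosity:
# run in the foreground with more verbose logging; the config path is a placeholder
sudo openvpn --config /path/to/your-nordvpn-server.tcp.ovpn --verb 4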

Related

Redis crashing without any log errors

I'm debugging some weird behavior in my Redis instance: it's crashing roughly every two days, but not showing any errors whatsoever, only this in the logs:
1:C 10 Sep 2020 15:44:14.517 # Configuration loaded
1:M 10 Sep 2020 15:44:14.522 * Running mode=standalone, port=6379.
1:M 10 Sep 2020 15:44:14.522 # Server initialized
1:M 10 Sep 2020 15:44:14.524 * Ready to accept connections
1:C 12 Sep 2020 13:20:23.751 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 12 Sep 2020 13:20:23.751 # Redis version=6.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 12 Sep 2020 13:20:23.751 # Configuration loaded
1:M 12 Sep 2020 13:20:23.757 * Running mode=standalone, port=6379.
1:M 12 Sep 2020 13:20:23.757 # Server initialized
1:M 12 Sep 2020 13:20:23.758 * Ready to accept connections
That's all Redis says to me.
I have lots of RAM available, but Redis runs as a single instance in a Docker container. Could a lack of processing power cause this? Should I use multiple nodes? I don't want to set up a cluster just to find out the problem was something else. How can I trace down the actual cause of the problem?
So, in the end, it was exactly what I thought it was not: a memory leak!
I had 16 GB that was slowly being consumed until Redis crashed, with no warning from Redis, the operating system, or Docker. I fixed the app that caused the leak and the problem was gone.
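If you are chasing something similar, a few quick checks can make a silent out-of-memory death visible (a sketch assuming the standard redis-cli and docker CLIs; the container name is a placeholder):
# Redis' own memory accounting; sample it periodically to spot steady growth
redis-cli INFO memory | grep used_memory_human
# did Docker's OOM killer terminate the container on its last exit?
docker inspect --format '{{.State.OOMKilled}}' my-redis-container
# current memory usage of the container as Docker sees it
docker stats --no-stream my-redis-container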

Guacamole fails to connect to xRDP server

I have an xrdp server running and would like to connect to it using Guacamole. However, each time I try to make any RDP connection, it fails with "You Have Been Disconnected." I know the fault lies with Guacamole because I can log into xrdp with the Remmina RDP client using the same credentials.
Here are my Logs:
/var/log/syslog:
Jul 26 10:02:36 ubuntu guacd[1291]: Creating new client for protocol "rdp"
Jul 26 10:02:36 ubuntu guacd[1291]: Connection ID is "$0c72bf59-0ff9-448d-a5a2-dc3229157122"
Jul 26 10:02:36 ubuntu guacd[5737]: Security mode: ANY
Jul 26 10:02:36 ubuntu guacd[5737]: Resize method: none
Jul 26 10:02:36 ubuntu guacd[5737]: User "#cce2ec3d-03c5-4387-be88-054a00927f56" joined connection "$0c72bf59-0ff9-448d-a5a2-dc3229157122" (1 users now present)
Jul 26 10:02:36 ubuntu guacd[5737]: Loading keymap "base"
Jul 26 10:02:36 ubuntu guacd[5737]: Loading keymap "en-us-qwerty"
Jul 26 10:02:36 ubuntu kernel: [ 4736.455320] guacd[5749]: segfault at 8000000000 ip 0000008000000000 sp 00007f3bc9f8bc98 error 14
Jul 26 10:02:36 ubuntu kernel: [ 4736.455323] traps: guacd[5750] general protection ip:7f3bcb074c69 sp:7f3bc978ac98 error:0
Jul 26 10:02:36 ubuntu kernel: [ 4736.455323]
Jul 26 10:02:36 ubuntu kernel: [ 4736.455325] in libguac.so.5.0.0[7f3bcb070000+d000]
Jul 26 10:02:36 ubuntu guacd[1291]: Connection "$0c72bf59-0ff9-448d-a5a2-dc3229157122" removed.
/var/log/tomcat8/Catalina.out :
10:02:33.079 [http-nio-8080-exec-2] WARN o.a.g.r.auth.AuthenticationService - Authentication attempt from 0:0:0:0:0:0:0:1 for user "-------" failed.
10:02:33.943 [http-nio-8080-exec-1] WARN o.a.g.r.auth.AuthenticationService - Authentication attempt from 0:0:0:0:0:0:0:1 for user "jonathan" failed.
10:02:36.100 [http-nio-8080-exec-6] INFO o.a.g.r.auth.AuthenticationService - User "guacadmin" successfully authenticated from 0:0:0:0:0:0:0:1.
10:02:36.241 [http-nio-8080-exec-10] INFO o.a.g.tunnel.TunnelRequestService - User "guacadmin" connected to connection "3".
10:02:38.179 [Thread-7] INFO o.a.g.tunnel.TunnelRequestService - User "guacadmin" disconnected from connection "3". Duration: 1937 milliseconds
Connection settings:
security mode: any
port: 3389
I am on Ubuntu Server 16.04. Any possible solutions would be much appreciated.
Try:
Removing the [path to libfreerdp*.so]/freerdp/guac*.so files that were copied, assuming this is the case.
Creating symbolic links within [path to libfreerdp*.so]/freerdp/ to /usr/local/lib/freerdp/guac*.so, so you do not need to worry about this going forward (a shell sketch of both steps follows below).
Source: RDP stopped working v0.9.9 - Apache Guacamole.
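As a concrete sketch of those two steps (the plugin directory below is an assumption; locate the copied files on your system first, for example with ldconfig -p | grep freerdp):
# remove the stale guac*.so copies that shadow the freshly built ones (path is an assumption)
sudo rm /usr/lib/x86_64-linux-gnu/freerdp/guac*.so
# link the installed plugins into the directory FreeRDP actually loads from
sudo ln -s /usr/local/lib/freerdp/guac*.so /usr/lib/x86_64-linux-gnu/freerdp/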

Aerospike DB always starts in COLD mode

It's stated here that Aerospike should try to start in warm mode, meaning it reuses the same memory region holding the keys (the primary index). Instead, every time the database is restarted, all keys are loaded back from the SSD drive, which can take tens of minutes if not hours. What I see in the log is the following:
Oct 12 2015 03:24:11 GMT: INFO (config): (cfg.c::3234) Node id bb9e10daab0c902
Oct 12 2015 03:24:11 GMT: INFO (namespace): (namespace_cold.c::101) ns organic **beginning COLD start**
Oct 12 2015 03:24:11 GMT: INFO (drv_ssd): (drv_ssd.c::3607) opened device /dev/xvdb: usable size 322122547200, io-min-size 512
Oct 12 2015 03:24:11 GMT: INFO (drv_ssd): (drv_ssd.c::3681) shadow device /dev/xvdc is compatible with main device
Oct 12 2015 03:24:11 GMT: INFO (drv_ssd): (drv_ssd.c::1107) /dev/xvdb has 307200 wblocks of size 1048576
Oct 12 2015 03:24:11 GMT: INFO (drv_ssd): (drv_ssd.c::3141) device /dev/xvdb: reading device to load index
Oct 12 2015 03:24:11 GMT: INFO (drv_ssd): (drv_ssd.c::3146) In TID 104520: Using arena #150 for loading data for namespace "organic"
Oct 12 2015 03:24:13 GMT: INFO (drv_ssd): (drv_ssd.c::3942) {organic} loaded 962647 records, 0 subrecords, /dev/xvdb 0%
What could be the reason that Aerospike fails to perform fast restart?
Thanks!
You are using the Community Edition of the software. Warm start is not supported in it; it is available only in the Enterprise Edition.
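Two quick ways to confirm which edition a node is running (a sketch; the log path is an assumption and the banner wording may vary by version):
# the server binary reports its edition and build
asd --version
# the startup banner in the log typically names the edition as well
grep -i "edition" /var/log/aerospike/aerospike.log | head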

Request size problem in Apache + WebLogic with mod_wl.so

We're using Apache (2.0, with SSL) to proxy requests to a web service installed on WebLogic. We have mod_wl.so installed, and all works fine with small requests.
However, with larger requests (say, 300 KB), Apache stalls and displays this error message:
"Failure of server APACHE bridge: No Backend Sever available for connections": timed out after 20 seconds or idempotent is set to off."
We replicated the scenario on another server, and the error persists (instead of "20 seconds", it says "30 seconds" for the same request).
How can I avoid this size limitation? Is it a bug in mod_wl.so? Is it a missing config value? (As a side note, the web service works fine when tested directly from the WebLogic console, no matter how big the file.)
Thanks for any help!
UPDATE:
I changed to mod_wl_20.so with the same results; here is the relevant chunk of the log:
Wed Sep 28 11:27:37 2011 <15359131722005124> Hdrs to WLS:[Content-Length]=[352196]
Wed Sep 28 11:27:37 2011 <15359131722005124> Hdrs to WLS:[Connection]=[Keep-Alive]
Wed Sep 28 11:27:37 2011 <15359131722005124> Hdrs to WLS:[WL-Proxy-SSL]=[true]
Wed Sep 28 11:27:37 2011 <15359131722005124> Hdrs to WLS:[WL-Proxy-Client-IP]=[163.247.57.10]
Wed Sep 28 11:27:37 2011 <15359131722005124> Hdrs to WLS:[Proxy-Client-IP]=[163.247.57.10]
Wed Sep 28 11:27:37 2011 <15359131722005124> Hdrs to WLS:[X-Forwarded-For]=[163.247.57.10]
Wed Sep 28 11:27:37 2011 <15359131722005124> Hdrs to WLS:[X-WebLogic-KeepAliveSecs]=[30]
Wed Sep 28 11:27:37 2011 <15359131722005124> Hdrs to WLS:[X-WebLogic-Request-ClusterInfo]=[true]
Wed Sep 28 11:27:37 2011 <15359131722005124> Hdrs to WLS:[x-weblogic-cluster-hash]=[2Ik836PQKnD7XHQ2RcWGOWkcRRA]
Wed Sep 28 11:27:37 2011 <15359131722005124> operation WRITE failed on fd 23: revents=0x00000018
Wed Sep 28 11:27:37 2011 <15359131722005124> IO TImed out error
Wed Sep 28 11:27:37 2011 <15359131722005124> POST timed out to the server 10.182.5.5:7005
Wed Sep 28 11:27:37 2011 <15359131722005124> ***Exception type [WRITE_ERROR_TO_SERVER] (POST timed out to the server 10.182.5.5:7005) raised at line 152 of ap_proxy.cpp
Wed Sep 28 11:27:37 2011 <15359131722005124> error sending headers or Post Data to WebLogic, sys err#: [0] sys errmsg [Success]
Wed Sep 28 11:27:37 2011 <15359131722005124> Marking 10.182.5.5:7005 as bad
Wed Sep 28 11:27:37 2011 <15359131722005124> got exception in sendRequest phase: WRITE_ERROR_TO_SERVER [os error=0, line 152 of ap_proxy.cpp]: POST timed out to the server 10.182.5.5:7005 at line 2994
Wed Sep 28 11:27:37 2011 <15359131722005124> Failing over after WRITE_ERROR_TO_SERVER exception in sendRequest()
Wed Sep 28 11:27:37 2011 <15359131722005124> attempt #1 out of a max of 10
Wed Sep 28 11:27:37 2011 <15359131722005124> No good servers left in the general list, reverting back to the static list
Wed Sep 28 11:27:37 2011 <15359131722005124> Host extracted from serverlist is [10.182.5.5]
Wed Sep 28 11:27:37 2011 <15359131722005124> Host extracted from serverlist is [10.182.5.5]
Wed Sep 28 11:27:37 2011 <15359131722005124> Initializing lastIndex=0 for a list of length=2
Post timed out to 10.182.5.5:7005
This is the WebLogic server that Apache is trying to post to.
You have confirmed this works when posted directly to the same WebLogic server.
The 20 seconds matches the default KeepAliveSecs, which you can try to increase.
Have you set a value in the plugin for WLIOTimeoutSecs?
It defaults to 300 and defines the amount of time, in seconds, that the plug-in waits for a response to a request from WebLogic Server.
But from your log it does not look like Apache is waiting for 300 seconds before failing.
Similarly, MaxPostSize defaults to -1; just check that you have not set some low value for it.
Check out the other plugin parameters on this list
http://download.oracle.com/docs/cd/E12840_01/wls/docs103/plugins/plugin_params.html#wp1143055
You might also want to tinker with the FileCaching element for POST requests.
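For reference, here is a rough sketch of where those parameters sit in the Apache-side plugin configuration. The location path, host, port and values are illustrative placeholders drawn from the question, not recommended settings:
<Location /myService>
SetHandler weblogic-handler
WebLogicHost 10.182.5.5
WebLogicPort 7005
# time the plug-in waits for a WebLogic response; defaults to 300
WLIOTimeoutSecs 300
KeepAliveEnabled ON
# defaults to 20; raise it if idle pooled connections are being dropped
KeepAliveSecs 60
# -1 (the default) means unlimited POST size; make sure it is not set lower
MaxPostSize -1
# controls whether POST data is spooled to a temporary file first
FileCaching ON
</Location>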

Problems with weblogic 9.2 load balancing and clustering using proxy plugin

I have a cluster in WebLogic 9.2 with 2 nodes (172.20.1.68:7101, 172.20.1.23:7102), 1 admin server (172.20.1.23:7001) and 1 balancer (Apache proxy plugin) at 172.20.1.49:7103.
What I see in the balancer's access.log is that every request is marked as 404 Not Found. But in the nodes' logs I can see those very same requests, distributed and marked as GET with code 200.
The problem is that my application is not working.
Any idea would be appreciated.
Thanks very much!
Edit:
Here is my relevant httpd.conf. I have no <...> section; instead I have this:
<VirtualHost *:80>
ServerName fake.server.name
DocumentRoot "/usr/local/apache_ssl/htdocs"
<Location /myApp/>
SetHandler weblogic-handler
WebLogicCluster 172.20.1.23:7102,172.20.1.68:7101
Debug All
DebugConfigInfo ALL
WLLogFile logs/p.log
KeepAliveEnabled ON
KeepAliveSecs 15
</Location>
<Location /psoc-app>
SetHandler weblogic-handler
WebLogicCluster 172.20.1.23:7102,172.20.1.68:7101
KeepAliveEnabled ON
KeepAliveSecs 15
</Location>
WLLogFile logs/p.log
ErrorLog logs/_log_error
CustomLog logs/_log common
</VirtualHost>
/tmp/wlproxy.log request:
================New Request: [GET /myApp/path HTTP/1.1] =================
Thu Jul 29 14:30:00 2010 <1382912804066002> INFO: SSL is not configured
Thu Jul 29 14:30:00 2010 <1382912804066002> Using Uri /myApp/path
Thu Jul 29 14:30:00 2010 <1382912804066002> After trimming path: '/myApp/path'
Thu Jul 29 14:30:00 2010 <1382912804066002> The final request string is '/myApp/path'
Thu Jul 29 14:30:00 2010 <1382912804066002> SEARCHING id=[172.20.1.23:7102,172.20.1.68:7101] from current ID=[172.20.1.23:7102,172.20.1.68:7101]
Thu Jul 29 14:30:00 2010 <1382912804066002> The two ids matched
Thu Jul 29 14:30:00 2010 <1382912804066002> ###FOUND...id=[172.20.1.23:7102,172.20.1.68:7101], server_name=[172.20.1.49], server_port=[80]
Thu Jul 29 14:30:00 2010 <1382912804066002> attempt #0 out of a max of 5
Thu Jul 29 14:30:00 2010 <1382912804066002> Trying a pooled connection for '172.20.1.68/7101/7106'
Thu Jul 29 14:30:00 2010 <1382912804066002> getPooledConn: No more connections in the pool for Host[172.20.1.68] Port[7101] SecurePort[7106]
Thu Jul 29 14:30:00 2010 <1382912804066002> general list: trying connect to '172.20.1.68'/7101/7106 at line 2619 for '/myApp/path'
Thu Jul 29 14:30:00 2010 <1382912804066002> INFO: New NON-SSL URL
Thu Jul 29 14:30:00 2010 <1382912804066002> Connect returns -1, and error no set to 115, msg 'Operation now in progress'
Thu Jul 29 14:30:00 2010 <1382912804066002> EINPROGRESS in connect() - selecting
Thu Jul 29 14:30:00 2010 <1382912804066002> Local Port of the socket is 38958
Thu Jul 29 14:30:00 2010 <1382912804066002> Remote Host 172.20.1.68 Remote Port 7101
Thu Jul 29 14:30:00 2010 <1382912804066002> general list: created a new connection to '172.20.1.68'/7101 for '/myApp/path', Local port:38958
Thu Jul 29 14:30:00 2010 <1382912804066002> URL::parseHeaders: CompleteStatusLine set to [HTTP/1.1 404 Not Found]
Thu Jul 29 14:30:00 2010 <1382912804066002> URL::parseHeaders: StatusLine set to [404 Not Found]
Thu Jul 29 14:30:00 2010 <1382912804066002> parsed all headers OK
Thu Jul 29 14:30:00 2010 <1382912804066002> sendResponse() : r->status = '404'
Thu Jul 29 14:30:00 2010 <1382912804066002> canRecycle: conn=1 status=404 isKA=0 clen=1214 isCTE=0
Thu Jul 29 14:30:00 2010 <1382912804066002> closeConn: URL.canRecycle() returns false, deleting URL '172.20.1.68/7101'
Thu Jul 29 14:30:00 2010 <1382912804066002> request [/myApp/path] processed sucessfully..................
Sorry, I just can't get the formatter to work.
From the comments so far, there are three things to note and try:
A) Where in the Apache conf is /tmp/wlproxy.log configured? This raises the question of whether we are looking at the right conf file, or whether there is perhaps another instance of Apache running.
Run this command (if on Unix) to identify the PID of the process writing to the log:
/usr/sbin/fuser /tmp/wlproxy.log
This will return the PID of an Apache process. Is that the Apache you are running?
You could also try running fuser with your Apache shut down and see whether it still shows a PID owning the file.
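If fuser is not available, lsof (assuming it is installed) answers the same question and also shows the command name:
# list every process currently holding the log file open, with PID and command
lsof /tmp/wlproxy.log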
B) The plugin debug log shows the apache request goes to the 2nd server in the cluster and gets a 404.
Thu Jul 29 14:30:00 2010 <1382912804066002> general list: created a new connection to '172.20.1.68'/7101 for '/myApp/path', Local port:38958
Thu Jul 29 14:30:00 2010 <1382912804066002> URL::parseHeaders: CompleteStatusLine set to [HTTP/1.1 404 Not Found]
Thu Jul 29 14:30:00 2010 <1382912804066002> URL::parseHeaders: StatusLine set to [404 Not Found]
Is the application definitely deployed and available at 172.20.1.68:7101/myApp/path as well?
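A quick way to check that directly (a sketch, assuming curl is available; host, port and path are taken from the question) is to bypass Apache and hit the managed server itself:
# request the same path straight from the node, skipping the proxy plugin
curl -v http://172.20.1.68:7101/myApp/path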
C) What happens when you make a request for the /psoc-app shown in the conf?