I installed Redis server on a cloud machine (Ubuntu 18.04) that has an SSD.
In the configuration file (redis.conf), I changed dir to /temp and dbfilename to dump.rdb.
I restarted the server and checked the runtime settings with CONFIG GET.
It showed the values I had set in redis.conf.
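For reference, this is how I checked (a minimal sketch; I'm assuming redis-cli here, but any client that can issue CONFIG GET works):

    redis-cli CONFIG GET dir          # expected: /temp
    redis-cli CONFIG GET dbfilename   # expected: dump.rdb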
After six hours, I checked again. The strange thing is that these values had changed to dir=/var/spool/cron and dbfilename=root.
I am sure nobody attacked my server; it is inside our own VPN and not publicly accessible.
I then ran one more test: I started a Docker container (Ubuntu 18.04) on that same cloud instance and repeated the test inside the container. There, the runtime configuration did not change, even after a couple of hours.
I also suspect the disk type matters: if the cloud machine is built with a magnetic HDD, Redis seems to work fine, but if I build it with an SSD, Redis stops working correctly after a couple of hours.
Can anybody help in this regard?
Thanks
I had a similar situation on my Redis server.
If your Redis server is accessible from the public network, it might be an attack.
In my case, I changed the default port of the Redis server and added password protection.
After that, the same situation did not happen again.
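Roughly, the hardening looked like the redis.conf lines below (a sketch, not my exact values; the port and password are placeholders):

    bind 127.0.0.1             # listen only on localhost (or your VPN interface)
    port 6380                  # move off the default 6379
    requirepass use-a-long-random-password
    rename-command CONFIG ""   # optional: disable CONFIG so dir/dbfilename cannot be changed at runtime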
Check the issue below on the Redis GitHub; you can find more information about your case there.
https://github.com/redis/redis/issues/3594
Related
I recently deployed an AWS EC2 Ubuntu instance for a new intranet wiki. I installed Java, Tomcat, MySQL, and XWiki specifically for this.
I closed the SSH connection with PuTTY while I was setting up XWiki and branding it appropriately, but when I went to access it again, all I got was timeouts. The SSH inbound rules are set to accept connections from all sources, so I am almost certain it is not a network error, but I can't figure out what it is!
This has happened twice now. Does anyone know of XWiki disallowing the ubuntu@ip.add.re.ss login with public key authentication?
XWiki certainly does not do anything like this, nor could it even if it wanted to, provided you installed Tomcat properly (it's usually not supposed to run as root).
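If you want to verify that, a quick check (a sketch; the process match pattern is an assumption and may differ on your box):

    ps aux | grep -i '[t]omcat'   # first column is the user Tomcat runs as; it should not be root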
While trying to solve another issue with connection problems to our servers, I thought I would fix it by adding MaxConnections and MaxStartups settings to my sshd_config.
When I restarted SSH, everything seemed fine, but this morning I found out that our Jenkins server could not connect to any of the dev servers. I then tried logging in myself, only to find that I cannot log in to any of our dev servers anymore.
It looks like I messed up the sshd_config and locked everyone out of all the dev servers.
When trying to log in, I get a "port 22: Connection refused" error.
Is there any other way to get into the systems without having to attach every disk to another server to adjust the sshd_config?
There are several options available for recovery in this situation:
Use the interactive serial console. This requires advance preparation.
Add a startup script to fix the file, and then reboot to trigger the script (as sketched below).
Shut down the instance, attach the disk to a recovery instance, use the recovery instance to mount the disk and fix the file, then detach the disk and recreate the instance using the fixed disk. (You can also do this from a snapshot for added safety.)
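For the startup-script option, something like this (a sketch, assuming these are Google Compute Engine instances, which the options above are specific to; INSTANCE_NAME and the offending option name are placeholders):

    gcloud compute instances add-metadata INSTANCE_NAME --metadata startup-script='#! /bin/bash
    sed -i "/^MaxConnections/d" /etc/ssh/sshd_config   # remove the bad line
    systemctl restart ssh'
    gcloud compute instances reset INSTANCE_NAME       # reboot so the script runs at boot

Once you are back in, running sshd -t validates the config before you restart the daemon again.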
I have a Google Compute Engine VM running Ubuntu and managed with Laravel Forge.
I seem to get blocked by the VM after connecting over SSH a few times (2-4), even when I log in correctly. Restarting the VM unblocks me.
I first noticed the issue while having trouble logging in over SSH: after a few attempts the VM would become unreachable, and my website hosted on it also wouldn't resolve. After restarting the VM, I could try to log in again and my website worked. This happened a couple of times before I figured out how to log in with SSH correctly.
Next, I tried logging in to the database with HeidiSQL, which uses plink. I log in fine, but it seems to reconnect via SSH every time I do something, and after 2-4 of these reconnects I get the same problem: the VM becomes unreachable over SSH and my website goes down.
Using SQLyog, which seems to maintain a single SSH connection rather than constantly reconnecting like HeidiSQL, I have no problems.
When my website is down, I use those "down for everyone or just me" websites to check, and apparently it's only down for me, so I must be getting blocked.
So I guess my questions are:
1. Is this normal?
2. Can I unblock myself without restarting the VM?
3. Can I make blocking occur in a less strict way?
4. Why does HeidiSQL keep reconnecting via SSH rather than maintaining the one connection like SQLyog seems to?
You have encountered sshguard, which is enabled by default on the GCE Ubuntu images (at least on the 14.10 image, where I encountered it myself). There is a whitelist file at /etc/sshguard/whitelist.
The sshguard default configuration on my VM has a "dangerousness" threshold of 40. Most "attacks" that sshguard detects incur dangerousness of 10, so getting blocked after 4 reconnects sounds about right.
The attack signatures are listed here: http://www.sshguard.net/docs/reference/attack-signatures/
I would bet that you are connecting from an IP that has an invalid reverse DNS configuration (I was). Four connects like that and the default config blocks you for 20 minutes.
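To unblock yourself without restarting the VM (your question 2), something like the following should work; this is a sketch assuming sshguard's default iptables backend, where it maintains a chain named sshguard, and the whitelist IP is an example address:

    sudo iptables -L sshguard -n --line-numbers              # list currently blocked addresses
    sudo iptables -F sshguard                                # flush the chain, lifting all blocks
    echo "203.0.113.5" | sudo tee -a /etc/sshguard/whitelist # add your own IP to the whitelist
    sudo service sshguard restart                            # reload so the whitelist takes effect

Whitelisting your own IP also addresses question 3, since sshguard will then ignore "attacks" from that address entirely.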
I am on Windows, using Cygwin.
I have an Amazon Ubuntu instance that I can log in to just fine from my system using:
    ssh -i keyfile username@AmazonHost
However, when I SSH to a CentOS server at my office and try to SSH to the Amazon instance from there with the same command, I always get a public key error. I copied my key file over and set its permissions with chmod 400, just like on my Cygwin client. I also verified from the CentOS box that I can reach the Amazon instance on port 22 (telnet AmazonHost 22).
Is there some other configuration on the CentOS box or the office firewall that is needed to allow me to connect to Amazon?
If you get a public key error on one machine and not the other, then the two secret keys are different, even though you think they are the same (unless one machine's SSH client is totally broken).
The file might have been corrupted in transit; since one of the machines is Windows (though with Cygwin) and the other is Linux, my guess is that something went wrong with line endings when the key was copied from one to the other. Keys are usually encoded as text (that's how Amazon's console provides them) and are fairly tolerant of line-ending changes, but this is still a possible cause.
How did you transfer the file from one machine to the other? If you can, try the transfer once in binary mode and once in text mode, and see if either works. Also, just look at the files on each machine in a text editor: do they look the same?
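A quick way to test both theories (a sketch; keyfile stands in for your actual file name, and dos2unix may need to be installed):

    ssh-keygen -lf keyfile   # print the key fingerprint; it must match on both machines
    md5sum keyfile           # or compare checksums of the two files directly
    file keyfile             # reports "with CRLF line terminators" if Windows endings crept in
    dos2unix keyfile         # strip CRLF endings (or: sed -i 's/\r$//' keyfile)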
We are running WebLogic 7 SP6. We have a working single-node cluster with an Admin server and two Managed servers. We are re-creating a second standalone cluster on a second server. We reinstalled WebLogic and copied over all the configuration files to make things the same on both clusters, changing all the references to IPs and hostnames. We have used this method before without problems.
In the current case, I can start the Admin server, which listens on ports 7001 and 7002. But when I try to start either of the Managed servers, it tells me that myserver1/2 is already up. I confirmed that myserver is configured to use ports 7012 and 7013, and I cannot find any port conflict, especially because these exact ports worked on the first cluster. I have logged in to the admin console and can see that the ports are all unique. Any ideas what else to look at? Thanks
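One way to double-check for a genuine conflict on the second machine (a sketch; the port list matches the ones above):

    netstat -an | grep LISTEN | grep -E ':(7001|7002|7012|7013)'

If nothing else is listening on those ports, the "already up" message could instead point to a stale lock file or a leftover server process from an earlier start attempt.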
The current version of WebLogic is 10.3. I'd strongly urge you to upgrade your WebLogic as soon as possible, especially if you're still using the version of the JDK it was certified for. If you're running JDK 1.4, you're crazy.