Good day,
I have SLES 10 with syslog-ng (syslog-ng-1.6.8-20.23.1)
and I cannot work out the proper configuration to send the file /var/log/audit/audit.log to the remote syslog server.
I used tcpdump and I can see some details in the packets that are sent to the remote server, but I am not seeing anything in the audit format in the TCP packets.
filter f_audit { facility(13); };
filter f_audit2 {facility(security);};
destination d_local_facility {
file("/var/log/$FACILITY/$FACILITY.log");
destination d_remote_loghost { tcp("$hostname" port(514)); };
log {
source(s_local);
destination(d_remote_loghost5);
};
What am I doing wrong?
You have to configure a file source in syslog-ng that reads the /var/log/audit/audit.log file, and include this source in a log statement. I can't see that in your config file.
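For example, on 1.6.x something along these lines should do it (a sketch only: loghost.example.com is a placeholder for your remote server, and you can reuse your existing destination instead of defining a new one):
source s_audit { file("/var/log/audit/audit.log"); };
destination d_remote_loghost { tcp("loghost.example.com" port(514)); };
log { source(s_audit); destination(d_remote_loghost); };
Note that auditd writes raw, non-syslog-formatted lines, so they will arrive at the remote side as plain message text.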
BTW, syslog-ng version 1.6 is ancient beyond words. syslog-ng 3.7 can parse auditd logs to extract information, so you might want to upgrade. You can find some SLES packages for syslog-ng at https://syslog-ng.org/3rd-party-binaries/
In SLES 10, I finally used a script at boot. The file after.local is the equivalent of rc.local.
So, in /etc/init.d/after.local:
nohup /usr/bin/tailf /var/log/audit/audit.log | /bin/logger -t audispd -p local6.info &
It's easier in SLES 11 because the audispd dispatcher exists.
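If you go the logger route, the syslog-ng side only needs a facility filter; a sketch, reusing the s_local source and d_remote_loghost destination from the config above (the filter name is arbitrary):
filter f_audit_relay { facility(local6); };
log { source(s_local); filter(f_audit_relay); destination(d_remote_loghost); };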
I was wondering if there's a way to send files using SFTP to a remote machine through a jump server.
As you can see in the image below, first an SSH connection is needed, and after that an SFTP connection.
My main problem comes after the SSH connection: my working environment has changed and I cannot retrieve the files I need to run the SFTP transfer successfully.
I've tried the following code:
ssh jump-server-user@ip-jump-server 'echo "put /source/files /remote/files" | sftp -v remote-machine-user@ip-remote-machine'
But it does not work.
I've tried executing a simple command like pwd over the SFTP connection and it works, so I think the problem is how the working environment changes.
There is probably an easier solution, but I cannot use SSH on the jump-server-to-remote-machine connection, and I cannot store the local files on the jump server to send them to the remote machine later.
If you have a recent OpenSSH (at least 8.0) locally, you can use the -J (jump) switch:
sftp -J jump-server-user@ip-jump-server remote-machine-user@ip-remote-machine
With an older version (but at least 7.3), you can use the ProxyJump directive:
sftp -o ProxyJump=jump-server-user@ip-jump-server remote-machine-user@ip-remote-machine
There are other options like ProxyCommand or port forwarding, which you can use on even older versions of OpenSSH. These are covered in Does OpenSSH support multihop login?
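For completeness, a ProxyCommand variant that should work on OpenSSH 5.4 and later (same placeholder hosts as above):
sftp -o 'ProxyCommand=ssh -W %h:%p jump-server-user@ip-jump-server' remote-machine-user@ip-remote-machine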
After running fine for a while, I am getting a write error on my redis instance:
(error) MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
In the log I see:
9948:C 22 Mar 20:49:32.241 # Failed opening the RDB file root (in server root dir /var/spool/cron) for saving: Read-only file system
However, my redis config file is /etc/redis/redis.conf as confirmed by:
redis-cli -p 6379 info | grep 'config_file'
config_file:/etc/redis/redis.conf
And there I have:
dir /mnt/data/redis
And indeed, there is a snapshot there.
But despite the above, redis now thinks my data directory is:
redis-cli -p 6379 CONFIG GET dir
1) "dir"
2) "/var/spool/cron"
This corresponds to the error I was getting, as quoted above.
Can anyone tell me why/how my data directory is changing after redis starts, such that it is no longer what is specified in the config file?
So the answer is that the redis server was hacked and the configuration changed, which is very easy to do as it turns out. (I should point out that I had no reason to think it wasn't easy to do. I just assumed security by obscurity was sufficient in this case--wrong. No matter, this was just a playground not any sort of production server).
So don't open your redis port to the world. Use security groups if on AWS to limit access to machines that need it, or use AUTH (which is still not awesome because then all clients need to know the single password which also apparently gets sent in the clear), or have some middleware controlling access.
Hacking Redis is easy to do, can compromise your data, and can even enable unauthorized SSH access to your server. And that's why you shouldn't leave it exposed.
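For what it's worth, the usual attack abuses CONFIG SET to point dir/dbfilename at a sensitive location and then dumps a crafted RDB file there, so a minimal redis.conf hardening sketch could look like this (standard directives; the password is a placeholder you must replace):
# listen on localhost only
bind 127.0.0.1
# require clients to AUTH (placeholder password, replace it)
requirepass replace-with-a-long-random-secret
# disable CONFIG so a remote "CONFIG SET dir ..." no longer works
rename-command CONFIG ""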
So I just installed the latest version of rabbitmq and I've been trying to get it to work. The server is running and I've restarted it once just to be sure it's a consistent problem.
If I telnet localhost 5672, I get
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Connection closed by foreign host.
As you can see, the connection is accepted but rabbitmq does not accept any input. The connection is closed immediately. No further information shows up in logs.
rabbitmqctl works without any problems.
This is running on Windows Subsystem for Linux / Ubuntu. I don't have any other options for a local dev environment because I'm on a work computer which is locked down pretty tightly.
I ran into the same issue, using Ubuntu (16.04) as a subsystem on Windows and RabbitMQ 3.7.8. I noticed that when running sudo rabbitmqctl status the listeners showed the following:
{listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]}
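To double-check from the OS side which address the broker is actually bound to, something like this works (ss ships with iproute2 on Ubuntu):
sudo ss -plnt | grep 5672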
I fixed this issue by creating a rabbitmq config file and specifying the localhost and port 5672.
Here is what I did, step by step.
Using sudo && vim, I created a 'rabbitmq.conf' file, located in
/etc/rabbitmq/
sudo vim /etc/rabbitmq/rabbitmq.conf
I specified localhost (127.0.0.1) and port 5672 for the default TCP listener in the rabbitmq.conf file:
listeners.tcp.default = 127.0.0.1:5672
Restart rabbitmq
sudo service rabbitmq-server stop
then
sudo service rabbitmq-server start
Check sudo rabbitmqctl status and look at the listeners; you should see your new TCP listener with the localhost IP specified:
{listeners,[{clustering,25672,"::"},{amqp,5672,"127.0.0.1"}]}
Here are the config docs from RabbitMQ that may help clarify some of these steps.
Telnet lets you confirm the system is listening and accepting incoming connections.
But even an "out of the box" install of RabbitMQ expects credentials for connections.
Run rabbitmqctl list_users to see which users are configured.
If guest is present, the typical credentials are guest / guest.
Either install the management plugin (or confirm it is installed), or script your test; most languages have a package available for connecting to RabbitMQ.
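One quick way to test credentials from the shell, assuming the management plugin listens on its default port 15672:
sudo rabbitmq-plugins enable rabbitmq_management
curl -u guest:guest http://127.0.0.1:15672/api/whoami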
I have trouble with NFS client file caching. The client reads a file that was removed from the server many minutes earlier.
My two servers are both CentOS 6.5 (kernel: 2.6.32-431.el6.x86_64)
I'm using server A as the NFS server; /etc/exports contains:
/path/folder 192.168.1.20(rw,no_root_squash,no_all_squash,sync)
And server B is used as the client; the mount options are:
nfsstat -m
/mnt/folder from 192.168.1.7:/path/folder
Flags: rw,sync,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,acregmin=0,acregmax=0,acdirmin=0,acdirmax=0,hard,noac,nosharecache,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.20,minorversion=0,lookupcache=none,local_lock=none,addr=192.168.1.7
As you can see, "lookupcache=none,noac" options are already used to disable the caching, but seems doesn't work...
I did the following steps:
Create a simple text file on server A
Print the file from server B with "cat": it's there
Remove the file from server A
Wait a couple of minutes and print the file from server B again: it's still there!
But if I do "ls" from the server B at that time, the file is not in the output. The inconsistent state may last a few minutes.
I think I've checked all the NFS mount options... but I can't find the solution.
Are there any other options I missed? Or maybe the issue is not about NFS at all?
Any ideas would be appreciated :)
I have tested the same steps you gave with the parameters below, and it works perfectly. I added one more parameter, "fg", to the client-side mount.
sudo mount -t nfs -o fg,noac,lookupcache=none XXX.XX.XX.XX:/var/shared/ /mnt/nfs/fuse-shared/
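If you want the same options to persist across reboots, the equivalent /etc/fstab entry would be something like this (a sketch, using the server address and paths from the question):
192.168.1.7:/path/folder  /mnt/folder  nfs  fg,noac,lookupcache=none  0  0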
I am trying to configure Loggly for Apache on my Ubuntu machine.
What I have done is
curl -O https://www.loggly.com/install/configure-apache.sh
sudo bash configure-apache.sh -a XXXXXX -u XXXXXX
After entering the last line, it says:
ERROR: Apache logs did not make to Loggly in time. Please check network and firewall settings and retry.
Manual instructions to configure Apache2 is available at https://www.loggly.com/docs/sending-apache-logs/. Rsyslog troubleshooting instructions are available at https://www.loggly.com/docs/troubleshooting-rsyslog/
Any idea why it's showing this and how to solve it?
This is likely a network issue, a delay in sending the logs, or even an issue with the script. Check out the manual instructions at https://www.loggly.com/docs/sending-apache-logs/, which you can follow to verify that the script created the configuration files correctly.
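A few checks you can run to see what the script set up and whether the logs can leave the machine (the endpoint and port below are the usual Loggly defaults; confirm them against your account's setup page):
ls /etc/rsyslog.d/                # look for the files the Loggly script created
sudo rsyslogd -N1                 # syntax-check the rsyslog configuration
nc -vz logs-01.loggly.com 514     # assumed default Loggly syslog endpoint and port
sudo service rsyslog restart      # reload rsyslog after any changes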