MapR volume audit

I have a volume which is part of another volume. For example, I have directory '/' mounted as volume 'rootVol' and directory '/test' (which is part of '/') as volume 'testVol'. I have enabled auditing for both volumes and the underlying directories and files. The problem is that when I try to create a folder inside '/test', it is audited as a 'LOOKUP' action with status 2 instead of 'MKDIR'. The folder does get created, though.
Is there some reason for this, or any suggestions on how to correct it?

You need to check the audit logs generated on all the nodes. Since the operation is audited on the node where the volume's master container resides, you have to check the audit logs on the other nodes as well to find the MKDIR operation.
The reason a LOOKUP is audited is that, before an MKDIR can be performed, a lookup is done for the new directory name. If the file/directory does not exist, status 2 (ENOENT) is returned, and only then is the directory actually created.
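As a rough illustration, a search like the following can confirm that the MKDIR record landed on one of the other nodes. This is a hedged sketch: the node names and the audit log location (commonly under /var/mapr/local/<hostname>/audit/) are assumptions and may differ on your cluster.
# Sketch: grep every node's filesystem audit logs for MKDIR records.
# Node names and log paths are assumptions; adjust for your cluster.
for node in node1 node2 node3; do
  ssh "$node" "grep MKDIR /var/mapr/local/*/audit/FSAudit.log* 2>/dev/null"
done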

Related

Removing files from GCS: "gsutil -m rm" throws CommandException: files/objects could not be removed

gsutil -m rm gs://{our_bucket}/{dir}/{subdir}/*
...
Removing gs://our_bucket/dir/subdir/staging-000000000102.json...
Removing gs://our_bucket/dir/subdir/staging-000000000101.json...
CommandException: 103 files/objects could not be removed.
The command finds the directory with the 103 .json files and "tries" removing them, per the Removing gs://... lines in the output. Why might we be receiving "CommandException: 103 files/objects could not be removed."?
This works on my local machine
This works in our docker container run locally
This does not work in our docker container on the GCP compute engine where we need it to be working.
Perhaps this is a permissions issue, with the compute engine instance not having permission to remove files from our GCS bucket?
Edit: We have a service account JSON in the /config folder of our Airflow project, and that service account is shared with an IAM user that has the Storage Admin role. Perhaps having the JSON in the /config folder is not sufficient for granting permissions to the entire GCP compute engine instance? I am particularly confused because this server is able to query our BQ database and write to GCS, but cannot delete from GCS...
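One way to narrow this down is to check which credentials gsutil is actually using inside the container on the VM. A hedged sketch; the key file name is an assumption based on the /config folder mentioned above, and the object name is taken from the output above:
gcloud auth activate-service-account --key-file=/config/service-account.json
gcloud auth list    # confirm which account is active
gsutil rm gs://our_bucket/dir/subdir/staging-000000000101.json    # retry a single delete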
The solution in this link - https://gist.github.com/ryderdamen/926518ddddd46dd4c8c2e4ef5167243d - was exactly what we needed:
Stop the instance
Edit the settings
Remove gsutil cache
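In command form, the steps look roughly like this. It is a sketch of the gist's approach, not a verbatim copy: the instance name and zone are placeholders, and ~/.gsutil is the default location of gsutil's cached state.
gcloud compute instances stop airflow-vm --zone us-central1-a     # 1. stop the instance
# 2. edit the instance's access scopes / attached service account in the console, then:
gcloud compute instances start airflow-vm --zone us-central1-a
rm -rf ~/.gsutil     # 3. on the VM, remove gsutil's cached state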

Log yum update checks even when there are no packages available for update?

I need to ingest events for nightly yum update checks (using yum-cron) into a SIEM. Unfortunately yum only logs events to yum.log when action is taken, for example updates or installations. There is no event logged when you check for updates and there are none available. Auditors have also specified that ingesting events proving yum-cron ran is not enough so I can't just import the events from the cron log.
I could run a script that runs yum check-update and redirects the output to a file, then have rsyslog ingest lines from that file, but that is messy and not ideal. I also want it to be as easy to configure as possible, since it will have to be scripted so new instances can be set up quickly.
It is also a special distribution from a vendor and the logger command does not work with rsyslog on the distribution.
Is there an easy way to track, via a log, the fact that yum did run and that no packages were found to update, indicating that all packages are up to date?
Another forum got me started down the path to a solution and this was what I ended up doing to resolve the issue:
yum-cron supports email notifications; unfortunately, the SIEM we are using does not ingest events via email. However, looking through the yum-cron scripts, I found that they redirect output to a temporary file, which is then used to email the notifications. I ended up editing the /etc/cron.daily/0yum.cron script to redirect output to /var/log/yum.log instead by changing:
} >> $YUMTMP 2>&1
to:
} >> /var/log/yum.log 2>&1
I then used the imfile module of rsyslog to ingest yum.log and forward it to the SIEM.
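For reference, the forwarding side can look something like the following /etc/rsyslog.d/yum-cron.conf. This is a hedged sketch using the imfile module in RainerScript syntax (rsyslog 7+); the tag, facility, and SIEM address are assumptions:
module(load="imfile")
input(type="imfile"
      File="/var/log/yum.log"
      Tag="yum-cron:"
      Facility="local6"
      Severity="info")
local6.* @@siem.example.com:514    # forward over TCP to the SIEM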

How to delete remote file using Kettle Pentaho

I have a directory on a remote Linux machine where files are archived and kept for a certain period of time. I want to delete a file from the remote (Linux) machine using a Kettle transformation, based on some condition.
If the file does not exist, the job should not throw any error, but if the file exists at the remote location, the job should delete it, or raise an error if the deletion fails for some other reason, e.g., a permission issue.
Here, the file name is retrieved as a variable from previous steps of the transformation, while the directory path of the archived files is fixed.
How can I achieve this in Pentaho Kettle transformation?
Make use of the "Run SSH commands" utility to pass commands to your remote server.
Assuming you use rm -f /path/file, it won't error for a non-existent file.
You can capture the output and perform error handling as well (Filter rows and trigger the appropriate course of action).
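A hedged sketch of the command such a "Run SSH commands" step could execute; the archive path is a placeholder and ${FILENAME} stands for the variable coming from the earlier transformation steps:
# rm -f exits 0 if the file is missing, non-zero for other failures (e.g. permissions)
rm -f /archive/path/${FILENAME} && echo "DELETED ${FILENAME}" || echo "ERROR code $?"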
Alternatively, you can mount the remote directory on the machine where Kettle runs and delete the file as a regular local file.
Using SSH is, I think, non-trivial. It takes a lot of experimentation to work out the error types and find a way to distinguish between them: a failure might be an error with the SSH connection or an error deleting the file.
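If you go the mount route, sshfs is one way to do it. A hedged sketch; the host, paths, and mount point are placeholders:
sshfs archiveuser@remotehost:/archive/path /mnt/archive    # mount the remote directory
rm -f /mnt/archive/${FILENAME}                             # delete as a regular local file
fusermount -u /mnt/archive                                 # unmount when done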

Could not update .ICEauthority file

Recently I changed the permissions of the file system and gave myself all the rights. I logged out of the system and I couldn't log back in. I got the error message
Could not update ICEauthority file /home/marundu/.ICEauthority
I did a live boot with a Fedora 17 disc and replaced my .ICEauthority file with the live user's version, and it worked for a time, until I logged out again. Now the login progress screen is all that shows. I can log into command mode (Ctrl-Alt-F2), but I can't sudo - I get the error messages:
sudo: /usr/libexec/sudoers.so must be only writable by owner
sudo: fatal error, unable to load plugins
I just found a good link on Ubuntu:
Ask Ubuntu: ICEauthority permissions problem
Some things to note:
I tried the obvious things like changing file permissions, but found my whole home directory was somehow owned by root. I believe this was due to a failed package update.
I used a recovery disk (a Knoppix ISO) for ease of use: better UI.
When mounting the bad home partition, I used the most common Linux file system type (ext4).
I used 'sudo mount -o rw -t ext4 /dev/sda1 /mnt'
When changing ownership, I used the numeric user:group specification, since the recovery disk doesn't know the original system's symbolic user and group names: 'sudo chown -R 1000:1000 /mnt/home/userdir'
I verified that /home/userdir had rwx for owner, r-x for group / other. This is noted as a valid set of permissions for ICEauthority; others can work. See the linked discussion.
Hope that helps someone...
I got the "Could not update ICEauthority file" error and found that my home partition was mounted read-only, so the error made sense.
The real question was what caused the partition to become read-only. I ran "dmesg | grep -i read-only" and found that there were serious errors with the file system on my home partition, which the kernel had remounted read-only during the boot process.
I then booted from a USB key (CDROM would do as well) and ran "sudo fsck /dev/sdXY" where /dev/sdXY is the partition containing my home directory. fsck corrected a number of file system errors on my home partition.
I then rebooted after removing the USB key/CD-ROM and the problem went away.
Bottom line: Check if your home partition has file system errors. They might be the cause of this error. If so, run fsck from an external device on the partition containing your home directory.
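Putting the two answers together, a rough recovery sequence from a live USB/CD might look like this. It is a sketch only: /dev/sdXY, the mount point, the home directory layout, and the 1000:1000 uid/gid are assumptions to verify on your own system (e.g. with lsblk and /etc/passwd).
sudo fsck -f /dev/sdXY                      # repair file system errors first (partition unmounted)
sudo mount -o rw -t ext4 /dev/sdXY /mnt     # then mount the partition holding the home directory
sudo chown -R 1000:1000 /mnt/userdir        # restore ownership (path/uid/gid assumed)
sudo chmod 600 /mnt/userdir/.ICEauthority   # .ICEauthority should be owner-only
sudo umount /mnt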

Group permissions reset on mercurial update

I've configured my hg repository according to the docs described here:
MultipleCommitters.
However, when I execute "hg update -C" to recreate the working copy locally, the file permissions change in a way that eventually causes errors on push when other developers attempt to commit changes. Supposedly, when configured properly, hg update will preserve file permissions. Yet it doesn't appear to be doing so:
-rwxrwxr-x 1 root mercurial 2948 2010-06-24 15:27 .hg/store/data/src/public/index.php.i
vs. (actual source file, after deleting the working copy and recreating with "hg update -C")
-rw-r--r-- 1 root mercurial 820 2010-06-28 12:07 src/public/index.php
How can Mercurial be configured such that when users create new files or modify existing files, the group and its permissions are preserved?
UPDATE
2010.06.28
Here is a sample of the errors I'm seeing:
remote: resolving manifests
remote: getting src/configs/application.ini
remote: abort: Permission denied: /hg/repo/path/src/configs/application.ini
remote: warning: changegroup hook exited with status 255
remote: calling hook changegroup.notify: hgext.notify.hook
I had this same problem and solved it by setting the setgid bit on the remote repo directories:
find . -type d -exec chmod g+s {} +
That will solve the problem that the OP ran into.
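For context, the shared-directory setup the MultipleCommitters page describes amounts to roughly the following one-time commands on the repository server. A hedged sketch; the group name 'mercurial' and the repository path are taken from the listings above and may differ in your setup:
chgrp -R mercurial /hg/repo/path                     # the committers' group owns the repo
chmod -R g+rw /hg/repo/path                          # group members can write
find /hg/repo/path -type d -exec chmod g+s {} +      # new files inherit the group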
Which method exactly did you use? Please describe your setup in more detail.
Yes, Mercurial does remember file permissions on commit. When you do hg update -C, it will recreate files with the permissions that were set at the last commit.
Your error message seems to indicate that the repository files on the repository server have the wrong permissions/owner, so you cannot modify them with hg push. This could be because someone committed and pushed files as a different user on the repository server.
I would recommend the shared SSH method ( https://www.mercurial-scm.org/wiki/SharedSSH ): you set up a separate user account for repository management, add the developers' SSH public keys (you should restrict them to be used only with Mercurial and only with specific repositories), and then use ssh://hguser@server/path/to/repository as the URL.
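As an illustration of that restriction, an entry in the hguser account's ~/.ssh/authorized_keys can pin a developer's key to Mercurial's contrib hg-ssh wrapper. A sketch; the repository paths and the key itself are placeholders:
command="hg-ssh /srv/hg/repo1 /srv/hg/repo2",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA...developer-key... dev@example.com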
BTW: by default, Mercurial doesn't run any hooks if the user doing the push/pull is not in the trusted list. See the trusted section in man hgrc.
BTW2: don't run any regular software as root; use a normal account for that.