How to read the shadow file in Linux using a Python script - file-io

file = open('/etc/shadow', 'r')
print(file.read())
I'm getting an error like this:
file = open('/etc/shadow', 'r')
IOError: [Errno 13] Permission denied: '/etc/shadow'

On most systems /etc/shadow is owned by root, with read/write permissions for root only.
$ ls -la /etc/shadow
-rw------- 1 root root 692 Jun 10 19:24 /etc/shadow
You need to either:
Change the permissions (don't do this, it is not safe),
but you could do this by running 'chmod o+r /etc/shadow' as root. This gives the 'other' users read rights to the file.
Run your program as root. Either by
a. Starting it as root
su -c 'python myPython.py'  # you will be asked to provide the root password
b. Starting it with sudo [1]
sudo python myPython.py
This all depends on your sudo configuration, but it is your best bet other than just starting Python as root.
There is also an example of calling sudo from within Python [5]; see the sketch after this list.
c. Set setuid bit on the program [2]
This will most likely not work, as Python is an interpreted language and most modern Unix systems disallow setuid on interpreted programs (Perl being an exception), as opposed to compiled binaries.
chown root programName # Set owner to be root
chmod +s programName # This gives the program itself the right to run as root, regardless of who starts it.
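For [5], here is a minimal sketch of calling sudo from within Python: if the script is not running as root, it re-executes itself under sudo and then reads the shadow file (the re-exec approach is just one option, not the only way):
import os
import subprocess
import sys
# If we are not root, re-run this same script under sudo and exit with its code.
# sudo may prompt for a password depending on the sudoers configuration.
if os.geteuid() != 0:
    result = subprocess.run(["sudo", sys.executable] + sys.argv)
    sys.exit(result.returncode)
# From here on we are running as root and can read the shadow file.
with open('/etc/shadow', 'r') as f:
    print(f.read())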
[1] http://en.wikipedia.org/wiki/Sudo
[2] http://en.wikipedia.org/wiki/Setuid
[3] Open a file as superuser in python
[4] Setuid bit on python script : Linux vs Solaris
[5] Using sudo with Python script
The problem is not with the source code or with Python, but with not having the correct file system rights to the '/etc/shadow' file.

In Python 3 you can do:
import spwd
spwd.getspnam('username')
More information about the spwd module can be found here: https://docs.python.org/3/library/spwd.html#module-spwd
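Note that spwd still requires root privileges to read shadow entries, and spwd (together with crypt) was removed from the standard library in Python 3.13. As a sketch, assuming Python 3.12 or earlier running as root, and with placeholder credentials, you could verify a password against the stored hash like this:
import crypt
import hmac
import spwd
def check_password(username, password):
    # Requires root: spwd.getspnam() raises PermissionError otherwise.
    hashed = spwd.getspnam(username).sp_pwdp
    if hashed in ('', '*', '!', '!!'):
        return False  # locked account or no password set
    # Passing the full hash as the salt reuses its algorithm and salt.
    return hmac.compare_digest(crypt.crypt(password, hashed), hashed)
print(check_password('username', 'secret'))  # placeholder values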

Related

How to access \\wsl$\othercontainer\some\file from within a WSL container?

From Windows, I can access the file systems of all the WSL containers from under \\wsl$.
And from inside a WSL container, I can access the Windows C:\ drive as /mnt/c.
But how can I access another container's drive from inside a WSL container?
I'm trying to access \\wsl$\othercontainer\some\file from inside a WSL container.
wslpath can normally convert Windows file paths to paths accessible from WSL:
WSL2#~» wslpath 'C:\Windows\System32\drivers\etc\hosts'
/mnt/c/Windows/System32/drivers/etc/hosts
But it doesn't work for:
WSL2#~» wslpath '\\wsl$\othercontainer\some\file'
wslpath: \\wsl$\othercontainer\some\file
WSL2#~» echo $?
1
And of course:
WSL2#~» ls -l '\\wsl$\othercontainer\some\file'
ls: cannot access '\\wsl$\othercontainer\some\file': No such file or directory
This answer provided the solution:
sudo mkdir /mnt/othercontainer
sudo mount -t drvfs '\\wsl$\othercontainer' /mnt/othercontainer
ls -l /mnt/othercontainer/some/file
NOTE: It looks like symbolic links aren't supported. When one is encountered, we get an error like:
$ ls -l /mnt/othercontainer/bin
ls: cannot read symbolic link '/mnt/othercontainer/bin': Function not implemented
lrwxrwxrwx 1 root root 7 Apr 23 2020 /mnt/othercontainer/bin
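If you need to script the mount, here is a rough Python sketch wrapping the same commands (the distro name othercontainer and the mount point are assumptions; it still relies on sudo):
import subprocess
def mount_wsl_distro(distro, mount_point):
    # Builds the UNC path \\wsl$\<distro> and mounts it via drvfs, as above.
    unc = '\\\\wsl$\\' + distro
    subprocess.run(['sudo', 'mkdir', '-p', mount_point], check=True)
    subprocess.run(['sudo', 'mount', '-t', 'drvfs', unc, mount_point], check=True)
mount_wsl_distro('othercontainer', '/mnt/othercontainer')  # assumed names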

Redis: Failed opening .rdb for saving: Permission denied

I have a Redis server 2.8 installed using apt-get on Ubuntu 12.04.
I have copied a dump.rdb from another database. Now when I try to start the new server, I constantly get:
[35763] 04 Mar 01:51:47.088 * 1 changes in 900 seconds. Saving...
[35763] 04 Mar 01:51:47.088 * Background saving started by pid 43313
[43313] 04 Mar 01:51:47.088 # Failed opening .rdb for saving: Permission denied
How can I solve this?
You should check your redis.conf file for the dir and dbfilename settings. If the file named by dbfilename exists under the path specified by dir and the permissions on both are right, the problem should be fixed.
Hope this will help someone.
P.S.
To find the redis.conf location, run ps ax | grep redis; the config file is usually passed to redis-server as an argument.
The dir permissions should be 755, and the dbfilename permissions should be 644.
Sometimes you also need to use the top command to check whether the user:group of the redis-server process and the owner of dir are consistent, e.g. redis-server is running as redis:redis but dir is owned by root:root. In that case, run chown -R redis:redis on the dir.
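As a rough diagnostic, here is a Python sketch of that check: it reads dir and dbfilename from redis.conf and reports the owner and mode of each path (the config path is an assumption; adjust for your system):
import os
import pwd
CONF = '/etc/redis/redis.conf'  # assumed location; confirm with ps ax | grep redis
settings = {}
with open(CONF) as f:
    for line in f:
        parts = line.split()
        if len(parts) >= 2 and parts[0] in ('dir', 'dbfilename'):
            settings[parts[0]] = parts[1]
data_dir = settings.get('dir', '.')
rdb_path = os.path.join(data_dir, settings.get('dbfilename', 'dump.rdb'))
for path in (data_dir, rdb_path):
    if os.path.exists(path):
        st = os.stat(path)
        owner = pwd.getpwuid(st.st_uid).pw_name
        print(path, 'owner:', owner, 'mode:', oct(st.st_mode & 0o777))
    else:
        print(path, 'does not exist (Redis will try to create it)')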
None of the above worked for me. I've seen everyone being so concerned about BGSAVE, but while you're not in production, SAVE gives you a far more straightforward answer: an ERR reply. BGSAVE does not, unless you inspect the logs.
After digging through dozens of posts I did not find any clue. The only thing that fixed it was stopping the redis service and running it manually.
At first I thought it could be related to the user Redis was running as. Not at all: the actual difference was the damn systemd subsystem, which at some point in the Redis service file (/etc/systemd/system/redis.service) had the following:
ReadWriteDirectories=-/etc/redis
Wow, super cool! It turned out this was preventing Redis from accessing anywhere else in the system at all, even though the file permissions would perfectly allow it.
How naive of me to think that permissions alone were enough to ensure something had the proper rights (yes, I'm being ironic).
My /lib/systemd/system/redis-server.service file contained the following:
ReadOnlyDirectories=/
ReadWriteDirectories=-/var/lib/redis
My /etc/redis/redis.conf file stated that the database should be located in /data/redis
dir /data/redis
The systemd config file above effectively makes /data/redis read-only.
Once I changed the redis.conf file to read:
dir /var/lib/redis
I stopped getting the error.
My permission issue seemed to be the result of the Redis user being unable to modify the parent folder (/var/lib/redis/6379) for the purposes of creating a temporary file.
This was seen in an strace of the redis-server process:
open("temp-1833.rdb", O_WRONLY|O_CREAT|O_TRUNC, 0666) = -1 EACCES (Permission denied)
The issue was resolved after running the following command:
setfacl -m d:u:redis:rwX,u:redis:rwX /var/lib/redis/6379
For Windows only:
This means the user does not have permission for this.
By default the owner of this file is NETWORK SERVICE, which has very limited access and needs to be changed (as per the documentation).
Solution:
Go to your Redis folder.
Right-click --> go to Properties --> Security tab.
Click on Advanced.
Click on Add to add your user.
Click on Select a principal.
Enter your user (e.g. GLOBAL\xxx).
Click on Check Names and click OK.
Give permissions to this user.
Finally, change the owner to this user.
Check the 'dbfilename' configuration in your redis.conf. Your running Redis process has no write permission on that path.
In my case all the rights were correct (i.e. the most starred answer didn't help me). BUT! Redis was using an incorrect path to the file. In the config it was correct, but from the rails-cli it returned '/proc'.
This answer helped me - https://serverfault.com/questions/800295/redis-spontaneously-failed-failed-opening-rdb-for-saving-permission-denied
Warning
For the exact question it doesn't matter, but my situation looked like someone had hacked the server (link to explanation). So check your setup properly.
supervised systemd is intended solely for Type=notify and daemonize yes corresponds to Type=forking.
sudo vim /etc/systemd/system/redis.service
When the service file opens, set Type=forking:
[Unit]
Description=Redis In-Memory Data Store
After=network.target
[Service]
User=redis
Type=forking
Group=redis
ExecStart=/usr/bin/redis-server /etc/redis/redis.conf
ExecStop=/usr/bin/redis-cli shutdown
Restart=always
[Install]
WantedBy=multi-user.target
Open up this file
sudo vim /etc/redis/redis.conf
Add these changes to it
daemonize yes
supervised no
In my case, when I typed sudo tail -F /var/log/redis/redis-server.log, I got this log:
987:C 08 Dec 22:28:30.344 # Can't chdir to '/var/lib/redis': Permission denied
1047:C 08 Dec 22:28:30.565 # Can't chdir to '/var/lib/redis': Permission denied
1095:C 08 Dec 22:28:30.876 # Can't chdir to '/var/lib/redis': Permission denied
1119:C 08 Dec 22:28:31.165 # Can't chdir to '/var/lib/redis': Permission denied
1151:C 08 Dec 22:28:31.413 # Can't chdir to '/var/lib/redis': Permission denied
1500:C 08 Dec 22:30:44.706 # Can't chdir to '/var/lib/redis': Permission denied
1523:C 08 Dec 22:30:45.194 # Can't chdir to '/var/lib/redis': Permission denied
1545:C 08 Dec 22:30:45.442 # Can't chdir to '/var/lib/redis': Permission denied
1568:C 08 Dec 22:30:45.696 # Can't chdir to '/var/lib/redis': Permission denied
1590:C 08 Dec 22:30:45.940 # Can't chdir to '/var/lib/redis': Permission denied
That means the user redis doesn't have permission on /var/lib/redis.
That's why I typed sudo ls -l /var/lib/redis to see the permissions on this directory. I got the following output:
-rw-r--r-- 1 root root 885 Dec 8 22:12 dump.rdb
That means it was owned by root instead of redis.
Then I typed the following command to change the owner of that directory: sudo chown -R redis:redis /var/lib/redis/.
Then I restarted redis-server by the following command: sudo systemctl restart redis-server.
Boom!! It worked.
Hope this works for someone who has a similar issue to mine.
If anyone encounters this again and doesn't have a problem upgrading, just upgrade your Redis installation to the latest version. I encountered this problem with Redis 2.8.15 and upgraded to Redis 2.8.22, which was available at the time of this writing. A sysadmin in my company assured me that this was a bug in Redis 2.8.15, and the problem just went away after upgrading.
I had the same issue with Redis used by Sidekiq in a Rails application; rm -rf ./tmp/ worked like a charm.
I spent some time on this until I realised that my command-line session was running on the wrong drive :/ Just in case this helps anyone else!
The lock file in the log directory is what was causing this error for me. I was able to clear the error by deleting the lock file:
rm /var/log/redis/lock
This happened when another system was restored to this one while redis was still running.
No one has mentioned SELinux yet.
On CentOS you will most probably get this error when the SELinux mode is enforcing.
Just check getenforce, and if it is set to 'enforcing', run setenforce 0 and try starting the service one more time.
If you are on Windows and the Redis folder was installed in C:\Program Files\Redis for example, you will have a problem with access permissions. Modifying files within the Program Files folder usually requires administrator permission, and dump.rdb falls within this context. In your redis.conf, change the default directory to anywhere outside the Program Files folder:
from: dir ./
to: dir ../../Exceptions/Redis/
Note that I went up two directories to get out of the Program Files folder, and outside of it I created a directory (C:\Exceptions\Redis). In this directory Redis can save the .rdb file without permission problems.

Why doesn't setting the SUID bit in OpenBSD set effective and saved UIDs to executable file owner?

I am using a fresh install of OpenBSD 5.3 as a guest OS on Parallels for Mac:
$ uname -a
OpenBSD openbsd.localdomain 5.3 GENERIC#53 amd64
To my surprise, a binary file owned by root with its SUID bit set runs with UIDs as if the SUID was not set. That is, when UID 1000 runs such a program, the program starts in state:
<real_uid, effective_uid, saved_uid> = <1000, 1000, 1000>
and not in state:
<real_uid, effective_uid, saved_uid> = <1000, 0, 0>
as expected.
Why is this the case?
Here are the details regarding how I found the issue:
I have written an interactive C program (compiled as setuid_min.bin) for evaluating setuid behaviour in different Unix systems. The program lives in a subdirectory of UID 1000's home directory, and the sudo command is used to change ownership and SUID; then the program is run and I enter the uid command to report the real, effective, and saved UIDs of the process:
$ sudo chown root:staff setuid_min.bin
$ ls -l | grep 'setuid_min\.bin$'
-rwxr-xr-x 1 root staff [...] setuid_min.bin
$ sudo chmod a+s setuid_min.bin
$ ls -l | grep 'setuid_min\.bin$'
-rwsr-sr-x 1 root staff [...] setuid_min.bin
$ ./setuid_min.bin
uid
1000 1000 1000 some_pid
exit
$
Note that some_pid above is the pid of the setuid_min.bin process. The program reports the real UID, effective UID, and saved UID by reporting the output of the following shell command:
ps -ao ruid,uid,svuid,pid | grep '[ ]my_pid$'
where my_pid is the pid reported by getpid(). My only guess as to why this might be the case is that OpenBSD has some underlying permissions structure that is using the ownership/permissions of the directory where setuid_min.bin resides, or that it is not actually changing the ownership/SUID bit when an unprivileged user uses sudo to change file permissions.
Most likely your binary is in one of the default partitions that are mounted "nosuid". The default fstab the install script creates will mount everything nosuid unless it's known to contain suid binaries.
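One quick way to confirm the nosuid theory is to check the mount flags of the filesystem that contains the binary. A small Python sketch (assuming the binary path from the question and a Python build that exposes os.statvfs and os.ST_NOSUID):
import os
path = './setuid_min.bin'  # the binary from the question
flags = os.statvfs(path).f_flag
if flags & os.ST_NOSUID:
    print('The filesystem containing', path, 'is mounted nosuid; the setuid bit is ignored.')
else:
    print('nosuid is not set here; look for another explanation.')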

How can I mount an S3 volume with proper permissions using FUSE

I have an Amazon S3 bucket (let's call it static.example.com) that I need to mount on an EC2 instance (Ubuntu 12.04.2). I've installed s3fs. I'm able to mount the volume, but I can't write to the bucket. I have tried:
sudo s3fs static.example.com -o use_cache=/tmp,allow_other,uid=33,gid=33 /mnt/static.example.com
I can then cd /mnt and ls -la to see:
drwxr-xr-x 5 root root 4096 Mar 28 18:03 .
drwxr-xr-x 25 root root 4096 Feb 19 19:22 ..
lrwxrwxrwx 1 root root 7 Feb 21 19:19 httpd -> /httpd/
drwx------ 2 root root 16384 Oct 9 2012 lost+found
drwxr-xr-x 1 www-data www-data 0 Jan 1 1970 static.example.com
This all looks good, but when I cd static.example.com and mkdir test, I get:
mkdir: cannot create directory `test': Permission denied
The only way I can actually create a directory or touch a file is to force it with sudo. This is not a viable option, however, because I want to write files to the bucket from Apache. My Apache server runs as user:group www-data. Running mount yields:
s3fs on /mnt/static.example.com type fuse.s3fs (rw,nosuid,nodev,allow_other)
How can I mount this bucket in a manner that will allow me to write to the bucket?
I'm the lead developer and maintainer of the open-source project RioFS: a userspace filesystem to mount Amazon S3 buckets.
Our project is an alternative to s3fs; its main advantages compared to s3fs are simplicity, the speed of operations and bug-free code. Currently the project is in the beta state, but it's been running on several high-loaded fileservers for quite some time.
We are seeking more people to join our project and help with the testing. From our side we offer quick bug fixes and will listen to your requests to add new features.
Regarding your issue:
If you use RioFS, you could mount a bucket and have write access to it using the following command (assuming you have installed RioFS and have exported the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables):
riofs -o allow_other http://s3.amazonaws.com bucket_name /mnt/static.example.com
(please refer to project description for command line arguments)
Please note that the project is still in development; there could still be a number of bugs left.
If you find that something doesn't work as expected, please file an issue report on the project's GitHub page.
Hope it helps, and we are looking forward to seeing you join our community!
This works for me:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache
If you need to debug, just add ,f2 -f -d:
sudo s3fs bucketname /mnt/folder -o allow_other,nosuid,use_cache=/mnt/foldercache,f2 -f -d
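To confirm that the mount is actually writable by the intended user (for example www-data), a small Python write test like the one below can be run as that user; the mount point is the one from the question:
import os
import tempfile
MOUNT = '/mnt/static.example.com'  # mount point from the question
try:
    # Create and remove a throwaway file to prove we can write to the bucket.
    fd, path = tempfile.mkstemp(dir=MOUNT, prefix='write-test-')
    os.close(fd)
    os.remove(path)
    print(MOUNT, 'is writable by uid', os.getuid())
except OSError as exc:
    print('Cannot write to', MOUNT, ':', exc)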
Try this method using S3Backer:
mountpoint/
file # (e.g., can be used as a virtual loopback)
stats # human readable statistics
Read more about it here:
http://www.turnkeylinux.org/blog/exploring-s3-based-filesystems-s3fs-and-s3backer

~/.ec2/id_rsa-gsg-keypair not accessible: No such file or directory

I'm trying to launch a Hadoop cluster on Amazon ec2, using the instructions in "Hadoop in Action" (also here: http://wiki.apache.org/hadoop/AmazonEC2).
I've set up my private ssh key and configurations, but when I try to launch a cluster using the command-line tools:
hadoop-ec2 launch-cluster test-cluster 2
I repeatedly get this error:
Warning: Identity file ~/.ec2/id_rsa-gsg-keypair not accessible: No such file or directory.
Permission denied (publickey,gssapi-with-mic).
The ~/.ec2/id_rsa-gsg-keypair definitely exists, though, and I did chmod 600 it:
> chmod 600 ~/.ec2/id_rsa-gsg-keypair
> ls -l id_rsa-gsg-keypair
-rw------- 1 my-username
Any idea what's wrong?
You may have already realized this, but the problem is possibly related to the ~/ path usage. Try using the absolute path /home/username/.ec2 instead.
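As a quick sanity check, here is a small Python sketch that expands the ~ path and verifies the key file exists with 0600 permissions (the key path is the one from the question):
import os
import stat
key = os.path.expanduser('~/.ec2/id_rsa-gsg-keypair')
print('Resolved path:', key)
if not os.path.isfile(key):
    print('Key file not found; pass the absolute path to the tool instead of ~/...')
else:
    mode = stat.S_IMODE(os.stat(key).st_mode)
    print('Permissions:', oct(mode), '(should be 0o600)')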