How do you automatically set permissions on a file when it is uploaded using SFTP?

Hello, I currently have a folder set up that files can be uploaded to using SFTP.
drwxrwxr-x. 2 cypress cypress 4096 Apr 30 15:24 sourceit
But when a file gets uploaded, it arrives as
-rw-r--r--. 1 cypress sftpusrs 7 Apr 30 15:24 test.file
What do I have to do so that when a file gets uploaded, its permissions are automatically set to
-rw-rw-r--. 1 cypress sftpusrs 7 Apr 30 15:24 test.file
Thank you for your help.
I currently have everything set up in the OpenSSH sshd_config for SFTP:
Match User cypress
ChrootDirectory /mnt/cypress
AllowTcpForwarding no
X11Forwarding no
ForceCommand internal-sftp

Modify or add this line to your sshd_config:
ForceCommand internal-sftp -u 002
which should apply a umask of 002 to files created over SFTP.
With umask or SFTP, there is no way to automatically make an uploaded file executable; that would be a huge security risk. You must run chmod as a separate command to do that.
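For reference, the whole Match block with the umask option added might look like this (a sketch based on the config above; adjust the user and chroot path to your setup, and restart sshd afterwards):
Match User cypress
ChrootDirectory /mnt/cypress
AllowTcpForwarding no
X11Forwarding no
ForceCommand internal-sftp -u 002
The -u option requires a reasonably recent OpenSSH (it was added around version 5.4).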


How to authenticate an LDAP user and log in to the server directly via the GUI

I am new to system administration. My problem: in my department there are 30 first-year students and 30 second-year students, divided into two groups, say group1 and group2, who need to log in as LDAP users via the Ubuntu (14.04) GUI from any system connected to the LAN. Every user's home directory should be created on the server side and mounted at GUI login; no user should be able to access anyone else's home directory.
[I don't want to authenticate the user against the LDAP server and create the home directory on the local machine; instead I want a central directory on the server side, so that it looks like logging in to the server.]
Server side: Ubuntu 14.04
I tried this and it works fine for me.
Client side: Ubuntu 14.04
I tried this, and it also works,
but the issue is that this tutorial creates the home directory on the local machine instead of mounting the server directory. I know where it does this.
What I want: if I log in as an LDAP user, it should log in to the server-side home directory via the GUI, not a home directory on the local machine.
On the client side, the file /var/log/auth.log shows:
Jul 28 11:53:06 issc systemd-logind[674]: System is rebooting.
Jul 28 11:53:23 issc systemd-logind[650]: New seat seat0.
Jul 28 11:53:23 issc systemd-logind[650]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 28 11:53:23 issc systemd-logind[650]: Watching system buttons on /dev/input/event4 (Video Bus)
Jul 28 11:53:23 issc systemd-logind[650]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 28 11:53:24 issc sshd[833]: Server listening on 0.0.0.0 port 22.
Jul 28 11:53:24 issc sshd[833]: Server listening on :: port 22.
Jul 28 11:53:25 issc lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Jul 28 11:53:25 issc lightdm: PAM adding faulty module: pam_kwallet.so
Jul 28 11:53:25 issc lightdm: pam_unix(lightdm-greeter:session): session opened for user lightdm by (uid=0)
Jul 28 11:53:25 issc systemd-logind[650]: New session c1 of user lightdm.
Jul 28 11:53:25 issc systemd-logind[650]: Linked /tmp/.X11-unix/X0 to /run/user/112/X11-display.
Jul 28 11:53:26 issc lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Jul 28 11:53:26 issc lightdm: PAM adding faulty module: pam_kwallet.so
Jul 28 11:53:26 issc lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "scicomp"
Jul 28 11:53:29 issc lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Please help me. I have tried many tutorials online and every tutorial looks the same, like this one. I have been trying for the last 2 weeks and it is not working. Thank you for your time.
You need to install and configure autofs for this to work. autofs will automatically mount users' home directories on the client machine from an NFS server. I'm not sure about creating them on the server on the fly, but if that works, you will likely need to enable the pam_mkhomedir module in the appropriate /etc/pam.d file(s), as described here.
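As a sketch of that PAM change (assuming Ubuntu's common-session layout; file names and options may differ on your distribution):
# /etc/pam.d/common-session -- create a missing home directory at first login
session optional pam_mkhomedir.so skel=/etc/skel umask=0077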
Yep! I tried it and it worked for me.
Server side: the package you need to install:
$ sudo apt-get install nfs-kernel-server
Update the exports file like this:
abdulrahim@issc-ldap:/ldap/batch2016part2$ sudo vi /etc/exports
#/homes 198.1.10.*(fsid=0,rw,insecure,no_subtree_check,sync)
/ldap/batch2015part1/home 198.1.10.*(fsid=1,rw,insecure,no_subtree_check,sync)
/ldap/batch2015part2/home 198.1.10.*(fsid=2,rw,insecure,no_subtree_check,sync)
Export as shown below:
abdulrahim@issc-ldap:/ldap/batch2016part2$ sudo exportfs -r
root@issc-ldap:/ldap/rnd# showmount -e 198.1.10.45
Export list for 198.1.10.45:
/ldap/batch2015part1/home
/ldap/batch2015part2/home
On the client side: the package you need to install (on Ubuntu the NFS client tools are in nfs-common):
$ sudo apt-get install nfs-common
Now, on the client side, set up the mount, permissions and ownership:
$ sudo gedit /etc/fstab
# below are partitions mounted from the server
198.1.10.45:/ldap/batch2015part1/home /ldap/batch2015part1/home nfs nfsvers=3,sync 0 3
198.1.10.45:/ldap/batch2015part2/home /ldap/batch2015part2/home nfs nfsvers=3,sync 0 4
# or like this:
198.1.10.45:/ldap/batch2015part1/home /ldap/batch2015part1/home nfs noauto,x-systemd.automount 0 3
198.1.10.45:/ldap/batch2015part2/home /ldap/batch2015part2/home nfs noauto,x-systemd.automount 0 4
Now mount all partitions from the server side as shown below:
$ sudo mount -a
Check the mounted partitions with the command below:
$ df -h
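If you'd rather avoid static fstab entries, the autofs route mentioned in the earlier answer could look roughly like this (a sketch reusing the same server IP and export path; the map file name /etc/auto.home.part1 is illustrative):
# /etc/auto.master -- one automounted tree per line
/ldap/batch2015part1/home /etc/auto.home.part1 --timeout=60
# /etc/auto.home.part1 -- wildcard map: mount each user's directory on demand
* -fstype=nfs,nfsvers=3 198.1.10.45:/ldap/batch2015part1/home/&
Then restart the service with sudo service autofs restart.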

Test SSH connection between Avi Vantage Controller and Service Engine Host

The Avi docs say to add an SSH public key to the known_hosts file on the SE hosts so the controller can log in, then install and start the service engine.
I'm pretty sure this isn't working properly. How can I test the SSH connection between the controller and the service engine host(s)? Where is the controller's private key stored?
We will automatically test the SSH connection and display status as appropriate. For security reasons, the configured private key is not stored in plain text anywhere on the file system.
Did you "create" a ssh key or "import" a ssh key - if you imported, you could use plain ssh -i <path-to-imported-private-key user#host from your workstation where the private key resides.
Refer to @Aziz's comment for details on the host status display. Also note the correction about authorized_keys (not authorized_hosts).
I am guessing this is in reference to creating a "LinuxServer" cloud in Avi. On Avi, you have to do the following:
1) Configure an SSHUser (Administration > Settings > SSH Key Settings). Alternatively, this can also be created from the UI during LinuxServer cloud creation.
2) Create the LinuxServer cloud (Infrastructure > Clouds) with the appropriate hosts and select the SSHUser from the dropdown.
The configured SSH keys are stored encrypted in the Avi controller DB and are not exposed via the API/REST or on the file system. The Avi Controller modules use the decrypted key to connect to each host and provision the SE.
I suppose the docs are not clear - you don't add the Avi Controller's public key to each host; instead you add "your" custom SSH key pair into the Avi Controller (via step 1 above) and add the corresponding public key on each host.
With regards to "testing" the SSH connection, since these are your owned keys, you can plain "ssh -i username#host" to test the SSH. Alternatively, the Cloud status will also provide information if SSH using the configured key failed for any reason.
Please refer to http://kb.avinetworks.com/installing-avi-vantage-for-a-linux-server-cloud/ for the complete install guide.
Let me know if your question was related to a different Cloud/Topic.
Adding to what @Siva explained, the status of the connection is displayed on the controller's cloud page (from the menu Infrastructure -> Clouds, click on the cloud where the hosts were added). Also, if you hover the mouse over the State column of a host, you can see the detailed reason for a failure.
This is the host status display for a Linux server cloud. In this case "Default-Cloud" is a Linux server cloud with 3 hosts, and SSH fails on one of them. The host 10.10.99.199 is a fake entry, i.e. there is no host with that IP, hence SSH fails, whereas 10.10.22.71 and 10.10.22.35 are hosts for which the SSH credentials passed; the Service Engine was then deployed on them and they are ready for virtual services (load balancing, SSL termination, etc.) to be placed on them.
@Davidn Coleman, in the comment you mentioned that you added the public key to authorized_hosts (you need to add the key to authorized_keys). Also, if the user for whom you added the SSH authorization is not root (i.e. the key is in /home/user/.ssh/authorized_keys), make sure the user is a sudoer (add an entry in /etc/sudoers for this user) and that the permissions on the .ssh directory and authorized_keys are set correctly (for security reasons and good practice).
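For reference, the usual ownership and permission fix looks like this (a sketch; substitute the real user name for "user"):
chown -R user:user /home/user/.ssh
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/authorized_keys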
The following is the snippet for the host 10.10.22.35.
[root@localhost ~]# ls -lrtha
total 318M
-rw-r--r--. 1 root root 129 Dec 28 2013 .tcshrc
-rw-r--r--. 1 root root 100 Dec 28 2013 .cshrc
-rw-r--r--. 1 root root 176 Dec 28 2013 .bashrc
-rw-r--r--. 1 root root 176 Dec 28 2013 .bash_profile
-rw-r--r--. 1 root root 18 Dec 28 2013 .bash_logout
-rw-------. 1 root root 1.2K May 27 13:56 anaconda-ks.cfg
drwxr-xr-x. 3 root root 17 May 27 14:07 .cache
drwxr-xr-x. 3 root root 17 May 27 14:07 .config
dr-xr-xr-x. 17 root root 4.0K May 31 08:15 ..
drwxr-----. 3 root root 18 May 31 08:25 .pki
-rw-------. 1 root root 1.9K May 31 08:46 .viminfo
drwx------. 2 root root 28 May 31 09:09 .ssh
-rw-r--r--. 1 root root 317M May 31 09:13 se_docker.tgz
-rw-r--r--. 1 root root 1.2M May 31 09:13 dpdk_klms.tar.gz
dr-xr-x---. 6 root root 4.0K May 31 09:14 .
-rw-r--r--. 1 root root 1.1K May 31 09:14 avise.service
-rw-------. 1 root root 3.4K Jun 1 09:14 .bash_history
[root@localhost ~]# ls -lrtha .ssh/
total 8.0K
-rw-r--r--. 1 root root 399 May 31 09:09 authorized_keys
drwx------. 2 root root 28 May 31 09:09 .
dr-xr-x---. 6 root root 4.0K May 31 09:14 ..
[root@localhost ~]# pwd
/root

Jenkins Slave Permission Denied while copying slave.jar

I get a permission denied error but don't know why. From my Jenkins master I was able to run the following command using my SSH RSA key:
scp /var/cache/jenkins/war/WEB-INF/slave.jar jenkins@<my_slave_host>:/var/jenkins/
Note: I did manually create /var/jenkins/ on my slave host when I saw it didn't exist, and made it owned by the jenkins user. My Jenkins master is configured to connect as jenkins@mySlaveHost using SSH keys.
Any idea why I'm getting permission denied? What is it trying to do?
Here's the log from the Jenkins master after clicking the [Launch slave agent] button:
[02/27/15 15:18:01] [SSH] Opening SSH connection to <my_slave_host>:22.
[02/27/15 15:18:02] [SSH] Authentication successful.
[02/27/15 15:18:03] [SSH] The remote users environment is:
BASH=/bin/bash
BASHOPTS=cmdhist:complete_fullquote:extquote:force_fignore:hostcomplete:interactive_comments:progcomp:promptvars:sourcepath
BASH_ALIASES=()
BASH_ARGC=()
BASH_ARGV=()
BASH_CMDS=()
BASH_EXECUTION_STRING=set
BASH_LINENO=()
BASH_SOURCE=()
BASH_VERSINFO=([0]="4" [1]="3" [2]="11" [3]="1" [4]="release" [5]="x86_64-pc-linux-gnu")
BASH_VERSION='4.3.11(1)-release'
CATALINA_HOME=/opt/tomcat/current
DIRSTACK=()
EUID=107
GROUPS=()
HOME=/var/lib/jenkins
HOSTNAME=*********** REMOVED***********
HOSTTYPE=x86_64
IFS=$' \t\n'
JAVA_HOME=/usr/lib/jvm/java-7-oracle
LANG=en_US.UTF-8
LOGNAME=jenkins
MACHTYPE=x86_64-pc-linux-gnu
MAIL=/var/mail/jenkins
OPTERR=1
OPTIND=1
OSTYPE=linux-gnu
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
PIPESTATUS=([0]="0")
PPID=10592
PS4='+ '
PWD=/var/lib/jenkins
SHELL=/bin/bash
SHELLOPTS=braceexpand:hashall:interactive-comments
SHLVL=1
SSH_CLIENT='10.57.13.32 34436 22'
SSH_CONNECTION='10.57.13.32 34436 10.57.6.42 22'
TERM=dumb
UID=107
USER=jenkins
XDG_RUNTIME_DIR=/run/user/107
XDG_SESSION_ID=42
_=']'
[02/27/15 15:18:03] [SSH] Checking java version of java
[02/27/15 15:18:04] [SSH] java -version returned 1.7.0_76.
[02/27/15 15:18:04] [SSH] Starting sftp client.
[02/27/15 15:18:04] [SSH] Copying latest slave.jar...
hudson.util.IOException2: Could not copy slave.jar into '/var/jenkins' on slave
at hudson.plugins.sshslaves.SSHLauncher.copySlaveJar(SSHLauncher.java:1019)
at hudson.plugins.sshslaves.SSHLauncher.access$300(SSHLauncher.java:133)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:709)
at hudson.plugins.sshslaves.SSHLauncher$2.call(SSHLauncher.java:696)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: hudson.util.IOException2: Could not copy slave.jar to '/var/jenkins/slave.jar' on slave
at hudson.plugins.sshslaves.SSHLauncher.copySlaveJar(SSHLauncher.java:1016)
... 7 more
Caused by: com.trilead.ssh2.SFTPException: Permission denied (SSH_FX_PERMISSION_DENIED: The user does not have sufficient permissions to perform the operation.)
at com.trilead.ssh2.SFTPv3Client.openFile(SFTPv3Client.java:1201)
at com.trilead.ssh2.SFTPv3Client.createFile(SFTPv3Client.java:1074)
at com.trilead.ssh2.SFTPv3Client.createFile(SFTPv3Client.java:1055)
at hudson.plugins.sshslaves.SFTPClient.writeToFile(SFTPClient.java:93)
at hudson.plugins.sshslaves.SSHLauncher.copySlaveJar(SSHLauncher.java:1008)
... 7 more
[02/27/15 15:18:04] Launch failed - cleaning up connection
[02/27/15 15:18:04] [SSH] Connection closed.
Edit:
Here's /var/jenkins on the slave:
$ ls -al
total 436
drwxr-xr-x 2 jenkins jenkins 22 Feb 27 15:17 .
drwxr-xr-x 14 root root 4096 Feb 27 15:12 ..
-rw-r--r-- 1 jenkins jenkins 439584 Feb 27 15:17 slave.jar
As for SFTP, I do not think it is enabled. Can you point me to any docs that say SFTP is a prerequisite for a slave? All the pages I've seen do not mention SFTP.
It looks like the problem is tied to your Remote root directory setting. That needs to be the location of slave.jar, as Jenkins will try to execute it from there.
As for the permissions, the Remote root directory (whatever you set it to) needs to be configured to allow Jenkins to access it.
Therefore, if you change your Remote root directory setting to /var/jenkins/ in your case, it should launch the Jenkins slave successfully.
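A quick way to verify that from the master before relaunching the agent (a hypothetical check; adjust the host and path to your setup):
# can the jenkins user actually write to the remote root directory?
ssh jenkins@<my_slave_host> 'touch /var/jenkins/.write_test && rm /var/jenkins/.write_test && echo writable'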
Granting full permissions worked for me:
sudo chmod -R 777 /var/lib/jenkins
sudo chmod -R 777 /var/jenkins
Make sure that the location the jar needs to be copied to is owned by the logged-in user (i.e. jenkins).
Check the permissions using:
ls -l directory_name
Most probably you'll find another owner, so change the owner with:
chown -R username:username directory_name
That worked for me!
For anyone with an external drive, check that it's mounted correctly:
drwxrwxrwx+ 2 App admin 68 Aug 25 19:33 Jenkins_Support
drwxrwxr-x 19 App staff 714 Sep 25 10:46 Jenkins_Support 1
This might be the problem.
If you changed the user used to connect to the slave, please also make sure the slave destination directory is empty (not containing a slave.jar copied there by the previous user).
This is kind of stupid, but it cost me some time.
In an Ubuntu terminal, check:
service ufw status
If active:
service ufw stop
In a Red Hat terminal, check:
service iptables status
If active:
service iptables stop
service ip6tables status
If active:
service ip6tables stop
Then check the Jenkins slave node status.

Laravel in Google Compute Engine - permission denied for log files

I am trying to install a Laravel project on Google Compute Engine with "Red Hat Enterprise Linux Server 7".
I followed this blog: http://tecadmin.net/install-laravel-framework-on-centos/
I completed the Laravel project download and set up permissions for user "apache" and group "apache". After all this, I am getting the error:
Error in exception handler: The stream or file "/var/www/html/project/app/storage/logs/laravel.log" could not be opened: failed to open stream: Permission denied in /var/www/html/project/bootstrap/compiled.php:9072
Whoever had this problem earlier mentions the solution as setting proper permissions on the log files. I have verified that the app/storage folder has the correct permissions.
I know I am missing something very simple, but could not get this working.
Any help will be greatly appreciated.
UPDATE:
These are the permissions I have applied:
chown -R apache:apache project
chmod 775 project
chmod 775 project/app/storage
chmod -R 777 project/app/storage
And these are the permissions I can see for the folder:
drwxrwxr-x. 7 apache apache 4096 Dec 23 13:54
drwxrwxr-x. 7 apache apache 84 Dec 23 13:53 storage
-rwxrwxrwx. 1 apache apache 0 Dec 23 14:01 laravel.log
I was not able to figure out if this is an RHEL 7 issue. I gave up on it after a while and created a VM with CentOS 6, which is now working properly. Thanks a lot @ykbks for helping me with this.
You need to disable SELinux.
~]# cat /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
# SETLOCALDEFS= Check local definition changes
SETLOCALDEFS=0
Changing the value of SELINUX to disabled changes the state of SELinux and the name of the policy to be used the next time the system boots.
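Editing that file only takes effect at the next boot. To apply a change immediately, or to keep SELinux enabled and just relabel the Laravel storage directory so httpd can write to it, something like the following should work (a sketch; the path is taken from the error message above):
# switch to permissive mode right away (no reboot needed)
sudo setenforce 0
# or, instead of disabling SELinux, allow httpd to write the storage directory
sudo chcon -R -t httpd_sys_rw_content_t /var/www/html/project/app/storage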

User gets instantly disconnected after connection successful on a chrooted SSH

I configured a chroot jail for SSH following this tutorial.
I found another question on Stack Overflow dealing with the same problem; however, the answers didn't work for me either.
The auth.log file contains the following:
Mar 16 18:36:06 *** sshd[30509]: Accepted password for thenewone from x.x.x.x port 49583 ssh2
Mar 16 18:36:06 *** sshd[30509]: pam_unix(sshd:session): session opened for user thenewone by (uid=0)
Mar 16 18:36:07 *** sshd[30509]: lastlog_openseek: Couldn't stat /var/log/lastlog: No such file or directory
Mar 16 18:36:07 *** sshd[30509]: lastlog_openseek: Couldn't stat /var/log/lastlog: No such file or directory
Mar 16 18:36:07 *** sshd[30509]: pam_unix(sshd:session): session closed for user thenewone
My sshd_config file contains the following:
Match User thenewone
ChrootDirectory /home/thenewone
AllowTcpForwarding no
X11Forwarding no
My /home/thenewone directory is owned by root:root and contains the chrooted system (all files owned by root:root except /home/thenewone/home/thenewone).
I don't understand why the connection succeeds and then simply closes.
Problem found: some binary dependencies were missing, even for the shell associated with the chrooted account...
The shell failed to load --> disconnection!
If you are experiencing the same trouble as me, use ldd <binary> to find all the dependencies needed in the chroot jail.
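As a minimal sketch of populating the jail with a working shell (paths are illustrative; take the actual library list from ldd's output on your system):
# list the shared libraries the shell needs
ldd /bin/bash
# copy the binary and every listed library into the jail, preserving paths
mkdir -p /home/thenewone/bin /home/thenewone/lib/x86_64-linux-gnu /home/thenewone/lib64
cp /bin/bash /home/thenewone/bin/
cp /lib/x86_64-linux-gnu/libc.so.6 /home/thenewone/lib/x86_64-linux-gnu/
cp /lib64/ld-linux-x86-64.so.2 /home/thenewone/lib64/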