Issue copying a configuration file from a TFTP server to a Cisco IOS router

I have an Ubuntu machine that acts as a TFTP server. I want to configure my Cisco IOS routers to load their configuration from this TFTP server at boot time.
I have a few doubts:
Where should I store the configuration file for my Cisco router on the TFTP server?
Currently, I have created two temp folders in /var/lib/tftpboot:
automation@automation:/var/lib/tftpboot$ ls -l
total 8
drwx------ 2 tftp tftp 4096 Mar 31 15:37 ExrZHRa-incoming
drwxr-xr-x 2 root root 4096 Mar 31 15:52 TXJla-outgoing
automation@automation:/var/lib/tftpboot$ tree
.
├── ExrZHRa-incoming [error opening dir]
└── TXJla-outgoing
└── R1.txt
2 directories, 1 file
As per Cisco's documentation, this is the syntax to get a file from a TFTP server:
copy tftp: [[[//location ]/directory ]/filename ] nvram:startup-config
Example:
Device# copy tftp://server1/dir10/datasource nvram:startup-config
As per my understanding, the location is the IP address of my TFTP server and the filename is the actual config file I want to load. But what should go in the directory part? I tried /var/lib/tftpboot/TXJla-outgoing, but it didn't work and the copy failed with an error.

Shouldn't it be just
copy tftp://192.168.1.1/R1.txt running-config
It looks like you are using tftpd-hpa. Did you follow this guide?
https://medium.com/@Sciri/configuring-a-tftp-server-on-ubuntu-for-switch-upgrades-and-maintenance-caf5b6833148

Try this:
copy tftp://192.168.1.1/TXJla-outgoing/R1.txt nvram:startup-config
The path of anything you download via TFTP is relative to the root of your TFTP server, which is /var/lib/tftpboot.
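As a quick sanity check, you can fetch the file with a standalone TFTP client before involving the router. This is a sketch assuming the tftp-hpa client is installed on another Linux host on the LAN:
# Path is relative to the server root, so /var/lib/tftpboot is never spelled out
tftp 192.168.1.1 -c get TXJla-outgoing/R1.txt
If this download succeeds, the same relative path should work in the router's copy command.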


How to start `redis-sentinel` server successfully

Sorry, Redis newbie here.
When I run redis-sentinel, I get:
42533:X 10 Nov 21:21:30.345 # Warning: no config file specified, using the default config. In order to specify a config file use redis-sentinel /path/to/sentinel.conf
42533:X 10 Nov 21:21:30.346 * Increased maximum number of open files to 10032 (it was originally set to 7168).
Redis 3.0.4 (00000000/0) 64 bit
Running in sentinel mode
Port: 26379
PID: 42533
http://redis.io
42533:X 10 Nov 21:21:30.347 # Sentinel runid is 733213860cf470431c7441e5d6aaf9ed9b2d7c2f
42533:X 10 Nov 21:21:30.347 # Sentinel started without a config file. Exiting...
What am I missing? Do I need a configuration file? If so, where should my /path/to/sentinel.conf be?
It is mandatory to use a configuration file when running Sentinel, as this file will be used by the system in order to save the current state that will be reloaded in case of restarts. Sentinel will simply refuse to start if no configuration file is given or if the configuration file path is not writable.
You can run Sentinel with the following command line:
redis-sentinel /path/to/sentinel.conf
Alternatively, you can use the redis-server executable directly, starting it in Sentinel mode:
redis-server /path/to/sentinel.conf --sentinel
You can put the file anywhere you want; just make sure you provide the right path. For example, on Linux, if the file is in your home directory, the command would be:
redis-sentinel ~/sentinel.conf
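If you don't have a config file yet, a minimal sentinel.conf looks something like the sketch below; the master name mymaster, the address 127.0.0.1:6379, and the quorum of 2 are placeholder assumptions to adapt:
# Minimal Sentinel config sketch; all values here are assumptions
port 26379
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
Keep in mind that Sentinel rewrites this file at runtime to persist its state, so the path must be writable.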

TFTP Connect Request Failed

I am trying to set up a TFTP server on Windows Server 2012 R2 for a university project, and I am running into an issue when trying to GET or PUT anything on the server.
The command tftp -i 192.168.2.10 put C:\test.txt on the server itself results in
Error on server : ???????????????????? .
Connect request failed
I've made sure ports UDP 69 and TCP 8099 are open both inbound and outbound, and I've confirmed that the path to the test file is correct, because inserting a typo results in a "can't read from local file" error instead.
The ReadFilter value under the registry path HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\WDSServer\Providers\WDSTFTP is set to \* and the RootFolder value is set to C:\TFTP, which does exist on the server.

How to authenticate an LDAP user and log in to the server directly via the GUI

I am new to system administration. My problem: in my department there are 30 students in 1st year and 30 students in 2nd year, divided into two groups, say group1 and group2, which need to log in as LDAP users via the Ubuntu 14.04 GUI from any system connected to the LAN. Every user's home directory should be created on the server side and mounted at GUI login on Ubuntu 14.04, and no user should be able to access anyone else's home directory.
[I don't want to authenticate the user against the LDAP server and create the home directory on the local machine; instead I want a central directory on the server side, so it looks like logging in to the server.]
Server side: Ubuntu 14.04
I tried this and it works fine for me.
Client side: Ubuntu 14.04
I tried this, and it also works,
but the issue is that this tutorial creates the home directory on the local machine instead of mounting the server directory. I know where in the tutorial this happens.
What I want: when I log in as an LDAP user, the session should use the home directory on the server via the GUI, not a home directory on the local machine.
On the client side, /var/log/auth.log shows:
Jul 28 11:53:06 issc systemd-logind[674]: System is rebooting.
Jul 28 11:53:23 issc systemd-logind[650]: New seat seat0.
Jul 28 11:53:23 issc systemd-logind[650]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 28 11:53:23 issc systemd-logind[650]: Watching system buttons on /dev/input/event4 (Video Bus)
Jul 28 11:53:23 issc systemd-logind[650]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 28 11:53:24 issc sshd[833]: Server listening on 0.0.0.0 port 22.
Jul 28 11:53:24 issc sshd[833]: Server listening on :: port 22.
Jul 28 11:53:25 issc lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Jul 28 11:53:25 issc lightdm: PAM adding faulty module: pam_kwallet.so
Jul 28 11:53:25 issc lightdm: pam_unix(lightdm-greeter:session): session opened for user lightdm by (uid=0)
Jul 28 11:53:25 issc systemd-logind[650]: New session c1 of user lightdm.
Jul 28 11:53:25 issc systemd-logind[650]: Linked /tmp/.X11-unix/X0 to /run/user/112/X11-display.
Jul 28 11:53:26 issc lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Jul 28 11:53:26 issc lightdm: PAM adding faulty module: pam_kwallet.so
Jul 28 11:53:26 issc lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "scicomp"
Jul 28 11:53:29 issc lightdm: PAM unable to dlopen(pam_kwallet.so): /lib/security/pam_kwallet.so: cannot open shared object file: No such file or directory
Please help me; I have tried many tutorials online and they all look the same, like this one. I have been trying for the last two weeks and it's not working. Thank you for your time.
You need to install and configure autofs for this to work. autofs will automatically mount users' home directories on the client machine from an NFS server. I'm not sure about creating them on the server on the fly, but if that does work, you will likely need to enable the pam_mkhomedir module in the appropriate /etc/pam.d file(s), as described here.
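For reference, a minimal autofs wildcard map on the client might look like the sketch below; the server address 198.1.10.45 and the export path are assumptions borrowed from the answer further down, so adjust them to your environment:
# /etc/auto.master — hand /home lookups to the map below
/home  /etc/auto.home  --timeout=60
# /etc/auto.home — mount each user's directory from the NFS server on demand
*  -fstype=nfs,rw  198.1.10.45:/ldap/batch2015part1/home/&
The * and & pair maps each requested username to the matching directory on the server, so a home directory is mounted only when that user actually logs in.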
Yep! I tried it and it worked for me.
**Server side:** Package required on the server:
$ sudo apt-get install nfs-kernel-server
Update the file below like this:
abdulrahim@issc-ldap:/ldap/batch2016part2$ sudo vi /etc/exports
#/homes 198.1.10.*(fsid=0,rw,insecure,no_subtree_check,sync)
/ldap/batch2015part1/home 198.1.10.*(fsid=1,rw,insecure,no_subtree_check,sync)
/ldap/batch2015part2/home 198.1.10.*(fsid=2,rw,insecure,no_subtree_check,sync)
Export them as below:
abdulrahim@issc-ldap:/ldap/batch2016part2$ sudo exportfs -r
root@issc-ldap:/ldap/rnd# showmount -e 198.1.10.45
Export list for 198.1.10.45:
/ldap/batch2015part1/home
/ldap/batch2015part2/home
**On the client side:** Package required on the client:
$ sudo apt-get install nfs-common
Now, on the client side, set up the mount, permissions, and ownership:
$ sudo gedit /etc/fstab
# Below are the partitions mounted from the server
198.1.10.45:/ldap/batch2015part1/home /ldap/batch2015part1/home nfs nfsvers=3,sync 0 3
198.1.10.45:/ldap/batch2015part2/home /ldap/batch2015part2/home nfs nfsvers=3,sync 0 4
# Or like this:
198.1.10.45:/ldap/batch2015part1/home /ldap/batch2015part1/home nfs noauto,x-systemd.automount 0 3
198.1.10.45:/ldap/batch2015part2/home /ldap/batch2015part2/home nfs noauto,x-systemd.automount 0 4
Now mount all partitions from the server as below:
$ sudo mount -a
Check the mounted partitions with:
$ df -h

Test SSH connection between Avi Vantage Controller and Service Engine Host

The Avi docs say to add an SSH public key to the known_hosts file on the SE hosts so the controller can log in to install and start the Service Engine.
I'm pretty sure this isn't working properly. How can I test the SSH connection between the controller and the Service Engine host(s)? Where is the controller's private key stored?
We will automatically test the SSH connection and display status as appropriate. For security reasons, the configured private key is not stored in plain text anywhere on the file system.
Did you "create" a ssh key or "import" a ssh key - if you imported, you could use plain ssh -i <path-to-imported-private-key user#host from your workstation where the private key resides.
Refer to @Aziz's comment for details on the host status display. Also note the correction about authorized_keys (not authorized_hosts).
I am guessing this is in reference to creating a "LinuxServer" cloud in Avi. On Avi, you have to do the following:
1) Configure an SSHUser (Administration > Settings > SSH Key Settings). Alternatively, this can also be created from the UI during LinuxServer cloud creation.
2) Create the LinuxServer cloud (Infrastructure > Clouds) with the appropriate hosts and select the SSHUser from the dropdown.
The SSH keys configured are stored encrypted in the Avi Controller DB and are not exposed via the API/REST or on the file system. The Avi Controller modules use the decrypted key to connect to each host and provision the SE.
I suppose the docs are not clear: you don't add the Avi Controller's public key to each host; instead you add "your" custom SSH key pair into the Avi Controller (via step 1 above) and add the corresponding public key on each host.
With regard to "testing" the SSH connection: since these are your own keys, you can simply run ssh -i <path-to-private-key> username@host to test it. Alternatively, the cloud status will also provide information if SSH using the configured key failed for any reason.
Please refer to http://kb.avinetworks.com/installing-avi-vantage-for-a-linux-server-cloud/ for the complete install guide.
Let me know if your question was related to a different Cloud/Topic.
Adding to what @Siva explained, the status of the connection is displayed on the controller's cloud page (from the menu Infrastructure > Clouds, click on the cloud where the hosts were added). If you hover the mouse over the State column of a host, you can see the detailed reason for a failure.
In this case, "Default-Cloud" is a LinuxServer cloud with 3 hosts, and SSH fails on one of them. In this example the host 10.10.99.199 is a fake entry, i.e. there is no host with that IP, hence SSH fails, whereas 10.10.22.71 and 10.10.22.35 are hosts for which the SSH credentials passed; the Service Engine was then deployed on them, and they are ready for Virtual Services (load balancing, SSL termination, etc.) to be placed on them.
@Davidn Coleman, in the comment you mentioned that you added the public key to authorized_hosts (you need to add the key to authorized_keys). Also, if the user for whom you added the SSH authorization is not root (i.e. the key is in /home/user/.ssh/authorized_keys), make that user a sudoer (add an entry in /etc/sudoers for this user), and make sure the permissions on the .ssh dir and authorized_keys are set correctly (for security reasons and good practice).
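As a quick reference, the usual ownership and permissions for key-based SSH look like this (a sketch; "user" is a placeholder for the actual login account):
# Hypothetical cleanup for a non-root user's key-based SSH access
chown -R user:user /home/user/.ssh
chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/authorized_keys
sshd silently rejects keys when these permissions are too open, which looks exactly like a failed credential.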
The following is the snippet for the host 10.10.22.35.
[root@localhost ~]# ls -lrtha
total 318M
-rw-r--r--. 1 root root 129 Dec 28 2013 .tcshrc
-rw-r--r--. 1 root root 100 Dec 28 2013 .cshrc
-rw-r--r--. 1 root root 176 Dec 28 2013 .bashrc
-rw-r--r--. 1 root root 176 Dec 28 2013 .bash_profile
-rw-r--r--. 1 root root 18 Dec 28 2013 .bash_logout
-rw-------. 1 root root 1.2K May 27 13:56 anaconda-ks.cfg
drwxr-xr-x. 3 root root 17 May 27 14:07 .cache
drwxr-xr-x. 3 root root 17 May 27 14:07 .config
dr-xr-xr-x. 17 root root 4.0K May 31 08:15 ..
drwxr-----. 3 root root 18 May 31 08:25 .pki
-rw-------. 1 root root 1.9K May 31 08:46 .viminfo
drwx------. 2 root root 28 May 31 09:09 .ssh
-rw-r--r--. 1 root root 317M May 31 09:13 se_docker.tgz
-rw-r--r--. 1 root root 1.2M May 31 09:13 dpdk_klms.tar.gz
dr-xr-x---. 6 root root 4.0K May 31 09:14 .
-rw-r--r--. 1 root root 1.1K May 31 09:14 avise.service
-rw-------. 1 root root 3.4K Jun 1 09:14 .bash_history
[root@localhost ~]# ls -lrtha .ssh/
total 8.0K
-rw-r--r--. 1 root root 399 May 31 09:09 authorized_keys
drwx------. 2 root root 28 May 31 09:09 .
dr-xr-x---. 6 root root 4.0K May 31 09:14 ..
[root@localhost ~]# pwd
/root

Apache on CentOS 6.5 can't access a mounted network directory

I am having trouble getting Apache access to a network share that I have mounted via fstab. I am trying to learn Apache by building an image server. The script that parses all of the images can access the mounted directories, and I can see thumbnails on the webpage in the browser; however, if I follow a link directly to one of the images, Apache claims it doesn't have access. Does anyone have any ideas?
Thanks!
EDIT: All users have rw- access. :p
I ran ll on the box before and after mounting the drives and got this:
Before Mounting: drwxrwxrwx. 2 root apache 4096 Oct 24 19:20 TestFolder
After Mounting: drwxrwxrw-+ 4 173754 171535 0 Feb 24 2007 TestFolder
This is my fstab entry to mount the drive on boot:
<network drive> <local mount point> cifs username=*******,password=*******,iocharset=utf8,rw,context=unconfined_u:object_r:httpd_sys_content_t:s0
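Since this is CentOS with a CIFS mount and an SELinux context in the mount options, a likely culprit is SELinux policy for Apache. A sketch of checks worth running (the mount point /var/www/html/images is a placeholder for your actual mount point):
# Verify the SELinux context Apache sees on the mounted files
ls -Z /var/www/html/images
# Check whether Apache is allowed to read CIFS file systems at all
getsebool httpd_use_cifs
# Hypothetical fix: enable it persistently if it is off
sudo setsebool -P httpd_use_cifs on
The script probably works because it runs in a less confined domain than httpd, which would explain why thumbnails generate fine while direct requests fail.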