As part of the Apache Guacamole setup you create a GUACAMOLE_HOME directory for the extensions, configuration files, etc. I used /etc/guacamole.
Then I exported it:
export GUACAMOLE_HOME=/etc/guacamole
and opened up the permissions:
chmod -R 0777 /etc/guacamole
The printenv command shows the variable GUACAMOLE_HOME=/etc/guacamole.
But when I start the Tomcat7 service, it ignores my guacamole.properties file, which is in GUACAMOLE_HOME:
16:33:56.389 [localhost-startStop-1] INFO o.a.g.environment.LocalEnvironment - No guacamole.properties file found within GUACAMOLE_HOME or the classpath. Using defaults.
16:33:57.013 [localhost-startStop-1] INFO o.a.g.environment.LocalEnvironment - No guacamole.properties file found within GUACAMOLE_HOME or the classpath. Using defaults.
The service seems to start, but Guacamole is running with defaults.
What is the missing step here?
Guacamole is running under Tomcat, and the Tomcat server is probably starting under the tomcat user. It may be that you have defined GUACAMOLE_HOME in your shell, but it is not visible to the tomcat user.
I prefer to store the guacamole.properties file under a .guacamole directory, the third option in the manual:
The directory .guacamole, located within the home directory of the user running the servlet container.
On Ubuntu-like systems, the default Tomcat installation starts under the tomcat7 or tomcat8 user, depending on the version. You can do the following:
cd ~tomcat7
sudo ln -s /etc/guacamole .guacamole
This creates a .guacamole symlink to your configuration directory in the home directory of the user running the servlet container, as described in the manual.
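Alternatively, you can make the variable visible to the Tomcat service itself. On Ubuntu-like systems the init script sources /etc/default/tomcat7 (or /etc/default/tomcat8), so a minimal sketch, assuming that file exists on your system:

# /etc/default/tomcat7 -- environment for the Tomcat service
GUACAMOLE_HOME=/etc/guacamole

Then restart the service (sudo service tomcat7 restart) so Tomcat picks up the variable.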
The problem is that my Windows 10 does not understand Redis commands. I downloaded the CLI and server .msi files and installed them to D:/Program Files/Redis.
I run the command:
redis-server D:/Program Files/Redis/redis-slave.windows.conf
and expect to get a Redis slave instance with the configuration provided in the course's .conf file, but I get the error:
Invalid argument during startup: Failed to open the .conf file: Files/Redis/redis-slave.windows.conf CWD=D:\Program Files\Redis
The problem is not a wrong configuration, because I can copy the default Redis .conf file and get the same error.
Another problem, not as important to me as the one above, but similar: when I try to run a cluster, Windows 10 does not know how to open the file. I run the command:
D:\Program Files\Redis2\redis-7.0.4\utils\create-cluster>./create-cluster start
and get a window asking me to choose a program to run this file, which does not happen in your macOS case. Anyway, this create-cluster file has no extension, so I do not know what to do to make it run.
I just tried to install Apache ZooKeeper standalone on Ubuntu. I have installed the Java environment and ZooKeeper 3.4.6.
However, when I typed jps, this is all I got.
The following is the configuration for the .bashrc and zoo.cfg files:
[terminal~] vim .bashrc
[terminal~] vim /usr/local/zookeeper/conf/zoo.cfg
Please, can anyone help me? I have wasted two days on ZooKeeper alone. It is really frustrating.
I have a freshly installed Ubuntu machine, and this is what I did to get ZooKeeper working as a standalone program (I assume you mean that you didn't install it with the package manager):
Download the ZooKeeper tar. (I used 3.4.8.)
Extract the folder zookeeper-3.4.8 somewhere. (I placed it on my desktop for now.)
Copy .../zookeeper-3.4.8/conf/zoo_sample.cfg to .../zookeeper-3.4.8/conf/zoo.cfg
and change the dataDir=... line to whatever you want. (I made a data dir inside the zookeeper-3.4.8 folder.)
Now you can run ZooKeeper by executing the script .../zookeeper-3.4.8/bin/zkServer.sh start
foo@bar:~$ /home/foo/Desktop/zookeeper-3.4.8/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /home/foo/Desktop/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
foo@bar:~$ /home/foo/Desktop/zookeeper-3.4.8/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /home/foo/Desktop/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: standalone
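For reference, the zoo.cfg from step 3 might end up looking like this minimal sketch (the dataDir value is an assumption matching the data dir mentioned above):

# zoo.cfg -- minimal standalone configuration
tickTime=2000
dataDir=/home/foo/Desktop/zookeeper-3.4.8/data
clientPort=2181

Once the server is running, jps should also list a QuorumPeerMain process for ZooKeeper.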
In the Docker best practices guide it states:
You are strongly encouraged to use VOLUME for any mutable and/or user-serviceable parts of your image.
Looking at the source code of e.g. the cpuguy83/nagios image, this is clearly being done: everything from the Nagios to the Apache config directories is made available as a volume.
However, in the same image the Apache service (and the CGI scripts for Nagios) run as the nagios user by default. So now I'm in a pickle, as I can't seem to figure out how to add my own config files in order to e.g. define more hosts for Nagios monitoring. I've tried:
FROM cpuguy83/nagios
ADD my_custom_config.cfg /opt/nagios/etc/conf.d/
RUN chown nagios: /opt/nagios/etc/conf.d/my_custom_config.cfg
CMD ["/opt/local/bin/start_nagios"]
I build as normal and try to run it with docker run -d -p 8000:80 <image_hash>, but I get the following error:
Error: Cannot open config file '/opt/nagios/etc/conf.d/my_custom_config.cfg' for reading: Permission denied
And sure enough, the permissions in the folder look like this (while the Apache process runs as nagios):
# ls -l /opt/nagios/etc/conf.d/
-rw-rw---- 1 root root 861 Jan 5 13:43 my_custom_config.cfg
Now, this has been answered before (why doesn't chown work in Dockerfile), but no proper solution other than "change the original Dockerfile" has been proposed.
To be honest, I think there's some core concept here I haven't grasped (I can't see the point of declaring config directories as VOLUME, nor of running services as anything other than root). So, given a Dockerfile like the one above (which follows Docker best practices by declaring multiple volumes), is the solution/problem:
To change NAGIOS_USER/APACHE_RUN_USER to 'root' and run everything as root?
To remove the VOLUME declarations in the Dockerfile for nagios?
Other approaches?
How would you extend the nagios dockerfile above with your own config file?
Since you are adding your own my_custom_config.cfg file directly into the container at build time, just change the permissions of my_custom_config.cfg on your host machine and then build your image with docker build. The host machine's file permissions are copied into the container image.
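A sketch of what that could look like, assuming the image is tagged my-nagios:

# on the host, before building: make the file readable by non-root users
chmod o+r my_custom_config.cfg
docker build -t my-nagios .
docker run -d -p 8000:80 my-nagios

Since ADD preserves the mode bits of files from the build context, the nagios user inside the container can now read the config even though it is owned by root.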
I have set up an NFS file share between two CentOS 6 64-bit machines. On the server, the folder being shared was originally owned by the root user. On the client, it turned up as being owned by nfsnobody. When I tried to write to the folder from the client, I got a permissions error. So I changed the folder ownership on the server to nfsnobody and chmod'd it to 777. However, still no joy: I continue to get a permissions error. Clearly, there is more to this. I would be much obliged to any Linux gurus out there (I personally wouldn't merit being called anything more than a newbie) who might be able to help fix this issue.
Edit - I should have mentioned that trying to write to the shared folder from the client actually manages to create a file entry. However, the file size is 0 and the permissions error is reported.
The issue here is to do with the entry in /etc/exports. It should read:
folder ip(rw,all_squash,sync,no_subtree_check)
I had missed the all_squash bit. That apart, make sure that the folder on the server is owned by nfsnobody. On my setup, both my client and server nfsnobody users ended up with a user ID of 65534. However, it is well worth checking this (in /etc/passwd on both machines), or else...
Here are a couple of useful references:
How to set up an NFS Server
NFS on CentOS
For the benefit of anyone looking to set up an NFS server, I give below what worked for me on my CentOS 6 64-bit machines.
SERVER
yum install nfs-utils nfs-utils-lib - install NFS
rpm -q nfs-utils - check the install
/etc/init.d/rpcbind start
chkconfig --level 235 nfs on
/etc/init.d/nfs start
chkconfig --level 35 rpcbind on
With this done you should create the folder you want to share
mkdir folder
chown 65534:65534 folder
chmod 755 folder
Now define the folder to be shared/exported. Use your favorite text editor (vi or whatever) to open/create /etc/exports and add the line:
folder clientIP(rw,all_squash,sync,no_subtree_check)
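Note that there is deliberately no space between clientIP and the opening parenthesis; with a space, the options would apply to all hosts rather than to that client. After saving /etc/exports, reload the export table and verify the share; a quick sketch:

exportfs -ra
showmount -e localhost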
CLIENT
Install, check, bind and start as above
mount -t nfs serverIP:folder clientFolderLocation
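For example, with hypothetical addresses and paths (adjust to your setup):

mkdir -p /var/www/nfsfolder
mount -t nfs 192.168.1.100:/home/share/folder /var/www/nfsfolder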
If all goes well you should now be able to write a little script on your client
<?php
$file = $_SERVER['DOCUMENT_ROOT']."/../nfsfolder/test.txt";
file_put_contents($file,'Hello world of NFS!');
?>
browse to it, and find that test.txt now exists on the server with the content "Hello world of NFS!". In the example, I have placed my mounted drive one level above the document root.
I am trying to set up a WebDAV folder on my CentOS server. I have for the most part succeeded. My problem is that I am trying to set up a size limit (quota) on the folder. I found a blog that spelled out how to do that using the "DAVSATMaxAreaSize" command. However, when I restart Apache, I get the error: "Invalid command 'DAVSATMaxAreaSize', perhaps misspelled or defined by a module not included in the server configuration". Does this mean the module that supports this command is not installed? How can I fix this?
You need to recompile your Apache.
Download the patch from http://www.geocities.jp/t_sat7/webdav/webdav.html
Download the RPM source for Apache from the CentOS repos, apply the patch you downloaded, and recompile Apache.
I had the same problem on my Ubuntu 12.04 server, but I didn't want to recompile my Apache. I "solved" it as follows:
I created a file container using dd (for 100MB):
dd if=/dev/zero of=/var/webdav-file-container bs=1048576 count=100
And created a filesystem in that container:
mkfs.ext4 /var/webdav-file-container
Then I mounted this container as the folder for my share:
mount /var/webdav-file-container /var/webdav-share
So now the filesystem in the container has a fixed size, and Apache cannot write more than the 100MB.
The only drawback is that the user does not know how much space is left on that share; the Windows client reports the size of its own system drive...
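To make the mount persistent across reboots, an /etc/fstab entry along these lines should work (a sketch; the loop option attaches the file through a loop device):

# /etc/fstab
/var/webdav-file-container /var/webdav-share ext4 loop 0 0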