Windows container volume file permission issue - Apache

I am trying to set up a PHP+Apache development environment using Docker containers on Windows 10.
PS H:\> docker version
Client:
 Version: 17.12.0-ce
 API version: 1.35
 Go version: go1.9.2
 Git commit: c97c6d6
 Built: Wed Dec 27 20:05:22 2017
 OS/Arch: windows/amd64
Server:
 Engine:
  Version: 17.12.0-ce
  API version: 1.35 (minimum version 1.24)
  Go version: go1.9.2
  Git commit: c97c6d6
  Built: Wed Dec 27 20:15:52 2017
  OS/Arch: windows/amd64
  Experimental: true
My container is a Windows container with microsoft/windowsservercore as the base image.
One issue I face is that when the source code is mounted via a volume, Apache doesn't like the file permissions. The page displays "No input file specified." and access.log shows "GET /info.php HTTP/1.1" 404 25.
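For reference, the volume mount in question is an ordinary bind mount along these lines (a sketch; the host path, port mapping and image name are placeholders, not my exact command):
docker run -d -p 8080:80 -v H:\code\site:C:\www my-php-apache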
This is the file permission inside the container with the mounted volume.
PS C:\www> get-acl .\info.php | format-list
Path : Microsoft.PowerShell.Core\FileSystem::C:\www\info.php
Owner : O:S-1-5-21-1258723895-351397710-2907126007-1740
Group : G:S-1-5-21-1258723895-351397710-2907126007-513
Access : BUILTIN\Administrators Allow FullControl
NT AUTHORITY\SYSTEM Allow FullControl
BUILTIN\Users Allow ReadAndExecute, Synchronize
NT AUTHORITY\Authenticated Users Allow Modify, Synchronize
Audit :
Sddl : O:S-1-5-21-1258723895-351397710-2907126007-1740G:S-1-5-21-1258723895-351397710-2907126007-513D:AI(A;ID;FA;;;BA)(A;ID;FA;;;SY)(A;ID;0x1200a9;;;BU)(A;ID;0x1301bf;;;AU)
When I copy info.php into the container instead of mounting it, Apache works normally. This is the file permission of the copied file:
PS C:\www> get-acl .\info.php | format-list
Path : Microsoft.PowerShell.Core\FileSystem::C:\www\info.php
Owner : User Manager\ContainerAdministrator
Group : User Manager\ContainerAdministrator
Access : BUILTIN\Administrators Allow FullControl
NT AUTHORITY\SYSTEM Allow FullControl
User Manager\ContainerAdministrator Allow FullControl
BUILTIN\Users Allow ReadAndExecute, Synchronize
Audit :
Sddl : O:S-1-5-93-2-1G:S-1-5-93-2-1D:(A;ID;FA;;;BA)(A;ID;FA;;;SY)(A;ID;FA;;;S-1-5-93-2-1)(A;ID;0x1200a9;;;BU)
My workaround is to use Robocopy to copy from the mounted volume folder to the DocumentRoot. The problem is that I have to wait a few seconds for source code changes to be reflected in the container.
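For reference, the recurring copy is just a Robocopy mirror along these lines (a sketch, not my exact invocation; the source path is a placeholder, /MIR mirrors the tree and /MON:1 keeps Robocopy running and re-copies once it detects changes):
robocopy C:\mounted-src C:\www /MIR /MON:1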
Is there a better solution?

Related

Apache CGI script invoked from browser but not from embedded device

I am working on a project that involves two embedded devices; let's call them A and B. Device A is the controller and B is being controlled. My goal is to make an emulator for device B, i.e., something that acts like B so that A thinks it is controlling B when in reality it is controlling my emulator. I do not control device A and cannot change it.
Control happens via the controller issuing GET requests that invoke various CGI scripts, so the plan is to install Apache on "my" device, set up CGI and replicate the various scripts. I am running Apache 2.4.18 on Ubuntu 16.04.5 and have configured it so it successfully runs the various scripts depending on the URL. As an example, one of the scripts is called 'man_session' and a typical URL issued by device A looks like this: http://192.168.0.14/cgi-bin/man_session?command=get&page=122
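For reference, the CGI side of that setup boils down to roughly the following Apache configuration (a sketch, assuming the cgi-bin directory shown in the logs below and that the CGI module is enabled; my real config differs only in details):
ScriptAlias /cgi-bin/ /home/pi/cgi-bin/
<Directory "/home/pi/cgi-bin">
    Options +ExecCGI
    Require all granted
</Directory>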
I have built a C/C++ program named 'man_session' and have successfully configured Apache to invoke my script when this URL is submitted. I can see this from the Apache log:
192.168.0.2 - - [24/Jan/2019:14:38:38 +0000] "GET /cgi-bin/man_session?command=get&page=122 HTTP/1.1" 200 206 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
Also, my script writes to stderr and I can see the output in the log file:
[Thu Jan 24 14:46:10.850123 2019] [cgi:error] [pid 23346:tid 4071617584] [client 192.168.0.2:62339] AH01215: Received man_session command 'command=get&page=122': /home/pi/cgi-bin/man_session
So far so good. The problem I am having is that the script does not get invoked when device A makes the request, only when I make the request via a browser (both Chrome and Internet Explorer work) or curl. The browsers run on my Windows PC and curl runs on the embedded device "B" itself.
When I turn on device A, I can see the URL activity in the log, but the script does not get invoked. Below is a log entry showing a URL that does not invoke the 'man_session' script. It shows a status code of 400, which according to the HTTP specification is an error "due to malformed syntax". Other differences are the missing referrer and user-agent information and HTTP/1.0 vs HTTP/1.1, but I don't see why these would matter.
192.168.0.9 - - [24/Jan/2019:14:38:12 +0000] "GET /cgi-bin/man_session?command=get&page=7 HTTP/1.0" 400 0 "-" "-"
Note that device A is 192.168.0.9 and my PC is 192.168.0.2. What am I missing here - why doesn't this URL invoke the script the way it does when issued by the browser? Is there anywhere I can get more information about why the 400 code occurs in this case?
After a lot of back and forth, I finally figured out the issue. Steps taken:
Increased the log level to debug (LogLevel debug instead of the default 'warn') in apache2.conf.
This caused the following error message to show up in the log:
[Sat Jan 26 02:47:56.974353 2019] [core:debug] [pid 15603:tid 4109366320] vhost.c(794): [client 192.168.0.9:61001] AH02415: [strict] Invalid host name '192.168.000.014'
After a bit of research, I added the following line to the apache2.conf file:
HttpProtocolOptions Unsafe
This fixed it and the scripts are now called as expected.
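For reference, the failing request can be reproduced from a PC with curl by sending the zero-padded host name that AH02415 complains about (an illustration using the values from the logs above; before the HttpProtocolOptions change it returns 400, afterwards the script runs):
curl -v --http1.0 -H "Host: 192.168.000.014" "http://192.168.0.14/cgi-bin/man_session?command=get&page=122"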

Kubernetes dashboard authentication on atomic host

I am a total newbie when it comes to Kubernetes/Atomic Host, so my question may be trivial or already well discussed - but unfortunately I couldn't find any clues on how to achieve my goal, which is why I am here.
I have set up a Kubernetes cluster on Atomic Hosts (right now I have just one master and one node). I am working in a cloud network, on virtual machines.
[root@master ~]# kubectl get node
NAME          STATUS    AGE
192.168.2.3   Ready     9d
After a lot of fuss I managed to set up the Kubernetes dashboard UI on my master.
[root@master ~]# kubectl describe pod --namespace=kube-system
Name: kubernetes-dashboard-3791223240-8jvs8
Namespace: kube-system
Node: 192.168.2.3/192.168.2.3
Start Time: Thu, 07 Sep 2017 10:37:31 +0200
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=3791223240
Status: Running
IP: 172.16.43.2
Controllers: ReplicaSet/kubernetes-dashboard-3791223240
Containers:
kubernetes-dashboard:
Container ID: docker://8fddde282e41d25c59f51a5a4687c73e79e37828c4f7e960c1bf4a612966420b
Image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
Image ID: docker-pullable://gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:2c4421ed80358a0ee97b44357b6cd6dc09be6ccc27dfe9d50c9bfc39a760e5fe
Port: 9090/TCP
Args:
--apiserver-host=http://192.168.2.2:8080
Limits:
cpu: 100m
memory: 300Mi
Requests:
cpu: 100m
memory: 100Mi
State: Running
Started: Fri, 08 Sep 2017 10:54:46 +0200
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Thu, 07 Sep 2017 10:37:32 +0200
Finished: Fri, 08 Sep 2017 10:54:44 +0200
Ready: True
Restart Count: 1
Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
No volumes.
QoS Class: Burstable
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1d 32m 3 {kubelet 192.168.2.3} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
1d 32m 2 {kubelet 192.168.2.3} spec.containers{kubernetes-dashboard} Normal Pulled Container image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3" already present on machine
32m 32m 1 {kubelet 192.168.2.3} spec.containers{kubernetes-dashboard} Normal Created Created container with docker id 8fddde282e41; Security:[seccomp=unconfined]
32m 32m 1 {kubelet 192.168.2.3} spec.containers{kubernetes-dashboard} Normal Started Started container with docker id 8fddde282e41
also
[root@master ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
kubernetes-dashboard is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Now, when I tried connecting to the dashboard (I tried accessing the dashboard via a browser on a Windows virtual machine in the same cloud network) using the address:
https://192.168.218.2:6443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
I am getting "Unauthorized". I believe this proves that the dashboard is indeed running at this address, but I need to set up some way of accessing it?
What I want to achieve in the long term:
I want to enable connecting to the dashboard using a login/password (later, when I learn a bit more, I will think about authenticating by certificates or something safer than a password) from outside of the cloud network. For now, connecting to the dashboard at all would do.
I know there are threads about authentication, but most of them mention something like:
Basic authentication is enabled by passing the
--basic-auth-file=SOMEFILE option to API server
And this is the part I cannot cope with - I have no idea how to pass options to the API server.
On the atomic host the api-server, kube-controller-manager and kube-scheduler are running in containers, so I get into the api-server container with the command:
docker exec -it kube-apiserver.service bash
I have seen a few times that I should edit a .json file in the /etc/kubernetes/manifest directory, but unfortunately there is no such file (or even such a directory).
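For what it's worth, a sketch of how the flags are usually passed on this kind of install (an assumption about the layout, not verified on this exact host): the packaged kube-apiserver reads its options from /etc/kubernetes/apiserver on the host, so adding something like
KUBE_API_ARGS="--basic-auth-file=/etc/kubernetes/basic_auth.csv"
to that file (the csv path is a placeholder; each line is password,username,uid) and then running
systemctl restart kube-apiserver
would be the usual way to pass --basic-auth-file, rather than editing anything inside the container.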
I apologize if my problem is too trivial or not described well enough, but I'm new to both the IT world and Stack Overflow.
I would love to provide more info, but I am afraid I would end up including lots of useless information, so I decided to wait for your instructions in that regard.
Check out the wiki pages of the Kubernetes dashboard; they describe how to get access to the dashboard and how to authenticate to it. For quick access you can run:
kubectl proxy
And then go to the following address:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
You'll see two options: one is uploading your ~/.kube/config file and the other is using a token. You can get a token by running the following command:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep service-account-token | head -n 1 | awk '{print $1}')
Now just copy and paste the long token string into dashboard prompt and you're done.
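One caveat for the setup described in the question (browser on a separate Windows VM): kubectl proxy listens on 127.0.0.1 by default, so either run the browser on the machine running the proxy or make the proxy listen externally, for example (this exposes the dashboard without authentication, so use it only for testing):
kubectl proxy --address=0.0.0.0 --accept-hosts='.*'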

RFC subsystem not yet initialized

I have installed saprfc-1.4.1 on a Linux 4.0.4-x86_64 (Ubuntu 14) machine.
PHP Version 5.5.9
RFCSDK 7.20
After installation, I added the extension (extension = saprfc.so) to the php.ini file and then restarted Apache.
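(For reference, a quick way to confirm the extension is loaded - assuming the PHP CLI reads the same php.ini as Apache, which is not always the case - is:
php -m | grep -i saprfc)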
To test whether the installation was successful, I uploaded saprfc_test.php to my www directory.
I entered the following details to log in to SAP R/3:
Application Server - localhost
System Number - 00
Client - 300
User - myusername
Password - *********
Message Server - MSPLECP00_ECP_00
R/3 system Name - ECP
Language - EN
When I tried to log in, I got the following error:
RFC Error Info : Key : RFC_NOT_INIT Status :
Message : RFC subsystem not yet initialized. Internal:

LDAP over SSL using Wordpress plugin

I'm trying to integrate LDAP over SSL on a WordPress site using the plugin here:
http://wordpress.org/plugins/active-directory-integration/
The site is hosted on MediaTemple and our Active Directory server is hosted locally behind our firewall.
I successfully tested the connection using LDAP over SSL outside of my firewall - so I think the issue resides somewhere on the MediaTemple server.
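(For reference, a basic connectivity check that could be run from the MediaTemple host itself, assuming shell access and that openssl is available there:
openssl s_client -connect firewall.xxxxxxxx.com:636 -showcerts
which should show the AD server's certificate chain if LDAPS is reachable from that network.)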
Using plugin version 1.1.4 with WP 3.7.1
note: my site is not an adult site, I just replaced the real site with x's :)
[INFO] method authenticate() called
[INFO] ------------------------------------------
PHP version: 5.4.13
WP version: 3.7.1
ADI version: 1.1.4
OS Info : Linux xxxxxxxxxx.com 2.6.32-042stab083.2 #1 SMP Fri Nov 8 18:08:40 MSK 2013 x86_64
Web Server : cgi-fcgi
adLDAP ver.: 3.3.2 Extended (201104081456)
------------------------------------------
[NOTICE] username: murphyd
[NOTICE] password: not shown
[INFO] Options for adLDAP connection:
- account_suffix:
- base_dn: cn=users,dc=xxxxxxxxx,dc=local
- domain_controllers: ldaps://firewall.xxxxxxxx.com
- ad_port: 636
- use_tls: 0
- network timeout: 5
[NOTICE] adLDAP object created.
[INFO] max_login_attempts: 50
[INFO] users failed logins: 0
[NOTICE] trying account suffix ""
[ERROR] Authentication failed
[WARN] storing failed login for user "murphyd"
Any suggestions?

SELinux permission denied to Phusion Passenger for redmine

I am trying to install Redmine on CentOS 6.3, but I keep getting this error in the log file:
Passenger could not be initialized because of this error: Unable to start
the Phusion Passenger watchdog (/usr/lib/ruby/gems/1.8/gems/passenger-4.0.20/buildout
/agents/PassengerWatchdog): Permission denied (errno=13)
I have been looking online and cannot find this error anywhere, or any way to fix it. I have tried changing the folder's permissions to 777 and its ownership to apache:apache, but neither works.
The only solution I have come up with to get Redmine to work is to set SELinux to disabled or permissive (which I do not want to do).
Does anyone have another way to fix this problem that leaves SELinux enabled?
I found the SELinux log entries under /var/log/messages.
Here is the end of the file:
Oct 16 14:07:30 localhost pulseaudio[2329]: alsa-util.c: Disabling timer-based scheduling because running inside a VM.
Oct 16 14:07:30 localhost rtkit-daemon[2183]: Sucessfully made thread 2331 of process 2329 (/usr/bin/pulseaudio) owned by '500' RT at priority 5.
Oct 16 14:07:30 localhost pulseaudio[2329]: alsa-util.c: Disabling timer-based scheduling because running inside a VM.
Oct 16 14:07:30 localhost rtkit-daemon[2183]: Sucessfully made thread 2332 of process 2329 (/usr/bin/pulseaudio) owned by '500' RT at priority 5.
Oct 16 14:07:31 localhost rtkit-daemon[2183]: Sucessfully made thread 2427 of process 2427 (/usr/bin/pulseaudio) owned by '500' high priority at nice level -11.
Oct 16 14:07:31 localhost pulseaudio[2427]: pid.c: Daemon already running.
Oct 16 14:08:04 localhost kernel: type=1400 audit(1381957684.726:5): avc: denied { execute_no_trans } for pid=2663 comm="httpd" path="/usr/lib/ruby/gems/1.8/gems/passenger-4.0.20/buildout/agents/PassengerWatchdog" dev=dm-0 ino=1048752 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:lib_t:s0 tclass=file
Oct 16 14:08:04 localhost kernel: type=1400 audit(1381957684.760:6): avc: denied { execute_no_trans } for pid=2668 comm="httpd" path="/usr/lib/ruby/gems/1.8/gems/passenger-4.0.20/buildout/agents/PassengerWatchdog" dev=dm-0 ino=1048752 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:lib_t:s0 tclass=file
Oct 16 14:09:11 localhost pulseaudio[2329]: alsa-sink.c: ALSA woke us up to write new data to the device, but there was actually nothing to write!
Oct 16 14:09:11 localhost pulseaudio[2329]: alsa-sink.c: Most likely this is a bug in the ALSA driver 'snd_intel8x0'. Please report this issue to the ALSA developers.
Oct 16 14:09:11 localhost pulseaudio[2329]: alsa-sink.c: We were woken up with POLLOUT set -- however a subsequent snd_pcm_avail() returned 0 or another value < min_avail.
Any suggestions?
So, you can fix this by using audit2allow (on CentOS 6 it is provided by the policycoreutils-python package: yum install policycoreutils-python).
SELinux logs to /var/log/audit/audit.log. If you tail that file and capture the output produced while restarting the web service (service httpd restart), you can then run the captured output through audit2allow and build a module to install under SELinux.
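For example, one way to do that capture (a sketch, assuming the default audit log location; stop the tail with Ctrl-C once the restart has finished):
tail -n0 -f /var/log/audit/audit.log > audit_tmp
and, in a second terminal:
service httpd restart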
So, assuming you have captured it into a file called "audit_tmp":
cat audit_tmp | audit2allow -M passenger
This will create a file called passenger.pp, which you can apply using:
semodule -i passenger.pp
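(To confirm the module actually loaded, you can list the installed modules:
semodule -l | grep passenger)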
Doing this will unblock the first thing that was stopping Passenger from loading - but be aware that there will probably be more, so you will need to repeat the process until it works. I hope that makes sense!
Take a look at /var/log/syslog. That file contains SELinux error messages, which tell you how to fix up any permission problems.