Background: I need to change the Payara Server master password. According to the docs, the master password must match the password of the keystore & truststore for the SSL certificates to work properly, which I need in order to make my website run on https instead of http.
I got Payara Server running in a Docker container by following a guide.
I tried to change the payaradomain master password, but I'm stuck in a cyclic error. Here is what I did:
1. Made sure payaradomain isn't running:
- ./asadmin stop-domain --force=true payaradomain
When I run the following command, domain1 gets killed instead, and I get kicked out of the Docker container:
./asadmin stop-domain --kill=true payaradomain
When I execute this command:
./asadmin list-domains
Response:
domain1 running
payaradomain not running
Command list-domains executed successfully.
Then tried command:
./asadmin stop-domain --force=true payaradomain
Response:
CLI306: Warning - The server located at /opt/payara41/glassfish/domains/payaradomain is not running.
I'm happy with that, but when I try:
./asadmin change-master-password payaradomain
I get this response:
Domain payaradomain at /opt/payara41/glassfish/domains/payaradomain is running. Stop it first.
If you want to configure Payara Server in Docker, including the master password, you should do it by creating your own Docker image that extends the default Payara image. This is the simplest Dockerfile:
FROM payara/server-full
# run payaradomain by default instead of domain1 (see the note on PAYARA_DOMAIN below)
ENV PAYARA_DOMAIN payaradomain
# specify a new master password "newpassword" instead of the default password "changeit"
# (printf is used instead of echo so that \n is interpreted portably)
RUN printf 'AS_ADMIN_MASTERPASSWORD=changeit\nAS_ADMIN_NEWMASTERPASSWORD=newpassword\n' >> /opt/masterpwdfile
# execute asadmin command to apply the new master password
RUN ${PAYARA_PATH}/bin/asadmin change-master-password --passwordfile=/opt/masterpwdfile payaradomain
Then you can build your custom docker image with:
docker build -t my-payara/server-full .
And then run my-payara/server-full instead of payara/server-full.
Also note that with the default Payara docker image, you should specify the PAYARA_DOMAIN variable to run payaradomain instead of domain1, such as:
docker run --env PAYARA_DOMAIN=payaradomain payara/server-full
The sample Dockerfile above redefines this variable so that payaradomain is used by default, without the need to specify it when running the container.
Alternative way to change the master password
You can alternatively run the Docker image without starting Payara Server. Instead, you can run a bash shell first, perform the necessary commands in the console, and then run the server from the shell.
To do that, you would run the docker image with:
docker run -t -i --entrypoint /bin/bash payara/server-full
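Inside that shell you can then change the password interactively and start the domain yourself. A rough sketch, using the paths from the question above:

# asadmin prompts for the current and the new master password
/opt/payara41/bin/asadmin change-master-password payaradomain
# start the domain in verbose mode so it stays in the foreground and keeps the container alive
/opt/payara41/bin/asadmin start-domain --verbose payaradomain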
The downside of this approach is that the Docker container runs in the foreground, and if you restart it, Payara Server has to be started again manually, so it's really only for testing purposes.
The reason you get messages saying payaradomain is running is that you have started domain1. payaradomain and domain1 use the same ports, and the check to see whether a domain is running looks at whether the admin port for the given domain is in use.
In order to change the master password, you must either have both domains stopped or change the admin port for payaradomain; a sketch of the first option follows.
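For example, assuming the default setup from the question:

# stop both domains so that neither holds the admin port
./asadmin stop-domain domain1
./asadmin stop-domain --force=true payaradomain
# with both domains down, the master password can be changed
./asadmin change-master-password payaradomain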
Instead of echoing passwords in the Dockerfile, it is safer to COPY a file containing the passwords into the image during the build and remove it when the build is finished, as sketched below.
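A minimal sketch of that variant (the file name is illustrative; note that a plain rm still leaves the file in the earlier COPY layer, so build secrets or a multi-stage build are safer still):

FROM payara/server-full
# masterpwdfile contains the AS_ADMIN_MASTERPASSWORD / AS_ADMIN_NEWMASTERPASSWORD lines
COPY masterpwdfile /opt/masterpwdfile
RUN ${PAYARA_PATH}/bin/asadmin change-master-password --passwordfile=/opt/masterpwdfile payaradomain \
 && rm /opt/masterpwdfile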
Related
I am new to Kubernetes and I am experimenting with it in my local development environment. Before I give my problem statement, here is my environment and the state of my project.
I have Windows 10 with WSL2 enabled, with Ubuntu running through VS Code.
I have enabled the required plugins in VS Code (Kubernetes, Docker, and so on).
I have Docker desktop installed which has WSL2 + Ubuntu + Kubernetes enabled.
I have a working ASP.NET Core 5 app on my local system and on Ubuntu through Docker.
I have a Dockerfile and a docker-compose file, and I have tested them with and without the SSL port; they work both with and without SSL (for the latter I modified the program to accept non-SSL requests).
Coming to the Dockerfile:
-- It exposes the required ports, 5000 (non-SSL) and 5001 (SSL)
Coming to the docker-compose file:
-- It has the required mappings like 5000:80 and 5000:443
-- It also has an environment variable for the URLs:
ASPNETCORE_URLS=https://+:5001;http://+:5000
-- It also has an environment variable for the certificate path:
ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx
-- It also has an environment variable for the certificate password:
ASPNETCORE_Kestrel__Certificates__Default__Password=SECRETPASSWORD
Now, when I run docker compose up --build, it builds the project and also starts the containers.
I am able to access the site through https://localhost:5001 as well as http://localhost:5000
Now, coming to Kubernetes:
-- I have used the kompose tool to generate Kubernetes-specific YAML files
-- I haven't made any changes to them. I ran the command kompose convert -f docker-compose.yaml -o ./.k8
-- finally, I ran kubectl apply -f .k8
It starts the container, but it fails immediately. I checked the logs, and they say the following:
crit: Microsoft.AspNetCore.Server.Kestrel[0]
Unable to start Kestrel.
Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file
at Interop.Crypto.CheckValidOpenSslHandle(SafeHandle handle)
at Internal.Cryptography.Pal.OpenSslX509CertificateReader.FromFile(String fileName, SafePasswordHandle password, X509KeyStorageFlags keyStorageFlags)
at System.Security.Cryptography.X509Certificates.X509Certificate..ctor(String fileName, String password, X509KeyStorageFlags keyStorageFlags)
at System.Security.Cryptography.X509Certificates.X509Certificate2..ctor(String fileName, String password)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Certificates.CertificateConfigLoader.LoadCertificate(CertificateConfig certInfo, String endpointName)
at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.LoadDefaultCert()
at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.Reload()
at Microsoft.AspNetCore.Server.Kestrel.KestrelConfigurationLoader.Load()
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.BindAsync(CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServerImpl.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
Unhandled exception. Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file
In "It has required mapping like 5000:80 and 5000:443", actually it should be 5001:443 (as the port 5001 is used to map to the https 443 port).
Based on this error message:
"Interop+Crypto+OpenSslCryptographicException: error:2006D080:BIO routines:BIO_new_file:no such file",
it seems the certificate file doesn't exist at the following location: /https/aspnetapp.pfx
Run the image using the following Docker command:
docker run -it --entrypoint sh <image name>
This gives you access to the container without running the entrypoint. Do a cd /https/ and check whether the certificate is located in this folder; if it isn't, that is probably the problem.
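Since the failing container runs under Kubernetes, you can run the same check against the pod itself. A sketch, looking up the pod name first:

# find the pod created from the kompose-generated files
kubectl get pods
# check whether the certificate was ever placed in the container
kubectl exec -it <pod-name> -- ls -l /https/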
I'm working with Docker to create AEM (Adobe Experience Manager) images on the basis of the following repository https://github.com/AdobeAtAdobe/aem_6-1_docker
I just can't figure out how to open a debug mode for AEM.
I have tried adding a port to EXPOSE (EXPOSE 4502 30311) and adding a start file with new JVM_OPTS: CQ_JVM_OPTS="-debug -Xnoagent -Djava.compiler=none -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=30311 ${CQ_JVM_OPTS}"
I have also tried changing START_OPTS: START_OPTS="${START_OPTS} -debug 30311"
I'm not really comfortable with Docker yet, so I'm not sure what I'm missing to start up the debug mode. Do I need to open a port in Docker via ENV or RUN?
You have to bind your host ports to container ports.
So, add the flags -p 4502:4502 -p 30311:30311 to your docker run command.
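Putting it together, a sketch of the full command (the image name aem-image is just a placeholder):

docker run -d -p 4502:4502 -p 30311:30311 aem-image

You can then attach your IDE's remote debugger to localhost:30311.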
I've installed Docker on CentOS 7; now I'm trying to launch the server in a Docker container.
$ docker run -d --name "openshift-origin" --net=host --privileged \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/openshift:/tmp/openshift \
openshift/origin start
This is the output:
Post http:///var/run/docker.sock/v1.19/containers/create?name=openshift-origin: dial unix /var/run/docker.sock: permission denied. Are you trying to connect to a TLS-enabled daemon without TLS?
I have tried the same command with sudo and that works fine (I can also run images in the OpenShift bash, etc.). But it feels wrong to use it, am I right? What is the solution to make it work as a normal user?
Docker is running (sudo service docker start). Restarting CentOS did not help.
The error is:
/var/run/docker.sock: permission denied.
That seems pretty clear: the permissions on the Docker socket at /var/run/docker.sock do not permit you to access it. This is reasonably common, because handing someone access to the Docker API is effectively the same as giving them sudo privileges, but without any sort of auditing.
If you are the only person using your system, you can:
Create a docker group or similar if one does not already exist.
Make yourself a member of the docker group.
Modify the startup configuration of the docker daemon to make the socket owned by that group by adding -G docker to the options. You'll probably want to edit /etc/sysconfig/docker to make this change, unless it's already configured that way.
With these changes in place, you should be able to access Docker from your user account without requiring sudo.
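A sketch of those steps on CentOS 7, following the answer above (log out and back in afterwards so the group change takes effect):

# create the group and add your user to it
sudo groupadd docker
sudo usermod -aG docker $USER
# then add -G docker to the daemon options in /etc/sysconfig/docker, e.g.:
#   OPTIONS='-G docker'
sudo service docker restart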
Our Docker images ship closed-source code, so we need to store them somewhere safe, using our own private Docker registry.
We are looking for the simplest way to deploy a private Docker registry with a simple authentication layer.
I found:
- this manual way: http://www.activestate.com/blog/2014/01/deploying-your-own-private-docker-registry
- the shipyard/docker-private-registry Docker image, based on stackbrew/registry with basic auth added via Nginx: https://github.com/shipyard/docker-private-registry
I'm thinking of using shipyard/docker-private-registry, but is there another, better way?
I'm still learning how to run and use Docker, so consider this an idea:
# Run the registry on the server, allow only localhost connection
docker run -p 127.0.0.1:5000:5000 registry
# On the client, set up ssh tunneling
ssh -N -L 5000:localhost:5000 user@server
The registry is then accessible at localhost:5000, authentication is done through ssh that you probably already know and use.
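With the tunnel up, pushing works like pushing to a local registry. A sketch with a hypothetical image name:

# tag an existing image against the tunneled registry and push it
docker tag my-app localhost:5000/my-app
docker push localhost:5000/my-app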
Sources:
https://blog.codecentric.de/en/2014/02/docker-registry-run-private-docker-image-repository/
https://docs.docker.com/userguide/dockerlinks/
You can also use an Nginx front-end with a Basic Auth and an SSL certificate.
Regarding the SSL certificate, I spent a couple of hours trying to get a working self-signed certificate, but Docker wasn't able to work with the registry. To solve this I used a free signed certificate, which works perfectly. (I used StartSSL, but there are others.)
Also be careful when generating the certificate. If you want the registry running at the URL registry.damienroch.com, you must issue the certificate for this exact URL including the sub-domain, otherwise it's not going to work.
You can perform all this setup using Docker and my nginx-proxy image (See the README on Github: https://github.com/zedtux/nginx-proxy).
This means that if you have installed nginx using the distribution's package manager, you will replace it with a containerised nginx.
Place your certificate (.crt and .key files) on your server in a folder (I'm using /etc/docker/nginx/ssl/ and the certificate names are private-registry.crt and private-registry.key)
Generate a .htpasswd file and upload it to your server (I'm using /etc/docker/nginx/htpasswd/ and the filename is accounts.htpasswd)
Create a folder where the images will be stored (I'm using /etc/docker/registry/)
Run my nginx-proxy image using docker run
Run the Docker registry with some environment variables that nginx-proxy will use to configure itself.
Here is an example of the commands to run for the previous steps:
sudo docker run -d --name nginx -p 80:80 -p 443:443 -v /etc/docker/nginx/ssl/:/etc/nginx/ssl/ -v /var/run/docker.sock:/tmp/docker.sock -v /etc/docker/nginx/htpasswd/:/etc/nginx/htpasswd/ zedtux/nginx-proxy:latest
sudo docker run -d --name registry -e VIRTUAL_HOST=registry.damienroch.com -e MAX_UPLOAD_SIZE=0 -e SSL_FILENAME=private-registry -e HTPASSWD_FILENAME=accounts -e DOCKER_REGISTRY=true -v /etc/docker/registry/data/:/tmp/registry registry
The first line starts nginx and the second one the registry. It's important to do it in this order.
When both are up and running, you should be able to log in with:
docker login https://registry.damienroch.com
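For the accounts.htpasswd file mentioned in the steps above, something like this should work (htpasswd ships with the apache2-utils/httpd-tools package):

# create the file with a first user; you are prompted for the password
htpasswd -c /etc/docker/nginx/htpasswd/accounts.htpasswd myuser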
I have created a setup for running a docker-registry that is almost ready to use, and certainly ready to function: https://github.com/kwk/docker-registry-setup
Maybe it helps.
Everything (registry, auth server, and LDAP server) is running in containers, which makes the parts replaceable as soon as you're ready. The setup is fully configured to make it easy to get started. There are even demo certificates for HTTPS, but they should be replaced at some point.
If you don't want LDAP authentication but simple static authentication you can disable it in auth/config/config.yml and put in your own combination of usernames and hashed passwords.
I have set up a basic Redis image based on the following instructions: http://docs.docker.io/en/latest/examples/running_redis_service/
With my snapshot I have also edited the redis.conf file to set requirepass.
My server runs fine and I am able to access it remotely using redis-cli; however, the authentication isn't working. I am wondering whether the config file is being used at all, because when I try starting the container with:
docker run -d -p 6379:6379 jwarzech/redis /usr/bin/redis-server /etc/redis/redis.conf
the container immediately crashes.
The default config of Redis sets it up to run as a daemon. You can't run a daemon as the main process of a Docker container; otherwise, lxc will lose track of it and will destroy the namespace.
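One way around this, assuming a Redis version that accepts command-line overrides, is to disable daemonize when starting the server (arguments after the config file override its settings):

docker run -d -p 6379:6379 jwarzech/redis /usr/bin/redis-server /etc/redis/redis.conf --daemonize no

Alternatively, set daemonize no directly in your redis.conf.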
I just tried doing this within the container:
$>redis-server - << EOF
requirepass foobared
EOF
Now I can connect to it, and without a password I get an 'ERR operation not permitted'. When I connect with redis-cli -a foobared, it works fine.