Docker: how to force graylog web interface over https? - ssl

I'm currently struggling to get graylog working over https in a docker environment. I'm using the jwilder/nginx-proxy and I have the certificates in place.
When I run:
docker run --name=graylog-prod \
  --link mongo-prod:mongo \
  --link elastic-prod:elasticsearch \
  -e VIRTUAL_PORT=9000 \
  -e VIRTUAL_HOST=test.myserver.com \
  -e GRAYLOG_WEB_ENDPOINT_URI="http://test.myserver.com/api" \
  -e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -d graylog2/server
I get the following error:
We are experiencing problems connecting to the Graylog server running
on http://test.myserver.com:9000/api. Please verify that the server is
healthy and working correctly.
You will be automatically redirected to the previous page once we can
connect to the server.
This is the last response we received from the server:
Error message: Bad request
Original Request: GET http://test.myserver.com/api/system/sessions
Status code: undefined
Full error message: Error: Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
When I go to the URL in the message, I get a reply: {"session_id":null,"username":null,"is_valid":false}
This is the same reply I get when running Graylog without https.
The Docker logs for the Graylog container show nothing relevant.
docker ps:
CONTAINER ID   IMAGE             COMMAND                  CREATED         STATUS         PORTS                 NAMES
56c9b3b4fc74   graylog2/server   "/docker-entrypoint.s"   5 minutes ago   Up 5 minutes   9000/tcp, 12900/tcp   graylog-prod
When running Docker with the option -p 9000:9000, everything works fine without https, but as soon as I force it to go over https I get this error.
Does anyone have an idea what I'm doing wrong here?
Thanks a lot!

Did you try GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" ?
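In full, the suggested change would look like this (a sketch reusing the exact names from the question; only the endpoint URI scheme changes, since jwilder/nginx-proxy terminates TLS itself and still talks plain HTTP to the container on port 9000):

```shell
docker run --name=graylog-prod \
  --link mongo-prod:mongo \
  --link elastic-prod:elasticsearch \
  -e VIRTUAL_PORT=9000 \
  -e VIRTUAL_HOST=test.myserver.com \
  -e GRAYLOG_WEB_ENDPOINT_URI="https://test.myserver.com/api" \
  -e GRAYLOG_PASSWORD_SECRET=somepasswordpepper \
  -e GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918 \
  -d graylog2/server
```

The browser loads the page over https, so the web interface must also point its API calls at the https URL, otherwise they are blocked as mixed content / CORS failures like the one in the error message.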

Related

Change Master Password on Payara/Glassfish Server

Background: I need to change the payara-server master-password. According to the docs, the master-password must match the password in the keystore & truststore for the SSL certificates to work properly, so that my website runs on https instead of http.
I got Payara-Server running in a Docker Container through the guide:
I tried to change the payaradomain master-password, but I get a cyclic error:
1. Made sure payaradomain isn't running:
./asadmin stop-domain --force=true payaradomain
When I run this command, domain1 gets killed instead, and I then get kicked out of the docker container:
./asadmin stop-domain --kill=true payaradomain
When I execute this command:
./asadmin list-domains
Response:
domain1 running
payaradomain not running
Command list-domains executed successfully.
Then tried command:
./asadmin stop-domain --force=true payaradomain
Response:
CLI306: Warning - The server located at /opt/payara41/glassfish/domains/payaradomain is not running.
I'm happy with that, but when I try:
./asadmin change-master-password payaradomain
I get this response:
Domain payaradomain at /opt/payara41/glassfish/domains/payaradomain is running. Stop it first.
I have attached the picture below: please help...
If you want to configure Payara server in docker, including the master password, you should do it by creating your own docker image by extending the default Payara docker image. This is the simplest Dockerfile:
FROM payara/server-full
# run payaradomain by default instead of domain1 (see note below)
ENV PAYARA_DOMAIN payaradomain
# specify a new master password "newpassword" instead of the default password "changeit"
RUN echo 'AS_ADMIN_MASTERPASSWORD=changeit\nAS_ADMIN_NEWMASTERPASSWORD=newpassword' >> /opt/masterpwdfile
# execute asadmin command to apply the new master password
RUN ${PAYARA_PATH}/bin/asadmin change-master-password --passwordfile=/opt/masterpwdfile payaradomain
Then you can build your custom docker image with:
docker build -t my-payara/server-full .
And then run my-payara/server-full instead of payara/server-full.
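One portability note on the echo line in the Dockerfile above: whether echo interprets \n as a newline depends on the shell (bash's builtin needs -e, while dash's interprets it by default), so printf is the more reliable way to write the two-line password file. A quick local check:

```shell
# printf treats \n consistently across shells, unlike echo
printf 'AS_ADMIN_MASTERPASSWORD=changeit\nAS_ADMIN_NEWMASTERPASSWORD=newpassword\n' > /tmp/masterpwdfile
cat /tmp/masterpwdfile
```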
Also note that with the default Payara docker image, you should specify the PAYARA_DOMAIN variable to run payaradomain instead of domain1, such as:
docker run --env PAYARA_DOMAIN=payaradomain payara/server-full
The sample Dockerfile above redefines this variable so that payaradomain is used by default, without need to specify it when running the container.
Alternative way to change master password
You can alternatively run the docker image without starting Payara Server: run a bash shell first, perform the necessary commands in the console, and then start the server from the shell.
To do that, you would run the docker image with:
docker run -t -i --entrypoint /bin/bash payara/server-full
The downside of this approach is that the docker container runs in foreground and if you restart it then payara server has to be started again manually, so it's really only for testing purposes.
The reason you get the messages saying payaradomain is running is that you have started domain1. payaradomain and domain1 use the same ports, and the check to see whether a domain is running looks to see whether the admin port for that domain is in use.
In order to change the master password you must either have both domains stopped or change the admin port for payaradomain.
Instead of echoing passwords in the Dockerfile, it is safer to COPY a file containing the passwords during the build and remove it when the build is finished.
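A sketch of that safer approach, assuming a local file passwords.txt next to the Dockerfile holding the same two AS_ADMIN_* lines (note that the copied file still exists in the intermediate COPY layer, so for real secrecy you would use build secrets or a multi-stage build):

```dockerfile
FROM payara/server-full
# passwords.txt (hypothetical local file) contains:
#   AS_ADMIN_MASTERPASSWORD=changeit
#   AS_ADMIN_NEWMASTERPASSWORD=newpassword
COPY passwords.txt /opt/masterpwdfile
# apply the new master password, then delete the password file from the final layer
RUN ${PAYARA_PATH}/bin/asadmin change-master-password --passwordfile=/opt/masterpwdfile payaradomain \
    && rm /opt/masterpwdfile
```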

Error on running nsolid on Mac OSX

I get the following error when running nsolid on Mac OS X. I am running a simple Node REPL application on the Node runtime environment, as specified in the Quick Start Guide.
Error:
{"time":"2016-08-23T13:48:59.943Z","hostname":"xxxxxxx-mbpr","pid":3867,"level":"error","name":"nsolid-proxy","err":{"name":"Error","message":"client request timeout","stack":"Error: client request timeout\n at onTimeout (/usr/local/nsolid/proxy/node_modules/nsolid-rpcclient/node_modules/client-request/request.js:113:17)\n at Timer.listOnTimeout (timers.js:92:15)"}}
Error: client request timeout means that the proxy can't reach the N|Solid process.
First you'll need to know the IP and PORT of the process registered, you can get it by running:
$ nsolid-cli ls
{"pid":2662,"hostname":"ns-work.local","app":"nsolid-default","address":"192.168.0.1:50549","id":"fd1190b2ce8f39e032cb262440dfba5408cde9fc"}
You can try to reach that IP and PORT using curl with:
$ curl http://192.168.0.1:50549/ping
PONG%
And it should return PONG if everything is OK or you can use $ nsolid-cli ping to ping your applications.
If for some reason you don't have network access to that IP registered to the N|Solid Hub, you can define it yourself when running your N|Solid process, a recommended way (when using the developer bundle) is to run it like:
$ NSOLID_SOCKET=localhost node server.js
So it will register on the local interface and the proxy will have no problem reaching it.

SSH tunnel not working - empty server response

I am trying to setup an SSH tunnel but I am new to this process. This is my setup:
Machine B has a web service with restricted access. Machine A has been granted access to Machine B's service, based on a firewall IP whitelist.
I can connect to Machine A using an ssh connection. After that I try to access the webservice on Machine B from my localhost, but I cannot.
The webservice endpoint looks like this:
service.test.organization.com:443/org/v1/sendData
So far, I have created an ssh tunnel like this:
ssh -L 1234:service.test.organization.com:443 myuser@machineb.com
My understanding was that using this approach, I could hit localhost:1234 and it would be forwarded to service.test.organization.com:443, through Machine B.
I have confirmed that from Machine B I can execute a curl command to send a message to the webservice and get a response (so that part works). I have tried using Postman in my browser, and curl in a terminal from localhost, but I have been unsuccessful. (curl -X POST -d @test.xml localhost:1234/org/v1/sendData)
Error message: curl: (52) Empty reply from server
There's a lot of material on SSH and I am sifting through it, but if anyone has any pointers, I would really appreciate it!
Try adding the Host HTTP header: curl -H "Host: service.test.organization.com" -X POST -d @test.xml http://localhost:1234/org/v1/sendData
The networking issue was caused by the request format. My request object was built with a destination of 'localhost:1234'. So even though it was reaching the proper machine, the machine ignored it.
To solve this I added a record to my hosts file, like this (the format is IP first, then hostname):
127.0.0.1 service.test.organization.com
Then I was able to send the message. First I opened the tunnel:
ssh -L 443:service.test.organization.com:443 myuser@machineb.com
Then I used this curl command: curl -X POST -d @test.xml service.test.organization.com:443/org/v1/sendData
The host file causes the address to resolve to localhost, then the ssh tunnel knows to forward it on.
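Putting the workaround together, the full sequence might look like this (a sketch using the same hostnames as above; the hosts-file format is IP first, then hostname, and binding local port 443 requires root):

```shell
# 1. hosts entry so the service name resolves to the local end of the tunnel
echo '127.0.0.1 service.test.organization.com' | sudo tee -a /etc/hosts

# 2. open the tunnel: local port 443 -> service:443, via machine B
sudo ssh -L 443:service.test.organization.com:443 myuser@machineb.com

# 3. in another terminal, the original request now goes through the tunnel
curl -X POST -d @test.xml service.test.organization.com:443/org/v1/sendData
```

The point of using port 443 locally is that the request then carries the hostname and port the service expects, so it is not ignored the way the localhost:1234 request was.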

Apache script config with loggly

I am trying to configure loggly in apache in my ubuntu machine.
What I have done is
curl -O https://www.loggly.com/install/configure-apache.sh
sudo bash configure-apache.sh -a XXXXXX -u XXXXXX
After entering the last line, it says:
ERROR: Apache logs did not make to Loggly in time. Please check network and firewall settings and retry.
Manual instructions to configure Apache2 is available at https://www.loggly.com/docs/sending-apache-logs/. Rsyslog troubleshooting instructions are available at https://www.loggly.com/docs/troubleshooting-rsyslog/
Any idea why this is happening and how to solve it?
This is likely a network issue, a delay in sending the logs, or an issue with the script itself. The manual instructions are at https://www.loggly.com/docs/sending-apache-logs/ — you can follow them to verify that the script created the configuration files correctly.
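If the script did run, it should have dropped an rsyslog configuration pointing at Loggly. A few hedged checks (logs-01.loggly.com:514 is Loggly's standard rsyslog endpoint; the exact config filename the script writes may vary):

```shell
# look for the config the script should have written
grep -ril loggly /etc/rsyslog.d/ /etc/rsyslog.conf
# verify outbound connectivity to Loggly's syslog endpoint
nc -vz logs-01.loggly.com 514
# restart rsyslog and watch for errors in its own log
sudo service rsyslog restart && tail -n 20 /var/log/syslog
```

If nc cannot connect, the error message's suggestion about network and firewall settings is the right lead to follow.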

docker Mule-server curl: (56) Recv failure: Connection reset by peer

This might just be my rookie knowledge of Docker,
but I can't get the networking to work.
I'm trying to run a Mule-server via the pr3d4t0r/mule repository.
I can run it and hot-swap applications, but I can't reach it.
I can run a local server without Docker, and it works flawlessly.
But not when I try it with Docker.
When I try to do a simple curl command I get "curl: (56) Recv failure: Connection reset by peer"
curl http://localhost:8090/Sven
I have tried exposing the ports via -P and separately via -p 8090:8090 but no luck.
While the Docker container is running it occupies the ports (I tried running Docker and the normal server at the same time, but the normal one said the ports were already in use).
When I try another Image like jboss/wildfly and I use -p 8080:8080 there's no problem, it works perfectly.
The application in the mule-server will log and respond a simple "hello World", the output says that the application is deployed, but no messages or logging while I try to reach it.
Any suggestions?
In my case it was actually the app that was configured incorrectly: it had localhost as its host. It should have been 0.0.0.0; without this it was listening only on localhost, i.e. only inside the docker container, and was not reachable from outside of it.
You should not need to use --net=host.
So check your configuration: in application.properties the address needs to be 0.0.0.0, not 127.0.0.1.
The error
"curl: (56) Recv failure: Connection reset by peer"
means that no process in the docker container is listening on that port. The -p option binds a port on the host system to a port in the container:
-p <port on host to bind>:<port in container>
So check your image: maybe your app inside the container listens on a different port, e.g. 8080, in which case you need:
-p 8090:8080
Also, if you have server.address=localhost in your application.properties, comment it out or remove it.
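Both of these answers assume the app reads its bind address from a Spring Boot-style application.properties; in that case the fix is a one-line change (8090 is the port from the question):

```properties
# bind to all interfaces so Docker's -p port mapping can reach the app;
# binding to localhost/127.0.0.1 makes it reachable only inside the container
server.address=0.0.0.0
server.port=8090
```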