When trying to run rhc setup, I get the following:
[vagrant#localhost ~]$ rhc setup --debug --server localhost:8443 --insecure
DEBUG: Using config file /home/vagrant/.openshift/express.conf
DEBUG: Running greeting_stage
OpenShift Client Tools (RHC) Setup Wizard
This wizard will help you upload your SSH keys, set your application namespace, and check that other programs like Git are properly installed.
DEBUG: Running server_stage
DEBUG: Running login_stage
DEBUG: Connecting to https://localhost:8443/broker/rest/api
DEBUG: Client supports API versions 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7
DEBUG: Created new httpclient
DEBUG: Request GET https://localhost:8443/broker/rest/api
DEBUG: SSL Verification failed -- Using self signed cert
DEBUG: code 403 25 ms
Is there some way to run setup with the self-signed cert that comes with the Origin all-in-one?
Per Graham Dumpleton's comment, I am trying to solve the wrong problem.
Be aware that the command line client for the current Origin all-in-one is oc, not rhc. The rhc client is for the older OpenShift 2, not the latest OpenShift 3. If you really want OpenShift 3, perhaps read through the free eBook at openshift.com/promotions/for-developers.html to understand how to use the latest Origin all-in-one VM. – Graham Dumpleton Jan 2 at 23:20
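For reference, the oc client can be pointed at the all-in-one VM while skipping verification of the self-signed certificate; a minimal sketch, reusing the host and port from the question above:
oc login https://localhost:8443 --insecure-skip-tls-verify=true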
Related
I have a server running Linux. Docker is installed, along with a container running GitLab. Everything is working fine. I intend to install and register a runner on a Windows 10 machine to use in my CI/CD process (the reason is that I have multiple .NET projects that need to be compiled and built during deployment, so I decided to place them on Windows; by registering a Shell runner I can run a batch script to build those projects).
When I try to register the runner I get this error:
x509: certificate signed by unknown authority
The GitLab docs explain how to solve this by creating a self-signed SSL certificate.
After much effort I am still getting this error. I am a bit new to SSL, but I followed these steps:
First, I created a self-signed certificate on my GitLab container using the commands from this guide:
https://docs.bitnami.com/aws/apps/gitlab/administration/create-ssl-certificate-nginx/
Then I used this file on Windows to register the GitLab runner, but the error is still thrown during registration.
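For reference, the registration command I am running looks roughly like the following sketch (the token and the Windows path are placeholders, not the actual values from my setup):
gitlab-runner.exe register --non-interactive --url "https://gitlab-hostname.tld" --registration-token "REGISTRATION_TOKEN" --executor "shell" --tls-ca-file "C:\GitLab-Runner\certs\gitlab-hostname.tld.crt"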
When I use the following command on Windows to verify the certificate:
echo | openssl s_client -CAfile /etc/gitlab-runner/certs/gitlab-hostname.tld.crt -connect gitlab-hostname.tld:443
I run into this error in the last lines:
read R BLOCK
HTTP/1.1 400 Bad Request
Server: nginx
Date: Wed, 01 Jul 2020 07:58:52 GMT
Content-Type: text/html
Content-Length: 150
Connection: close
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>nginx</center>
</body>
</html>
read:errno=0
Can anyone provide detailed steps to solve this problem? I have been searching for an applicable answer but have not found one yet.
PS: gitlab-runner x509: certificate signed by unknown authority did not fix my problem
I was running the CodeBuild job when it threw this error:
[Container] 2018/10/18 00:43:55 Running command $(aws ecs stop-task --task arn:aws:ecs:ap-southeast-1:502776083946:task/207cfc8b-914d-4c4b-9c8a-0ffbfcef6924 --cluster arn:aws:ecs:ap-southeast-1:502776083946:cluster/timesheet-staging-cluster)
/usr/local/lib/python2.7/dist-packages/urllib3/util/ssl_.py:369: SNIMissingWarning: An HTTPS request has been made, but the SNI (Server Name Indication) extension to TLS is not available on this platform. This may cause the server to present an incorrect TLS certificate, which can cause validation failures. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
SNIMissingWarning
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: {: not found
How do I resolve this? I need the ECS tasks to be stopped in order for a new task to be deployed.
I'm not using localhost to test the service worker. The server has a self-signed cert and it is working.
While trying to get a push token from FCM, it shows:
ServiceWorker registration failed: DOMException: Failed to register a ServiceWorker: An SSL certificate error occurred when fetching the script.
Can the FCM service worker work with a server's self-signed cert?
It is a staging server, therefore we won't be buying an SSL cert for it.
Looks like you can't use service workers with self-signed certs.
Run Chrome with custom flags to whitelist your domain for testing purposes:
/Applications/Google\ Chrome\ Canary.app/Contents/MacOS/Google\ Chrome\ --user-data-dir=/tmp/foo --unsafely-treat-insecure-origin-as-secure=http://www.your.site
Make sure you use the correct path where Chrome is installed.
See https://stackoverflow.com/a/43484456/545726
And https://deanhume.com/home/blogpost/testing-service-workers-locally-with-self-signed-certificates/10155
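On Linux the same flags apply if Chrome is on the PATH; a rough equivalent (the origin is a placeholder):
google-chrome --user-data-dir=/tmp/foo --unsafely-treat-insecure-origin-as-secure=http://www.your.site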
To add to aiham's answer to this question:
I tested the latest versions of the following browsers, and they also work with these arguments:
open -a Opera.app --args --user-data-dir=/tmp/foo --ignore-certificate-errors --unsafely-treat-insecure-origin-as-secure=https://localhost:8111
open -a Brave\ Browser.app --args --user-data-dir=/tmp/foo --ignore-certificate-errors --unsafely-treat-insecure-origin-as-secure=https://localhost:8111
open -a Google\ Chrome.app --args --user-data-dir=/tmp/foo --ignore-certificate-errors --unsafely-treat-insecure-origin-as-secure=https://localhost:8111
The Chromium browser did not start with these settings, so it could not be used to work around this specific DOMException when using SSL with a service worker locally.
This post provides some further insight on the matter: https://deanhume.com/testing-service-workers-locally-with-self-signed-certificates/
My question is: as I understand it, docker-machine uses the Docker remote API for everything it does, for example to regenerate certificates. I have checked the Docker API but couldn't find how it is possible to send certificates to that machine using only the Docker API. Can someone help, please?
The TLS files are hosted locally on the Docker client. For this reason you should protect the files as if they were a root password.
This page will walk you through generating the files needed to negotiate a connection over TLS. Note that the remote daemon must be running TLS.
https://docs.docker.com/engine/security/https/
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=$HOST:2376 version
Note: Docker over TLS should run on TCP port 2376.
Warning: As shown in the example above, you don't have to run the docker client with sudo or the docker group when you use certificate authentication. That means anyone with the keys can give any instructions to your Docker daemon, giving them root access to the machine hosting the daemon. Guard these keys as you would a root password!
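For completeness, the remote daemon itself must also be started with TLS enabled; a minimal sketch, assuming the server-side files (ca.pem, server-cert.pem, server-key.pem) were generated per the guide linked above:
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376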
We are trying to use RestComm Olympus with a few customizations as part of our application. The main customization is that we have deployed the Olympus WAR on our Apache Tomcat web server, and the outbound proxy is properly pointed to the same server where RestComm is running.
So far all is good, but recently we hit the getUserMedia() deprecation issue, caused by Chromium's insecure-origin restriction.
This means we need to use HTTPS and WSS. I can see that around 7 days ago the Olympus code was updated on GitHub to use WSS if HTTPS is used in the browser location bar.
So first we installed a self-signed cert and enabled the SSL config on Tomcat so that our customized Olympus UI is accessed via HTTPS from Tomcat. Then we used the WSS protocol to connect to the outbound proxy, but we got the error below:
"WebSocket connection to 'wss:/:5082/' failed: Error in connection establishment: net::ERR_TIMED_OUT
WSMessageChannel:createWebSocket(): websocket connection has failed:[object Event]"
Then we thought that, in addition to Tomcat (where the WAR is deployed), we need to install a self-signed cert and SSL config on RestComm as well. So we did that by following http://docs.telestax.com/restcomm-enable-https-secure-connector-on-jboss-as-7-or-eap-6/ and again used the WSS protocol.
But this time we also got an error, though with a different error code:
"WebSocket connection to 'wss:/:5083/' failed: Error in connection establishment: net::ERR_CONNECTION_CLOSED
WSMessageChannel:createWebSocket(): websocket connection has failed:[object Event]"
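One way to check whether that port is presenting a TLS certificate at all is an openssl probe such as the following (the hostname is a placeholder):
echo | openssl s_client -connect restcomm-hostname:5083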
Can I ask the forum to explain if we are missing anything here?
Thanks in advance
I would suggest using the mobicents RestComm Docker image instead of the zip bundle, because with the Docker image all settings are handled automatically and HTTPS/WSS should work out of the box. Here are some quick steps to get you started:
Install Docker on your Ubuntu host if it is not already there.
Download the RestComm Docker image:
$ docker pull mobicents/restcomm:latest
Start the Docker image:
$ docker run -e SECURE="true" -e SSL_MODE="allowall" -e USE_STANDARD_PORTS="true" -e VOICERSS_KEY="VOICERSS_KEY_HERE" --name=restcomm -d -p 80:80 -p 443:443 -p 9990:9990 -p 5060:5060 -p 5061:5061 -p 5062:5062 -p 5063:5063 -p 5060:5060/udp -p 65000-65535:65000-65535/udp mobicents/restcomm:latest
Now you should be able to reach your RestComm instance Admin UI at:
https://<host ip address>/
Make sure you don't have any servers running on your host at the ports used by the Docker container above, or you'll have to use different ports (please refer to the Docker Hub page for such options).
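Once the container is up, a quick way to confirm that HTTPS is answering is curl with the -k flag, which accepts the self-signed certificate (HOST_IP is a placeholder for your host's IP address):
curl -k https://HOST_IP/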
Best regards,
Antonis Tsakiridis