DCOS server responded with an HTTP 'www-authenticate' field of 'oauthjwt', DCOS only supports 'Basic'

I just created a new DCOS cluster and am trying to run dcos CLI commands. For any command I get the following error:
dcos package list
Server responded with an HTTP 'www-authenticate' field of 'oauthjwt', DCOS only supports 'Basic'
I checked dcos.toml and it contains the correct parameters: dcos_url (an HTTP URL), email, and token.
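For reference, the fields mentioned live in dcos.toml roughly like this (values are placeholders; the [core] section name is an assumption based on the standard dcos CLI config layout):
[core]
dcos_url = "http://<cluster-address>"
email = "<account-email>"
token = "<auth-token>"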
Any help will be appreciated.

Related

Deploy ASP.NET Core web app that uses Azure AD authentication on a Linux server

When I deploy my ASP.NET Core web app, which uses Azure AD authentication, on a Linux server, it runs on an IP address with a particular port number over HTTP. The Azure portal does not allow the HTTP protocol with an IP address; it has to be HTTPS. How can I run my application over HTTP while it uses Azure AD authentication?
I tried to add the new IP address in the Azure AD portal, but it does not allow IP addresses with the HTTP protocol (only localhost is allowed for HTTP). If I add HTTPS, I get the error
The redirect URI 'url' specified in the request does not match the redirect URIs configured for the application ''
I tried to add the full path for "CallbackPath" in the appsettings.json file. When I run the application, I get the error
ArgumentException: The path in 'value' must start with '/'. (Parameter 'value')
I want to run my application on Linux, using an IP address with the HTTP protocol. How can I solve this problem?
I tried to reproduce the same in my environment to deploy a .NET web application on a Linux server:
I created a Linux virtual machine:
Azure Portal > Virtual machines > Create > Create a virtual machine.
Connected to the Linux VM using PuTTY.
Installed .NET on the Linux server using the commands below (see the MS document here):
wget https://packages.microsoft.com/config/ubuntu/22.10/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
rm packages-microsoft-prod.deb
Installed the required libraries and the .NET SDK using the commands below:
sudo apt-get update && \
sudo apt-get install -y dotnet-sdk-7.0
sudo apt-get update && \
sudo apt-get install -y aspnetcore-runtime-7.0
sudo apt-get install -y dotnet-runtime-7.0
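To confirm the SDK and runtimes installed correctly, a quick check:
dotnet --version
dotnet --list-runtimes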
Transferred the .NET web application to the Linux server using WinSCP (follow the document here).
Once the files are copied to the Linux server, navigate to the root (publish) folder and run the DLL:
dotnet linuxapp.dll
Once the DLL is running, test the web application with the command below:
curl http://localhost:5000
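If the application does not listen on port 5000 by default, the URL can be passed explicitly when starting it (a standard ASP.NET Core host option; linuxapp.dll is the sample name from above):
dotnet linuxapp.dll --urls "http://localhost:5000"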
To check the web application on the public IP, we need to install Nginx (follow the document here):
sudo apt update
sudo apt install nginx
Once Nginx is installed, test the web application using the public IP.
If the Nginx application is running, stop the service before configuring it:
sudo service nginx stop
Using WinSCP, go to /etc/nginx/sites-available/default (check the permissions) and update the location block with the values below.
location / {
    proxy_pass http://localhost:5000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection keep-alive;
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
Once the file is updated, start the Nginx service:
sudo service nginx start
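Optionally, validate the configuration before starting the service; nginx -t is the standard syntax check and reports which file is at fault if something is wrong:
sudo nginx -t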
Note: Make sure to add an HTTP inbound rule on the VM.
Run the DLL again:
dotnet linuxapp.dll
Once the DLL is running, test the web application with the public IP.
When we create an App Service, it automatically assigns a certificate, and then we have to use HTTPS.
But we can access an Azure App Service over HTTP by adding a custom domain name without binding a certificate.
And for this error message:
The redirect URI 'url' specified in the request does not match the redirect URIs configured for the application ''
We should check the App registrations in the Azure portal, not appsettings.json.
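If a redirect URI is missing there, it can be added under App registrations > Authentication in the portal or, assuming a recent Azure CLI, from the command line (the app ID and URI below are placeholders; /signin-oidc is the default ASP.NET Core callback path):
az ad app update --id <application-id> --web-redirect-uris "https://<your-host>/signin-oidc"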

In Cloud Foundry, how do I create a service to run my Apache web server?

I'm on Ubuntu 18, running the following version of Cloud Foundry ...
$ cf -v
cf version 7.4.0+e55633fed.2021-11-15
I would like to set up several containers, each running off a Docker image. The first is an Apache web server. I have the following Dockerfile:
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
COPY ./my-vhosts.conf /usr/local/apache2/conf/extra/httpd-vhosts.conf
COPY ./directory /usr/local/apache2/htdocs/directory
How do I set this up in Cloud Foundry? I tried creating a service but got these errors:
$ cf cups apache-service -p "localhost, 80"
FAILED
No API endpoint set. Use 'cf login' or 'cf api' to target an endpoint.
When I tried to create this API endpoint I got
$ cf api "http://my_ip_address"
Setting API endpoint to http://my_ip_address...
Request error: Get "http://my_ip_address": dial tcp my_ip_address:80: connect: connection refused
TIP: If you are behind a firewall and require an HTTP proxy, verify the https_proxy environment variable is correctly set. Else, check your network connection.
I'm thinking I'm missing something rather substantial, but I don't know the right questions to ask.
The error message you are getting (dial tcp my_ip_address:80: connect: connection refused) means that the address passed to cf api is not responding.
Ensure that your Cloud Foundry API endpoint is still active and that no firewall is preventing you from accessing the API (the port is open, the process is running, and the firewall is allowing traffic from your IP, if applicable).
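As a side note, cf cups creates a user-provided service, not an app; a Docker image like the one above is normally run with cf push once the endpoint is reachable. A minimal sketch, assuming the image has been built and pushed to a registry the platform can pull from (names are placeholders):
docker build -t <registry>/my-httpd:latest .
docker push <registry>/my-httpd:latest
cf push apache-app --docker-image <registry>/my-httpd:latest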

Bad handshake error with Hue Oozie server

I am trying to connect Hue to an SSL-enabled Oozie server, but I am facing the SSL issue below.
Error submitting workflow Batch job for query-pig: ("bad handshake: Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",)
I created a CA certificate from the Oozie server machine and configured it with the Hue server.
I am able to get the status information from the Oozie server using the curl command with the help of the certificate I generated, but the issue occurs only when communicating from the Hue server.
I also added the proxy user in the oozie-site.xml properties.
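For context, the working curl check against the Oozie status endpoint would look something like this (the host and certificate path are placeholders; /v1/admin/status is the standard Oozie REST status call):
curl --cacert /path/to/oozie-ca.pem https://<oozie-host>:11443/oozie/v1/admin/status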
hue.ini
[liboozie]
# The URL where the Oozie service runs on. This is required in order for
# users to submit jobs. Empty value disables the config check.
oozie_url=https://<fully-qualified-hostname>:11443/oozie
# Requires FQDN in oozie_url if enabled
security_enabled=true
use_libpath_for_jars=false
# Location on HDFS where the workflows/coordinator are deployed when submitted.
remote_deployement_dir=/user/hue/oozie/deployments
ssl_cert_ca_verify=true
I don't know what the difference is between connecting with curl and from the Hue server, since curl works perfectly for me while the Hue server doesn't.
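One difference worth checking (an assumption, since the question doesn't show it): curl is given the CA certificate explicitly, while Hue verifies peers against the CA bundle configured under [desktop] in hue.ini, so the generated certificate may need to be referenced there as well (the path is a placeholder):
[desktop]
# CA certificates Hue uses when verifying SSL connections
ssl_cacerts=/path/to/oozie-ca.pem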

Docker message: Automatically disabled Acquire::http::Pipeline-Depth due to incorrect response from server/proxy

Installing apache2 into an Ubuntu 16.04 Docker image, I am getting the following message:
W: http://archive.ubuntu.com/ubuntu/pool/main/g/gdbm/libgdbm3_1.8.3-13.1_amd64.deb: Automatically disabled Acquire::http::Pipeline-Depth due to incorrect response from server/proxy. (man 5 apt.conf).
This is the Dockerfile:
FROM ubuntu:16.04
#RUN apt-get update
#https://github.com/phusion/baseimage-docker/issues/319
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
RUN apt-get install -y apache2
When I open the image I see the /var/www/html folder, meaning Apache was installed.
What is that message? Is it an error, or can I consider Apache fully installed?
Pipelining is a feature of the HTTP/1.1 protocol. From RFC 7230:
A client that supports persistent connections MAY "pipeline" its
requests (i.e., send multiple requests without waiting for each
response). A server MAY process a sequence of pipelined requests in
parallel if they all have safe methods (Section 4.2.1 of [RFC7231]),
but it MUST send the corresponding responses in the same order that
the requests were received.
This feature can be activated in apt with the setting Acquire::http::Pipeline-Depth. From man apt.conf:
The setting Acquire::http::Pipeline-Depth can be used to enable HTTP pipelining (RFC 2616 section 8.1.2.2) which can be beneficial e.g. on high-latency connections. It specifies how many requests are sent in a pipeline. APT tries to detect and workaround misbehaving webservers and proxies at runtime, but if you know that yours does not conform to the HTTP/1.1 specification pipelining can be disabled by setting the value to 0. It is enabled by default with the value 10.
The message you see means that your connection to the apt repository doesn't support pipelining (probably because of some kind of proxy), and that this feature was automatically disabled by apt. The installation may take a bit longer, but you can consider your Apache server fully installed.
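If you would rather silence the warning than rely on apt's runtime detection, pipelining can be disabled up front with the same setting the man page describes, for example in the Dockerfile (the file name 99pipeline is arbitrary):
RUN echo 'Acquire::http::Pipeline-Depth "0";' > /etc/apt/apt.conf.d/99pipeline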

Artifactory Pro registry Docker image

I am trying out the 30-day trial version of the artifactory-registry Docker image to evaluate the Docker repository for our internal use. I am following the documentation: https://www.jfrog.com/confluence/display/RTF/Running+with+Docker
After I run the Docker image I am able to access the UI on port 8081; however, when I try to push an image I get the following error:
“The plain HTTP request was sent to HTTPS port”
Here's how I deploy the image:
sudo docker pull mysql
sudo docker tag mysql localhost:5002/mysql
sudo docker push localhost:5002/mysql
Also, the documentation says that Artifactory can be accessed at the following URLs:
http://localhost/artifactory
http://localhost:8081/artifactory
https://localhost:5000/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-remote/v2)
https://localhost:5001/v1 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-prod-local/v1)
https://localhost:5002/v1 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-dev-local/v1)
https://localhost:5001/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-prod-local/v2)
https://localhost:5002/v2 (This is mapped to http://localhost:8081/artifactory/api/docker/docker-dev-local/v2)
But I get a 404 when trying to access any of the HTTPS URLs.
What am I missing?
This appears to be an NGINX configuration issue (as described here), with NGINX not forwarding HTTPS requests to Artifactory.
Changing the configuration to forward your requests should fix your issue.
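For reference, a minimal sketch of the kind of forwarding block involved, based on the port-to-repository mapping listed above (certificate paths are placeholders; your actual reverse-proxy config will differ):
server {
    listen 5002 ssl;
    ssl_certificate /etc/nginx/ssl/artifactory.crt;
    ssl_certificate_key /etc/nginx/ssl/artifactory.key;
    location /v2 {
        # forward Docker registry calls to the mapped Artifactory repository path
        proxy_pass http://localhost:8081/artifactory/api/docker/docker-dev-local/v2;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}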