OpenShift v3: connecting an app to Redis. Connection refused - redis

I have created a Redis 3.2 application from the default image catalog.
I'm trying to connect a Python app that runs inside the same project to the Redis database.
This is what the Python application uses to connect to Redis:
import os
import asyncio
import aioredis

REDIS_HOST = 'localhost'
REDIS_PORT = 6379
REDIS_PASSWORD = os.environ.get('REDIS_PASSWORD') or 'test'

loop = asyncio.get_event_loop()
# aioredis.create_redis_pool is a coroutine, so it must be awaited
redis = loop.run_until_complete(aioredis.create_redis_pool(
    (REDIS_HOST, int(REDIS_PORT)),
    password=REDIS_PASSWORD,
    minsize=5,
    maxsize=10,
    loop=loop,
))
The deployment fails with a ConnectionRefusedError: [Errno 111] Connection refused.
My guess is that I need to use another value for REDIS_HOST, but I couldn't figure out what to use.
Does anyone know how to fix this?

After you deploy from the image catalog, a number of objects are created for you. One of those objects is a service, which is used to load balance requests to the pods it fronts. Service names for a project can be retrieved using the client tools via oc get svc.
This service name should be used to connect to your Redis instance. If you deploy Redis before your Python application, some environment variables will already be populated in your app's containers which you can use, for example REDIS_SERVICE_HOST and REDIS_SERVICE_PORT.
So from your application you can connect via the service IP or the service name. If the service name is redis, then: redis.StrictRedis(host='redis', port=6379, password='secret')
The Redis password may have been generated for you. In that case it is retrievable from the redis secret, which can also be mounted into your Python app.
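As a minimal sketch, here is the asker's aioredis setup adapted along those lines, falling back to the service name when the injected variables are absent (this assumes the service is named redis; the exact variable names depend on the service name):

import os
import asyncio
import aioredis

# REDIS_SERVICE_HOST / REDIS_SERVICE_PORT are injected by OpenShift for a service named "redis"
REDIS_HOST = os.environ.get('REDIS_SERVICE_HOST', 'redis')
REDIS_PORT = int(os.environ.get('REDIS_SERVICE_PORT', 6379))
REDIS_PASSWORD = os.environ.get('REDIS_PASSWORD')  # e.g. sourced from the redis secret

async def connect():
    return await aioredis.create_redis_pool(
        (REDIS_HOST, REDIS_PORT),
        password=REDIS_PASSWORD,
        minsize=5,
        maxsize=10,
    )

redis = asyncio.get_event_loop().run_until_complete(connect())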

Databases in general do not use plain HTTP, but their own TCP protocols. This is why in OpenShift we need to connect directly to the service using OpenShift's service hostname or IP address (caution: only the service hostname is predictable), instead of the usual route, and this applies to Redis as well. Bypassing the routes in OpenShift is like bypassing a reverse proxy such as nginx and connecting directly to the database backend.
There is no need to use env variables, because service hostnames are auto-generated by OpenShift using this predictable pattern:
service_name.project_name.svc, e.g.:
redis.db.svc
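As a minimal sketch of connecting via that hostname (assuming a service named redis in a project named db, both hypothetical, and the synchronous redis-py client):

import redis

# Connect straight to the predictable service hostname; no route is involved
r = redis.StrictRedis(host='redis.db.svc', port=6379, password='secret')
r.ping()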
More info
"When a web application is made visible outside of the OpenShift cluster a route is created. This enables a user to use a URL to access the web application from a web browser. A route is usually used for web applications which use the HTTP protocol. A route cannot be used to expose a database, as they would typically use their own distinct protocol, and routes would not be able to work with the database protocol."
https://blog.openshift.com/openshift-connecting-database-using-port-forwarding/

Related

Communication between AWS Fargate Services not working even with route53 setup

I have 2 services running in a single private VPC subnet (same availability zone). Each service is based on a container here: https://github.com/spring-petclinic/spring-petclinic-microservices .
I've set up Route 53 service endpoints for both services.
When I run my tasks (each within their own service), service A times out calling service B over service B's Route 53 endpoint. Using localhost doesn't work because these containers are in separate services.
When I create a container for my task definition, I assign the port that my container is using (using the port mapping field). However, I notice in the console there is this note: "Host port mappings are not valid when the network mode for a task definition is host or awsvpc. To specify different host and container port mappings, choose the Bridge network mode."
Since I'm using Fargate, I am using awsvpc mode. So is this telling me my port mapping setting isn't doing anything? Is that why my services are timing out?
Then when I Google bridge mode, this seems to tell me that awsvpc networking mode supports service discovery: https://aws.amazon.com/about-aws/whats-new/2018/05/amazon-ecs-service-discovery-supports-bridge-and-host-container-/
So how does "bridge mode" work here? Why does the port mapping field not work for awsvpc?
Edit:
I read How to communicate between Fargate services on AWS ECS?, and the answer just says "I created a new service and things started working." That's a bit disheartening.
Edit2:
Yes, my VPC has DNS resolution enabled.
As it turns out, the security group on my service was only allowing HTTP on port 80. Those are the inbound rules of the default security group that the service wizard gives you. I updated it to allow traffic on my container ports, and the services seem to be talking to each other now.
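For illustration, here is a hedged sketch of that security group fix using boto3 (the group ID and port are hypothetical; this assumes both services share one security group, so the rule allows traffic from the group to itself):

import boto3

ec2 = boto3.client('ec2')

# Allow the container port for traffic originating from the same security group
ec2.authorize_security_group_ingress(
    GroupId='sg-0123456789abcdef0',  # hypothetical: the services' shared security group
    IpPermissions=[{
        'IpProtocol': 'tcp',
        'FromPort': 8080,  # hypothetical container port
        'ToPort': 8080,
        'UserIdGroupPairs': [{'GroupId': 'sg-0123456789abcdef0'}],
    }],
)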

Can you create Kerberos principals where the hostname is flexible? (Docker)

I'm specifically trying to do this with Apache Storm (1.0.2), but it's relevant to any service that is secured with Kerberos. I'm trying to run a secured Storm cluster in Docker. There are a number of out-of-the-box Docker images out there for Storm, and they work great unsecured. I'm using https://github.com/Baqend/docker-storm. I also have Storm running securely on RHEL VMs.
However, my understanding is that Kerberos ties hostnames to principals, so if I'm making service foobar available to clients, I need to create a principal of foobar/hostname@REALM. Then a client service might connect to hostname with principal foobar, Kerberos will look up foobar/hostname@REALM in its database, find that it's there (because we created a principal with exactly that name), and everything will work.
In my case, it's described here: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.0/bk_installing_manually_book/content/configure_kerberos_for_storm.html. The nimbus authenticates as storm/<nimbus host>@REALM, and the supervisors and outside clients authenticate as storm@REALM. Everything works.
But here in 2017, we have containers, and hostnames are no longer static. So how would I Kerberize a service that runs in Docker Data Center (or Kubernetes, etc.)? I would have to attach an as-yet-unknown hostname to the server authentication. I imagine I could create a principal for all possible hostnames and dynamically pick the right one at startup based on where the container lives, but that's kludgy.
Am I misunderstanding how Kerberos works? Is there a solution here that I don't see? I see multiple examples online of people running Storm in Docker, but I can't imagine that nobody's clusters are secure.
I don't know Apache Storm or Docker, but based on previous work with JBoss in a cluster, in which an inbound client could be connecting to any one of a number of different hosts, you would simply assign a virtual name to the entire pool at the load balancer and Kerberize the service according to the virtual name instead of the individual host names. So if you're making service foobar available to clients, you create a service principal name (SPN) of foobar/virtualhostname@REALM in your directory to Kerberize the service with. You assign that SPN to a user account (not a computer account) to give it the flexibility to work with any Kerberized service which uses that SPN. If you are using Active Directory, you must create a keytab with the SPN inside of it, and place the keytab on each host running the Kerberized service instance foobar/virtualhostname@REALM.
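As a rough sketch of that setup with MIT Kerberos (the realm, SPN, and keytab path are all hypothetical; with Active Directory you would use ktpass instead of kadmin):

import subprocess

REALM = 'EXAMPLE.COM'                      # hypothetical realm
SPN = 'foobar/virtualhostname@' + REALM    # SPN tied to the load balancer's virtual name

# Create the principal with a random key, then export it to a keytab
# that gets copied to every host running the service behind the virtual name.
subprocess.run(['kadmin.local', '-q', 'addprinc -randkey ' + SPN], check=True)
subprocess.run(['kadmin.local', '-q', 'ktadd -k /etc/foobar.keytab ' + SPN], check=True)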

Kubernetes API for provisioning pods-as-a-service?

Currently I have an app (myapp) that deploys as a Java web app running on top of a "raw" (Ubuntu) VM. In production there are essentially 5 - 10 VMs running at any given time, all load balanced behind an nginx load balancer. Each VM is managed by Chef, which injects the correct env vars and provides the app with runtime arguments that make sense for production. So again: load balancing via nginx and configuration via Chef.
I am now interested in containerizing my future workloads, and porting this app over to Docker/Kubernetes. I'm trying to see what features Kubernetes offers that could replace my app's dependency on nginx and Chef.
So my concerns:
Does kube-proxy (or any other Kubernetes tool) provide subdomains or otherwise load-balanced URLs that could load balance across any number of pod replicas? In other words, if I "push" my newly containerized app/image to the Kubernetes API, is there a way for Kubernetes to make the image available as, say, 10 pod replicas all load balanced behind myapp.example.com? If not, what integration between Kubernetes and networking software (DNS/DHCP) is available?
Does Kubernetes (say, perhaps via etcd?) offer any sort of key-value based configuration? It would be nice to send a command to the Kubernetes API and give it labels like myapp:nonprod or myapp:prod and have Kubernetes "inject" the correct KV pairs into the running containers. For instance, perhaps in the "nonprod" environment the app connects to a MySQL database named mydb-nonprod.example.com, but in prod it connects to an RDS cluster. Or something.
Does Kubernetes offer service registry-like features that could replace Consul/ZooKeeper?
Answers:
1) DNS subdomains in Kubernetes:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
Additionally, each Service load balancer gets a static IP address, so you can also program other DNS names if you want to target that IP address.
2) Key/Value pairs
At creation time you can inject arbitrary key/value environment variables and then use those in your scripts/config, e.g. you could connect to ${DB_HOST}.
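For instance, from inside the container (DB_HOST being a hypothetical variable name injected via the pod spec):

import os

# Read the injected environment variable; the fallback is only for local runs
db_host = os.environ.get('DB_HOST', 'localhost')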
Though for your concrete example, we suggest using Namespaces (http://kubernetes.io/v1.0/docs/admin/namespaces/README.html): you can have a "prod" namespace and a "dev" namespace, and the DNS names of services resolve within those namespaces (e.g. mysql.prod.cluster.internal and mysql.dev.cluster.internal).
3) Yes, this is what DNS and the Service object provide (http://kubernetes.io/v1.0/docs/user-guide/walkthrough/k8s201.html#services)
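As a sketch of using that as a service registry from Python (this assumes the official kubernetes client package and in-cluster credentials; the namespace name is hypothetical):

from kubernetes import client, config

# Use the pod's service account when running inside the cluster
config.load_incluster_config()
v1 = client.CoreV1Api()

# Each Service is a stable name plus a virtual IP fronting its pods
for svc in v1.list_namespaced_service('prod').items:
    print(svc.metadata.name, svc.spec.cluster_ip)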

Pod-to-pod connection using multiple ports

I have a Google Cloud Container Engine cluster with 2 pods, master and slave. Each of them runs a RabbitMQ instance, and the instances are supposed to be joined into one cluster.
Ports exposed from Docker aren't available from other machines and can be accessed only through a Service. That's not a problem: I could establish a service for each instance (one-to-one, service-to-pod) and point each pod at the opposite service IP.
The problem is that RabbitMQ uses more than one port for communication. That means the service IP should expose all of these ports from the underlying pod. But I cannot specify a list of ports for a Service, and if I create a new service for each port, each of them will have its own IP.
Is there any way to expose a list of ports from the same Docker container/pod on the same internal IP address using a Container Engine cluster? Maybe some special routing configuration?
Your question is similar to this question, and unfortunately it has the same answer: Kubernetes / Google Container Engine does not currently have a way to expose a range of ports for a service. There is an open issue on GitHub to address this use case.

noVNC connecting to VNCServer on private LAN using HTTPS only

Not sure if I'm really up to date, but I'm looking for a way to convert my existing project to use HTML5 WebSockets.
Here's my situation:
- Client runs a modified Java VNC applet with an extra parameter (CONNECT).
- A modified stunnel listens on the web server (which has both a public and a private IP), port 443.
- Client connects to 443 and sends (prior to RFB) an HTTP packet like:
'CONNECT 10.0.0.1:4001'
- stunnel opens a new stream to 10.0.0.1:4001 using an SSL wrapper.
- The VNC server (at 10.0.0.1:4001) responds, and the connection is established.
Now I want to get rid of the Java applet and switch to WebSockets using noVNC.
I want to be able to:
- Open a single port on the web server (preferably HTTPS).
- Have the client connect using HTML5 only (no more Java applet).
I cannot change:
- The VNC server will still be listening on the private LAN only.
- The VNC server will still listen on a bunch of ports, each corresponding to a virtual server.
Questions are:
- How do I give noVNC the notion of a target HOST:PORT?
- Is stunnel still usable? Or should I change to a WebSocket proxy?
If anyone has a starting point, I'd really appreciate it!
Disclaimer: I created noVNC so my answer may be heavily biased ;-)
I'll answer your second question first:
stunnel cannot be used directly by noVNC. The issue is that the WebSockets protocol has an HTTP-like initial handshake, and the messages are framed. In addition, until binary payload support is added to WebSockets, the payload is base64 encoded by the WebSockets proxy (websockify). Adding the necessary support to stunnel would be non-trivial but certainly doable. In fact, noVNC issue #37 is an aspirational feature to add this support to stunnel.
First question:
noVNC already has a concept of HOST:PORT via the RFB.connect(host, port, password) method. The file vnc_auto.html at the top level shows how to get noVNC to automatically connect on page load based on the host, port and password specified as URL query string parameters, e.g. vnc_auto.html?host=proxyhost&port=6080&password=secret (hypothetical values).
However, I think what you are really asking is how to get noVNC to connect to alternate VNC server ports on the backend. This problem is not directly addressed by noVNC and websockify. There are several ways to solve it, and it usually involves an out-of-band setup/authorization mechanism so that the proxy can't be used to launch attacks by arbitrary hosts. For example, at my company we have a web-based management framework that integrates noVNC, and when the user wants to connect to the console, an authenticated AJAX call is used to configure the proxy for that particular user and the system they want to connect to. Our web management interface is internal only.
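To make the moving parts concrete, here is a rough sketch of a single-target proxy using websockify's Python API (heavily hedged: the exact constructor arguments vary between websockify versions, and the target host/port would normally come from your authenticated setup step rather than being hardcoded):

from websockify import websocketproxy

# Bridge WebSocket clients on :6080 to one backend VNC server on the private LAN.
# Serving a certificate makes this wss://, so only one TLS port faces the client.
proxy = websocketproxy.WebSocketProxy(
    listen_host='0.0.0.0',
    listen_port=6080,
    target_host='10.0.0.1',       # chosen per session by the management layer
    target_port=4001,
    cert='/etc/ssl/self.pem',     # hypothetical path to a combined cert+key PEM
)
proxy.start_server()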
Ganeti Web Manager uses a similar model, and the source is available. They have a fork of VNCAuthProxy that has WebSockets support. They use a control channel from the web interface to VNCAuthProxy to set up a temporary password associated with a specific VNC server host:port.
Also, OpenStack (Nova) integrates noVNC and uses a similar out-of-band token-based model to allow access with their nova-vncproxy.
Some links:
Ganeti Web Manager
Wiki page about how noVNC works in Ganeti Web Manager
Ganeti Web Manager sources
Ganeti Web Manager VNCAuthProxy sources
Using noVNC in Nova/OpenStack
OpenStack fork of noVNC
Old nova-vnc-proxy code
Current nova vnc proxy code