In Cloud Foundry, how do I create a service to run my Apache web server?

I'm on Ubuntu 18, running the following version of Cloud Foundry ...
$ cf -v
cf version 7.4.0+e55633fed.2021-11-15
I would like to set up several containers, each running off a Docker image. The first is an Apache web server. I have the following Dockerfile
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
COPY ./my-vhosts.conf /usr/local/apache2/conf/extra/httpd-vhosts.conf
COPY ./directory /usr/local/apache2/htdocs/directory
How do I set this up in Cloud Foundry? I tried creating a service but got these errors
$ cf cups apache-service -p "localhost, 80"
FAILED
No API endpoint set. Use 'cf login' or 'cf api' to target an endpoint.
When I tried to create this API endpoint I got
$ cf api "http://my_ip_address"
Setting API endpoint to http://my_ip_address...
Request error: Get "http://my_ip_address": dial tcp my_ip_address:80: connect: connection refused
TIP: If you are behind a firewall and require an HTTP proxy, verify the https_proxy environment variable is correctly set. Else, check your network connection.
I'm thinking I'm missing something rather substantial but don't know what the right questions to ask are.

The error message you are seeing (dial tcp my_ip_address:80: connect: connection refused) means that the CF API at the address you passed to cf api is not responding.
Ensure that your Cloud Foundry API endpoint is still active and that no firewall is preventing you from accessing the API (the port is open, the process is running, and the firewall allows traffic from your IP, if applicable).
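Note also that cf cups (create-user-provided-service) only registers service credentials; it does not run a container. Once you are logged in to a real Cloud Foundry API endpoint (the platform's api.<system-domain> URL, not your own machine), the usual way to run a Docker image is to push it as an app. A rough sketch, assuming Docker support is enabled on the platform; apache-app and myregistry/my-httpd:latest are placeholder names, not values from your setup:
$ cf api https://api.your-cf-domain.example   # the platform's API URL
$ cf login
$ cf enable-feature-flag diego_docker         # admin-only; may already be enabled
$ cf push apache-app --docker-image myregistry/my-httpd:latest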

Related

Cannot access the application via node ip and node port

I have to deploy an application via Helm by supplying a VM IP address and node port. It's a bare-metal Kubernetes cluster. The cluster has an ingress controller installed (as NodePort; this value is supplied in the Helm command). The problem is that I receive a 404 Not Found error if I access the application as:
curl http://{NODE_IP}:{nodeport}/path
There is no firewall, and I have an "allow all ingress traffic" policy, but I'm not sure what is wrong. I have now tried everything possible but cannot find the root cause.
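Not an answer, but a quick way to narrow a 404 like this down is to check whether the ingress controller itself is answering on the NodePort and whether an Ingress rule actually matches the path and host being requested. A debugging sketch; the ingress-nginx namespace and myapp.example.com host are placeholders for whatever your cluster actually uses:
$ kubectl get svc -n ingress-nginx      # confirm the controller Service exposes the NodePort passed to Helm
$ kubectl get ingress -A                # confirm an Ingress rule exists for /path and check its host field
$ curl -v -H "Host: myapp.example.com" http://NODE_IP:NODEPORT/path   # a 404 from the controller often means no rule matched the Host header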

Connect a Jaeger agent to a collector running in Openshift

I am having problems pointing a Jaeger agent to a collector running in OpenShift.
I am able to browse my OCP collector endpoint doing this:
https://mycollectoropenshift.com:443
My jaeger agent Dockerfile currently looks like this
FROM centos:latest
EXPOSE 5775/udp 6831/udp 6832/udp 5778
COPY agent-linux /go/bin/
#CMD ["--collector.host-port=localhost:14267"]
#CMD ["--collector.host-port=https://mycollectoropenshift.com:443"]
CMD ["--collector.host-port=mycollectoropenshift.com:443"]
ENTRYPOINT ["/go/bin/agent-linux"]
I get the expected result when I point my agent at a collector running locally, per the first commented line.
I get the following error using the second commented CMD line (the one with https://):
"error":"dial tcp: address https://mycollectoropenshift.com:443: too many colons in address"
When I point the agent at the collector running on OpenShift using the uncommented CMD line, I get the error below
Failed to run the agent: listen tcp 10.100.120.221:443: bind: cannot assign requested address
I am able to successfully curl the collector endpoint by doing this
curl https://mycollectoropenshift.com:443
I get the following error when I attempt to curl the endpoint this way:
curl mycollectoropenshift.com:443
curl: (52) Empty reply from server
I need help setting up a proper --collector.host-port flag that will connect to a collector running remotely behind an HTTPS protocol.
I don't think it's possible at the moment, and you should definitely ask for this feature on the mailing list, on Gitter, or as a GitHub issue. The current assumption is that a cleartext TChannel connection can be made between the agent and collector(s), all being part of the same trusted network.
If you are using the Java, Node, or C# client, my recommendation in your situation is to have your Jaeger client talk directly to the collector. Look for the JAEGER_ENDPOINT environment variable on the Client Features documentation page.
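For example, with one of those clients you can point the client at the collector's HTTP endpoint and skip the agent entirely. A sketch, assuming your OpenShift route on port 443 fronts the collector's HTTP ingest path (commonly /api/traces; verify the actual path and port for your deployment), and using a placeholder application:
$ export JAEGER_ENDPOINT=https://mycollectoropenshift.com:443/api/traces
$ export JAEGER_SAMPLER_TYPE=const
$ export JAEGER_SAMPLER_PARAM=1
$ java -jar my-instrumented-app.jar    # placeholder for your instrumented application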

How to CURL an instance of OpenStack from another Instance

I have devstack running on my machine and created an Alpine Linux instance which runs a Rails API (IP 10.0.0.6) on port 3000 (I also tried 80 and 8080). Then I created a simple CirrOS client instance (IP 10.0.0.4) to access the /test endpoint of the API. However, I find that I can run:
ping 10.0.0.6
from the CirrOS instance and get packets back in response. However, when I try:
curl -XGET http://10.0.0.6:3000/test
I receive the error:
curl: (7) couldn't connect to host
The two instances belong to the same private network, and the security group policy allows ingress and egress for any kind of protocol.
The /test endpoint works locally on the API instance.
I also tested that I'm able to make an ssh connection from one instance to another.
What configuration could I be missing? Thanks!
Found the solution.
It wasn't a misconfiguration on the OpenStack side.
I needed to run Rails with the flag -b 0.0.0.0 to accept connections from any IP. By default, Rails only binds to localhost.
rails s -b 0.0.0.0
You could also try telnetting to the particular port the server is running on, to work out whether it's a networking issue or some other configuration issue.
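For instance, from the CirrOS instance (before the -b 0.0.0.0 fix this would fail even though ping succeeds):
$ telnet 10.0.0.6 3000    # "Connected" means the port is reachable; "Connection refused" points at the service binding, not the network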

Restcomm Olympus WebRTC WSS error

We are trying to use Restcomm Olympus with a few customizations as part of our application. The main customization is that we have deployed the Olympus WAR on our Apache Tomcat web server, and the outbound proxy is properly pointed to the same server where Restcomm is running.
So far all is good, but recently we ran into the getUserMedia() deprecation on insecure origins introduced by the Chromium fix.
That means we need to use HTTPS and WSS. I can see that just around 7 days back the Olympus code was updated on GitHub to use WSS if HTTPS is used in the browser location bar.
So first we installed a self-signed cert and enabled the SSL config on Tomcat so that our customized Olympus UI is accessed via HTTPS from Tomcat. Then we used the WSS protocol to connect to the outbound proxy, but we got the error below:
"WebSocket connection to 'wss:/:5082/' failed: Error in connection establishment: net::ERR_TIMED_OUT
WSMessageChannel:createWebSocket(): websocket connection has failed:[object Event]"
Then we thought that, in addition to Tomcat (where the WAR is deployed), we needed to install a self-signed cert and SSL config on Restcomm as well. So we did, following http://docs.telestax.com/restcomm-enable-https-secure-connector-on-jboss-as-7-or-eap-6/, and again used the WSS protocol.
But this time we also got an error, though with a different error code:
"WebSocket connection to 'wss:/:5083/' failed: Error in connection establishment: net::ERR_CONNECTION_CLOSED
WSMessageChannel:createWebSocket(): websocket connection has failed:[object Event]"
Can I ask the forums to explain if we are missing anything here?
Thanks in advance
I would suggest using the mobicents RestComm Docker image instead of the zip bundle, because with the Docker image all settings are handled automatically and https/wss should work out of the box. Here are some quick steps to get you started:
Install Docker on your Ubuntu machine if it's not already there
Download the RestComm Docker image:
$ docker pull mobicents/restcomm:latest
Start the Docker image:
$ docker run -e SECURE="true" -e SSL_MODE="allowall" -e USE_STANDARD_PORTS="true" -e VOICERSS_KEY="VOICERSS_KEY_HERE" --name=restcomm -d -p 80:80 -p 443:443 -p 9990:9990 -p 5060:5060 -p 5061:5061 -p 5062:5062 -p 5063:5063 -p 5060:5060/udp -p 65000-65535:65000-65535/udp mobicents/restcomm:latest
Now you should be able to reach your RestComm instance Admin UI at:
https://<host ip address>/
Make sure that you don't have any servers running on your host at the ports used by the Docker container above, or you'll have to use different ports (please refer to the Docker Hub page for such options).
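Once the container is up, a quick sanity check of the HTTPS side (just a suggestion; -k is needed because the bundled certificate is self-signed):
$ curl -vk https://<host ip address>/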
Best regards,
Antonis Tsakiridis

HTTP access on GCE instance after firewall rule added

I'm trying to get Apache working on a GCE instance.
Following GCE's Quickstart guide, I did the following:
Created instance "my-instance" in "my-project" (CentOS image)
Installed httpd, verified it's running
Added the following firewall rule:
gcutil addfirewall http2 --description="Incoming http allowed." --allowed="tcp:http"
and did the same for HTTPS and ICMP
Verified through gce gui that these rules were added to default network
I can ping my instance's IP address, but I can't get an HTTP response. I've tried through the browser and from a curl command; no dice. It works fine on localhost, so I know Apache is returning the index.html page.
When I use curl from a remote host, the error is:
curl: (7) Failed connect to (instance ip addr):80; Connection refused
Thoughts?
I did some experiments to replicate this. In short, I believe HTTP port 80 may be blocked by iptables firewall rules on the local CentOS instance. This appears to be the default behavior.
I have a GCE firewall rule set up to allow port 80 traffic to all instances. I created a CentOS-based instance via the Cloud Console (which is indeed using the v1 API), logged in via SSH, and started a web server on port 80. I was not able to hit the web server from my laptop. However, I was also not able to hit it from another instance in my project. This led me to suspect a firewall local to the instance rather than Compute Engine's firewall.
I ran this command (which drops the default reject of all ports, for testing only; this is unsafe on machines directly exposed to the internet):
$ sudo iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
After running that, I was able to hit my web server from both another instance and my laptop. Note that this change is lost after restarting the instance. I don't know the correct procedure for changing the default firewall rules on CentOS.
Please try a similar experiment on your instances, especially try to hit the web server from another Compute Engine instance, since service level firewalls do not block traffic between instances on the same network.
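If you'd rather open just port 80 than drop the whole REJECT rule, here is a sketch for a CentOS 6-era image; this assumes the classic iptables service is in use (newer images use firewalld instead, so adjust accordingly):
$ sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT   # insert the ACCEPT rule above the default REJECT rule
$ sudo service iptables save                           # persist across reboots (writes /etc/sysconfig/iptables)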