BigCommerce's Stencil CLI and port selection

When running stencil start you'll see output such as:
-------------------------------------------------
[Browsersync] Proxying: http://localhost:3001
[Browsersync] Access URLs:
---------------------------------------
Local: http://localhost:3000
External: http://147.182.158.57:3000
---------------------------------------
UI: http://localhost:3002
UI External: http://localhost:3002
---------------------------------------
I know that I can specify the Local/External port in the config.stencil.json file. However, I don't seem to have any control over the proxying and UI ports, which leads to collisions.
Is there a way to specify these? I'd even settle for disabling browsersync and the UI stuff.

You can define the default start port using
stencil init … --port xxxx
However, it doesn't seem like you can change the proxy port.
Reference: https://developer.bigcommerce.com/stencil-docs/installing-stencil-cli/stencil-cli-options-and-commands#stencil-init
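If a fixed local port is all you need, it can also be set in config.stencil.json directly. A minimal sketch (key names here are from the Stencil CLI configs I have seen and may differ by version; as noted above, the Browsersync proxy and UI ports do not appear to be configurable this way):
{
  "normalStoreUrl": "https://your-store.mybigcommerce.com",
  "port": 4000
}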


How to make the netdata apache plugin work in a Plesk environment

I'm wondering what I can do to make netdata's apache plugin work on a Plesk server...
The graphs are empty and no data is displayed.
I've checked that apache mod_status is enabled and working...
You probably have Apache behind an nginx proxy, so Apache is not listening on the default port (80).
Run these commands:
cd /etc/netdata/
./edit-config go.d/apache.conf
Go to the bottom of the config file, where you will see:
jobs:
  - name: local
    url: http://localhost/server-status?auto
  - name: local
    url: http://127.0.0.1/server-status?auto
and change it to:
jobs:
  - name: local
    url: http://localhost:7080/server-status?auto
  - name: local
    url: http://127.0.0.1:7080/server-status?auto
(You can check which port your Apache is listening on using the netstat -pltn command.)
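For example, on a typical Plesk host nginx holds ports 80/443 and Apache sits behind it, usually on 7080 (HTTP) and 7081 (HTTPS); the exact output varies, but it looks roughly like:
netstat -pltn | grep -E 'apache2|httpd'
tcp   0   0 127.0.0.1:7080   0.0.0.0:*   LISTEN   1234/apache2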
Restart netdata and you will see the data.
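On a systemd-based system, that would typically be:
sudo systemctl restart netdata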
Custom logs
Plesk saves the logs in a special folder, so you probably want to change the default log paths.
Edit (or create) the file /etc/netdata/python.d/web_log.conf
Set this content:
nginx_log:
  name: 'nginx_log'
  path: '/var/www/vhosts/system/{yourdomain}/logs/proxy_access_ssl_log'

apache_log:
  name: 'apache_log'
  path: '/var/www/vhosts/system/{yourdomain}/logs/access_ssl_log'

Docker Swarm CE, Reverse-Proxy without shared config file on master nodes

I've been wrestling with this for several days now. I have a swarm with 9 nodes, 3 of them managers. I'm planning on deploying multiple testing environments to this swarm, using Docker Compose for each environment. We have many REST services in each environment, and I would like to manage access to them through a reverse proxy so that access to the services comes through a single port per environment. Ideally I would like it to behave something like this: http://dockerNode:9001/ServiceA and http://dockerNode:9001/ServiceB.
I have been trying Traefik, docker proxy, and HAProxy (I haven't tried NGINX yet). With all of these I have run into issues where I can't even get their examples to work, or they require me to drop a file on each manager node, or set up cloud storage of some sort.
I would like to have something that just works by dropping it into a docker-compose file, but I am also comfortable configuring all the mappings in the compose file (these are not dynamically changing environments where services come and go).
Is there a working example of this type of setup, or what should I be looking into?
If you want to access your service using the server IP and the service port, then you need to set up dnsrr endpoint mode to bypass Docker Swarm's routing mesh. Here is a YAML example showing how to do it:
version: "3.3"
services:
  alpine:
    image: alpine
    ports:
      - target: 9100
        published: 9100
        protocol: tcp
        mode: host
    deploy:
      endpoint_mode: dnsrr
      placement:
        constraints:
          - node.labels.host == node1
Note the endpoint_mode: dnsrr configuration and the way the port has been defined. Also note the placement constraint, which allows the service to be scheduled only on the node with the label host == node1. Thus, you can now access your service using node1's IP address and port 9100. With respect to the ServiceA URI, just add it to the path.
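If you do try NGINX for the path-based part (http://dockerNode:9001/ServiceA), a minimal static config could look like the sketch below; the upstream names servicea/serviceb and their port 8080 are placeholders for your actual compose service names:
# default.conf, mounted into an nginx service on the same overlay network
server {
    listen 9001;
    location /ServiceA/ {
        # the trailing slash on proxy_pass strips the /ServiceA/ prefix
        proxy_pass http://servicea:8080/;
    }
    location /ServiceB/ {
        proxy_pass http://serviceb:8080/;
    }
}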

Selenium Firefox driver forces HTTPS

I have a functional app running in a Docker container on port 3000. I have Selenium tests that work when I set my host to http://localhost:3000. I created a container to launch the Selenium tests, and it fails with the following error:
WebDriverError: Reached error page: about:neterror?e=nssFailure2&u=https://app:3000/&c=UTF-8&f=regular&d=An error occurred during a connection to app:3000.
SSL received a record that exceeded the maximum permissible length.
Error code: SSL_ERROR_RX_RECORD_TOO_LONG
A snippet of my docker-compose.yml:
app:
  build:
    context: .
    dockerfile: Dockerfile.dev
  volumes:
    - ./:/usr/src/app/
  ports:
    - "3000:3000"
    - "3001:3001"
  networks:
    tests:
selenium-tester:
  build:
    context: .
    dockerfile: Dockerfile.selenium.tests
  volumes:
    - ./:/usr/src/app/
    - /dev/shm:/dev/shm
  depends_on:
    - app
  networks:
    tests:
I replaced the host with http://app:3000, but Firefox seems to want to redirect this HTTP request to HTTPS (which is not working). Finally, I build my driver like this:
const { Builder } = require('selenium-webdriver');
const firefox = require('selenium-webdriver/firefox');

const ffoptions = new firefox.Options()
  .headless()
  .setPreference('browser.urlbar.autoFill', 'false'); // test to disable auto https redirect… not working obviously

const driver = new Builder()
  .setFirefoxOptions(ffoptions)
  .forBrowser('firefox')
  .build();
When manually contacting http://app:3000 using curl inside the selenium-tester container, it works as expected; I get my homepage.
I'm short on ideas now, and even decomposing my problem to write this question didn't give me new ones.
I had exactly the same problem - I couldn't successfully make an HTTP request to app from Selenium-controlled browsers (Chrome or Firefox) in another Docker container on the same network. cURL from that container, though, worked fine! Connecting over HTTP, but something seemed to be forcing HTTPS. An identical situation, right down to the name of the container, "app".
The answer is... it's the name of the container!
"app" is a top-level domain on the HSTS preload list - that is, browsers will force access through HTTPS.
The fix is to use a container name that isn't on the HSTS preload list.
HSTS - more reading
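For example, in the compose file above it should be enough to rename the service and point the tests at the new name (web is an arbitrary name that is not a preloaded TLD):
web:
  build:
    context: .
    dockerfile: Dockerfile.dev
  # ...rest unchanged; the tests then target http://web:3000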
As you mentioned, manually contacting http://app:3000 using curl inside the selenium-tester container works as expected.
This error message...
WebDriverError: Reached error page: about:neterror?e=nssFailure2&u=https://app:3000/&c=UTF-8&f=regular&d=An error occurred during a connection to app:3000.
SSL received a record that exceeded the maximum permissible length.
Error code: SSL_ERROR_RX_RECORD_TOO_LONG
...implies that the SSL layer in curl or one of its dependencies seems broken.
@RussellFulton mentioned in this discussion:
This seems to be the result you see from Firefox when the server is not configured properly for SSL. Possibly Chrome would have just given a generic SSL-failed error.
This can happen when the browser sends an SSL handshake while the server is expecting an HTTP request. The server responds with a 400 code and an error message that is much bigger than the handshake message the browser expects. Hence you see the message.
Reasons and solutions:
- The error-prone code tries to redirect to HTTPS on port 80 (port 3000 in your case).
Solution: remove the port 80 (port 3000 in your case) from the URL; the redirect then works. HTTPS by default runs over port 443.
- This error also occurs when you have not enabled the SSL module.
Solution: run
a2enmod ssl
//or
a2ensite default-ssl
- A wrong IP was provided in the SSL config.
Solution: change the IP to what it should be.
- Remove the IP from the SSL config if it is not needed.
Solution: change
<VirtualHost your.domain.com:443>
//to
<VirtualHost _default_:443>
The curl: (35) SSL received a record that exceeded the maximum permissible length issue was discussed at length.
As per Curl: Support HTTPS proxy and SOCKS+HTTP(s), there was another attempt to get HTTPS proxy support into curl.
This curl commit should have addressed your issue.

Cannot link an HTTP Load Balancer to a backend (502 Bad Gateway)

On the backend I have a Kubernetes node exposing a service on port 32656 (a Kubernetes Service of type NodePort). If I create a firewall rule for <node_ip>:32656 to allow traffic, I can open the backend in the browser at this address: http://<node_ip>:32656.
What I am trying to achieve now is to create an HTTP Load Balancer and link it to the above backend. I use the following script to create the required infrastructure:
#!/bin/bash
GROUP_NAME="gke-service-cluster-61155cae-group"
HEALTH_CHECK_NAME="test-health-check"
BACKEND_SERVICE_NAME="test-backend-service"
URL_MAP_NAME="test-url-map"
TARGET_PROXY_NAME="test-target-proxy"
GLOBAL_FORWARDING_RULE_NAME="test-global-rule"
NODE_PORT="32656"
PORT_NAME="http"
# instance group named ports
gcloud compute instance-groups set-named-ports "$GROUP_NAME" --named-ports "$PORT_NAME:$NODE_PORT"
# health check
gcloud compute http-health-checks create --format none "$HEALTH_CHECK_NAME" --check-interval "5m" --healthy-threshold "1" --timeout "5m" --unhealthy-threshold "10"
# backend service
gcloud compute backend-services create "$BACKEND_SERVICE_NAME" --http-health-check "$HEALTH_CHECK_NAME" --port-name "$PORT_NAME" --timeout "30"
gcloud compute backend-services add-backend "$BACKEND_SERVICE_NAME" --instance-group "$GROUP_NAME" --balancing-mode "UTILIZATION" --capacity-scaler "1" --max-utilization "1"
# URL map
gcloud compute url-maps create "$URL_MAP_NAME" --default-service "$BACKEND_SERVICE_NAME"
# target proxy
gcloud compute target-http-proxies create "$TARGET_PROXY_NAME" --url-map "$URL_MAP_NAME"
# global forwarding rule
gcloud compute forwarding-rules create "$GLOBAL_FORWARDING_RULE_NAME" --global --ip-protocol "TCP" --ports "80" --target-http-proxy "$TARGET_PROXY_NAME"
But I get the following response from the Load Balancer accessed through the public IP in the Frontend configuration:
Error: Server Error
The server encountered a temporary error and could not complete your
request. Please try again in 30 seconds.
The health check is left at its default values (/ and 80), and the backend service responds quickly with a status 200.
I have also created a firewall rule to accept any source and all TCP ports, with no target specified (i.e. all targets).
Considering that I get the same result (Server Error) regardless of the port I choose (in the instance group), the problem should be somewhere in the configuration of the HTTP Load Balancer (something with the health checks, maybe?).
What am I missing to complete the link between the frontend and the backend?
I assume you actually have instances in the instance group, and the firewall rule is not specific to a source range. Can you check your logs for a Google health check? (The User-Agent will have Google in it.)
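If the health check does turn out to be probing the wrong port, you can repoint it at the node port. A sketch reusing the names from the script above (verify the flags against your gcloud version):
gcloud compute http-health-checks update test-health-check --port 32656 --request-path "/"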
What version of Kubernetes are you running? FYI, there's a resource in 1.2 that hooks this up for you automatically: http://kubernetes.io/docs/user-guide/ingress/; just make sure you do these: https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md.
More specifically: in 1.2 you need to create a firewall rule, a Service of type NodePort (both of which you already seem to have), and a health check on that Service at "/" (which you don't have; this requirement is alleviated in 1.3, but 1.3 is not out yet).
Also note that you can't put the same instance into 2 load-balanced instance groups, so to use the Ingress mentioned above you will have to clean up your existing load balancer (or at least remove the instances from the instance group and free up enough quota so the Ingress controller can do its thing).
There can be a few things wrong among those mentioned:
- the firewall rules need to be open to all hosts, and they need to have the same network tag as the machines in the instance group;
- by default, the node should return 200 at / - configuring readiness and liveness probes to do otherwise did not work for me.
It seems you are trying to do things by hand that are all automated, so I can really recommend:
https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress
This shows the steps that do the firewall and port forwarding for you, which may also show you what you are missing.
I noticed myself, when using an app on 8080 exposed on 80 (like one of the deployments in the example), that the load balancer stayed unhealthy until I had / returning 200 (and a /healthz endpoint that I added, too). So basically that container now exposes a webserver on port 8080 returning 200 there, and the other config wires it up to port 80; see the sketch below.
When it comes to firewall rules, make sure they apply to all machines or that the network tag matches, or they won't work.
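A minimal sketch of such a health endpoint (Node/Express is an arbitrary choice here; any webserver that returns 200 on these paths will do):
const express = require('express');
const app = express();

// the GCE HTTP health check probes "/" by default; /healthz is a common extra
app.get('/', (req, res) => res.sendStatus(200));
app.get('/healthz', (req, res) => res.sendStatus(200));

app.listen(8080);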
The 502 error usually comes from the load balancer, which will not pass your request on if the health check does not pass.
Could you make your Service type LoadBalancer (http://kubernetes.io/docs/user-guide/services/#type-loadbalancer), which would set this all up automatically? This assumes you have the flag set for Google Cloud.
After you deploy, describe the service by name and it should give you the endpoint that is assigned.
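For example (my-app is a placeholder for your deployment name):
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=8080
kubectl describe service my-app    # the assigned external IP appears as "LoadBalancer Ingress"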

Browser sync, gulp, mongodb and express server

Trying to put together a project running an express server with gulp, browsersync, nodemon and mongodb. However, I get an Error: listen EADDRINUSE when I add browsersync. Any idea how to fix this?
This means you already have a program listening on the port you are trying to use. What port are you running your application on? Is it 3000? If so, stop all other running programs that are using that port and you'll be good to go.
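To find out what is holding the port (standard Linux/macOS tooling):
lsof -i :3000    # lists the PID of the process bound to port 3000
kill <PID>       # then stop it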
Are you defining the port to use in the browser-sync configuration?
In that case, the port needs to be different from the one (if any) used by the proxied server. This works in my setup:
var gulp = require('gulp');
var browserSync = require('browser-sync');

gulp.task('browser-sync', ['nodemon'], function() {
  browserSync.init(null, {
    proxy: "http://localhost:3000",  // the express app
    browser: ['google chrome'],
    port: 4000                       // browser-sync's own port, must differ from 3000
  });
});
As a reference, the full gulpfile.js (which uses nodemon and browsersync) is here.