Cloudflare Worker environments create separate services with one environment each, not one service with two environments

I'm confused about the intended function of Cloudflare's Worker Environments.
In the Cloudflare dashboard, a worker has an environment dropdown, which defaults to "Production". I thought that by leveraging the environments in my wrangler file I would have a single worker with multiple environments. What ended up happening instead is that I have two workers, each with the environment name appended (my-worker-dev and my-worker-prod), and each of those workers has only one environment (the Production environment).
I'm not sure if I'm doing something wrong or just misunderstanding the intended behavior.
Can someone help me understand the difference between wrangler environments, which apparently just deploy the worker under a different name, and the "Environment" dropdown within a single worker/service?
My wrangler.toml file:
name = "my-worker"
type = "javascript"
account_id = "<redacted>"
workers_dev = true
compatibility_date = "2021-12-10"
[env.dev]
vars = { ENVIRONMENT = "dev" }
kv_namespaces = [
{ binding = "TASKS", id = "<redacted>", preview_id = "<redacted>" }
]
[env.prod]
vars = { ENVIRONMENT = "prod" }
kv_namespaces = [
{ binding = "TASKS", id = "<redacted>", preview_id = "<redacted>" },
]
[build]
command = "npm install && npm run build"
[build.upload]
format = "modules"
dir = "dist"
main = "./worker.mjs"

I think there is currently some disconnect / confusion between the meaning of "environments" as defined in the new Dashboard functionality, and the pre-existing wrangler "environment" support.
For the Dashboard / Web UI, you define a "Service" which has multiple workers grouped under it (one per environment). This allows "promoting" a worker from one environment to another (essentially copying the script, but having separate variables and routes).
There is separate documentation for this functionality - https://developers.cloudflare.com/workers/learning/using-services#service-environments
Wrangler "environments", as you've seen, work differently. Simply creating one top-level "production" worker / service (named for the environment). The good news is (according to the docs above) it sounds like Cloudflare will be updating wrangler to support the new Dashboard type environments:
As of January 2022, the dashboard is the only way to interact with Workers Service environments. Support in Wrangler is coming in v2.1
https://github.com/cloudflare/wrangler2/issues/27
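To make the wrangler (v1) behaviour concrete, this is roughly what deploying the wrangler.toml above does; the resulting worker names are exactly the ones you observed:
# deploys a separate worker named "my-worker-dev", using the [env.dev] vars and bindings
wrangler publish --env dev
# deploys another separate worker named "my-worker-prod", using the [env.prod] vars and bindings
wrangler publish --env prod
# deploys the top-level "my-worker", using only the top-level configuration
wrangler publish
Each of those shows up in the dashboard as its own service, containing just the default "Production" environment.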

Related

Multi-device management in GitLab CI/CD pipelines

There are several embedded Linux devices connected over Ethernet. The goal is to install a software image, generated by a previous pipeline, on one of the available devices. After the image has been uploaded successfully, a set of Python tests shall be executed.
For managing the different IP addresses I have used the "one runner, multiple workers" concept. For each worker I have defined a variable DUT_IP which points to a device. For a single job in the pipeline this works as intended.
concurrent = 2
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "Laboratory Tests runner #2"
  url = "https://gitlab-ee.dev.ifm/"
  token = "XXXXX"
  executor = "docker"
  environment = ["DUT_IP=192.168.0.70"]

[[runners]]
  name = "Laboratory Tests runner #1"
  url = "https://gitlab-ee.dev.ifm/"
  token = "XXXXXX"
  executor = "docker"
  environment = ["DUT_IP=192.168.0.71"]
By using Resource groups, it can be prevented that multiple jobs are executed on the same device at the same time. From my understanding, however, this will not ensure that jobs are executed in a specific order. What I need to ensure is that for each pipeline the following steps are performed in order:
Firmware update
Python Tests
Test result upload
Of course I could put everything into a single job, but that job would become quite big. Are there other, more general solutions, maybe with dynamic environments? Unfortunately I was not able to find a good blueprint for this kind of setup online.
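To make the requirement concrete, here is a minimal, hypothetical .gitlab-ci.yml sketch: stages enforce the firmware → test → upload order within a pipeline, and a shared resource_group serializes pipelines against one device. The tag, script and file names are assumptions rather than part of the setup above, and pinning all three jobs to the same DUT still requires per-device runner tags (or passing DUT_IP between jobs):
stages:
  - firmware
  - test
  - upload

firmware_update:
  stage: firmware
  tags: [dut-1]                   # assumption: the runner with DUT_IP=192.168.0.70 is tagged dut-1
  resource_group: dut-1           # only one pipeline at a time may use this device
  script:
    - ./flash_image.sh "$DUT_IP"  # hypothetical flashing script

python_tests:
  stage: test
  tags: [dut-1]
  resource_group: dut-1
  needs: [firmware_update]
  script:
    - pytest tests/ --junitxml=results/report.xml
  artifacts:
    paths:
      - results/

upload_results:
  stage: upload
  tags: [dut-1]
  resource_group: dut-1
  needs: [python_tests]
  script:
    - ./upload_results.sh results/  # hypothetical upload script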

Frontend no longer accessible after dependency updates

We have a rather standard web app that consists of a Flask backend and a Vue.js frontend. In production, we use uWSGI to serve that application; it is configured to serve the frontend pages and to pass backend calls through to their respective routes.
[uwsgi]
module = app
callable = create_app()
buffer-size=65535
limit-post=0
wsgi-disable-file-wrapper=true
check-static=./public
# enable threads for sentry
enable-threads = true
# dont show errors if the client disconnected
ignore-sigpipe=true
ignore-write-errors=true
disable-write-exception=true
; redirect all frontend requests that are not static files to the index
route-host = ^$(FRONTEND_HOST_NAME)$ goto:frontend
; also handle if the host name is frontend, for the dokku checks
route-host = ^frontend$ goto:frontend
; continue if its a backend call
route-host = ^$(BACKEND_HOST_NAME)$ last:
route-host = ^backend$ last:
; log and abort if none match
route-run = log:Host Name "${HTTP_HOST}" is neither "$(FRONTEND_HOST_NAME)" nor "$(BACKEND_HOST_NAME)"
route-run = break:500
route-label = frontend
route-if = isfile:/app/src/backend/public${PATH_INFO} static:/app/src/backend/public${PATH_INFO}
route-run = static:/app/src/backend/public/index.html
This worked perfectly fine and behaved just like our dev setup, where we use containers for both front- and backend. But after the update of some vulnerable dependencies, trying to access the frontend results in a 404.
In the frontend we moved from vue-cli ~4.5.9 to ~5.0.4. We long suspected that this might be the main issue, but we're not so sure about that anymore.
We also upgraded from Flask ~1.1 to ^2.0.3, but we kept uWSGI at version 2.0, so its configuration should not have needed to change.
We're groping in the dark on this one. Does anyone have an idea of what might be going wrong here?
I tried to isolate the problem by creating a rather small setup, but have not been able to track down the underlying issue so far.
I have no idea what exactly it was, but in the end I upgraded each dependency one by one until all of them were upgraded and things still worked. It must have been something related to the Dockerfile we use: the one we ended up with is closer to the old one than to the Dockerfile I had been using before going through the dependencies one by one.

Mount an individual file in Azure Container Instances?

I'm attempting to mount a single file in an Azure Container Instance, in this case the SSH host key file as described in this Docker image: https://github.com/atmoz/sftp
However, from my experiments, Azure Container Instances via ARM / Azure CLI seem to support mounting folders only.
If I attempt to mount it as a file, I suspect it is actually mounted as a folder: the image's built-in bash script appears to miss the fact that the key already exists, and then errors when it tries to write to it.
Are there any undocumented features to mount individual files? I'm hoping I don't need to resort to customising the Docker image, as that would defeat my objective of using a ready-made image. :-(
You can mount files using Key Vault. If you are deploying your ACI container group using an ARM template, you can integrate it with an instance of Azure Key Vault. It is possible to mount a key vault "secret" as a single file within a directory of your choosing. Refer to the ACI ARM template reference for more details.
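As a rough illustration of that approach (the names, mount path and parameter are hypothetical, and the parameter would typically be a secure string resolved from a Key Vault secret at deployment time), the relevant fragment of a Microsoft.ContainerInstance/containerGroups resource could look like this; note that ACI expects the secret value to be base64-encoded:
"volumes": [
  {
    "name": "host-key",
    "secret": {
      "ssh_host_ed25519_key": "[parameters('sshHostKeyBase64')]"
    }
  }
],
"containers": [
  {
    "name": "sftp",
    "properties": {
      "volumeMounts": [
        { "name": "host-key", "mountPath": "/etc/ssh/host-keys", "readOnly": true }
      ]
    }
  }
]
The secret volume is materialized as one file per key (here /etc/ssh/host-keys/ssh_host_ed25519_key), which is what makes a single-file mount possible.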
You can do it via Azure Container Instance secrets.
Either via the Azure CLI:
az container create \
  --resource-group myResourceGroup \
  --name secret-volume-demo \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --secrets id_rsa.pub="<file-content>" \
  --secrets-mount-path /home/foo/.ssh/keys
or with Terraform:
resource "azurerm_container_group" "aci_container" {
name = ""
resource_group_name = ""
location = ""
ip_address_type = "public"
dns_name_label = "dns_endpoint"
os_type = "Linux"
container {
name = "sftp"
image = "docker.io/atmoz/sftp:alpine-3.7"
cpu = "1"
memory = "0.5"
ports {
port = 22
protocol = "TCP"
}
// option 1: mount key as Azure Container Instances secret volume
volume {
name = "user-pub-key"
mount_path = "/home/foo/.ssh/keys"
secret = {
"id_rsa.pub" = base64encode("<public-key-content>")
}
}
// option 2: mount ssh public key as Azure File share volume
// Note: This option will work for user keys to auth, but not for the host keys
// since atmoz/sftp logic is to change files permission,
// but Azure File share does not support this POSIX feature
volume {
name = "user-pub-key"
mount_path = "/home/foo/.ssh/keys"
read_only = true
share_name = "share-name"
storage_account_name = "storage-account-name"
storage_account_key = "storage-account-key"
}
}
In both cases, you will have a file /home/foo/.ssh/keys/id_rsa.pub with the given content.

Stackdriver Node.js Logging not showing up

I have a Node.js application, running inside of a Docker container and logging events using Stackdriver.
It is a Node.js app running Express.js, and it uses Winston for logging with a Stackdriver transport.
When I run this container locally, everything is logged correctly and shows up in the Cloud console. When I run this same container, with the same environment variables, in a GCE VM, the logs don't show up.
What do you mean exactly by "locally"? Are you running the container in Cloud Shell, or on an instance? Keep in mind that if you create a container or instance that has to do something requiring privileges (like using the Stackdriver logging client library), it won't work unless the instance has a service account with the required role/privileges set up.
You mentioned that you use the same environment variables; I take it that one of the env vars points to your JSON key file. Is the key file present at that path on the instance?
From the Winston documentation, it looks like you need to specify the key file location for the service account:
const winston = require('winston');
const Stackdriver = require('@google-cloud/logging-winston');

winston.add(Stackdriver, {
  projectId: 'your-project-id',
  keyFilename: '/path/to/keyfile.json'
});
Have you checked that this is configured with the key of a service account that has a logging role?
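For what it's worth, with current versions of winston (3.x) and @google-cloud/logging-winston the equivalent setup looks roughly like this; the project ID and key path are placeholders, and keyFilename can be omitted if the VM's service account already has the Logs Writer role:
const winston = require('winston');
const {LoggingWinston} = require('@google-cloud/logging-winston');

// Transport that ships log entries to Cloud Logging (Stackdriver);
// without keyFilename it falls back to Application Default Credentials.
const loggingWinston = new LoggingWinston({
  projectId: 'your-project-id',
  keyFilename: '/path/to/keyfile.json'
});

const logger = winston.createLogger({
  level: 'info',
  transports: [new winston.transports.Console(), loggingWinston]
});

logger.info('hello from the container');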

Git - Push to Deploy and Removing Dev Config

So I'm writing a Facebook App using Rails, and hosted on Heroku.
On Heroku, you deploy by pushing your repo to the server.
When I do this, I'd like it to automatically change a few dev settings (facebook secret, for example) to production settings.
What's the best way to do this? Git hook?
There are a couple of common practices to handle this situation if you don't want to use Git hooks or other methods to modify the actual code upon deploy.
Environment Based Configuration
If you don't mind having the production values of your configuration settings in your repository, you can make them environment based. I sometimes use something like this:
# config/application.yml
default:
  facebook:
    app_id: app_id_for_dev_and_test
    app_secret: app_secret_for_dev_and_test
    api_key: api_key_for_dev_and_test

production:
  facebook:
    app_id: app_id_for_production
    app_secret: app_secret_for_production
    api_key: api_key_for_production
# config/initializers/app_config.rb
require 'yaml'

yaml_data = YAML::load(ERB.new(IO.read(File.join(Rails.root, 'config', 'application.yml'))).result)
config = yaml_data["default"]
begin
  config.merge! yaml_data[Rails.env]
rescue TypeError
  # nothing specified for this environment; do nothing
end

APP_CONFIG = HashWithIndifferentAccess.new(config)
Now you can access the data via, for instance, APP_CONFIG[:facebook][:app_id], and the value will automatically be different based on which environment the application was booted in.
Environment Variables Based Configuration
Another option is to specify production data via environment variables. Heroku allows you to do this via config vars.
Set up your code to use a value based on the environment (maybe with optional defaults):
facebook_app_id = ENV['FB_APP_ID'] || 'some default value'
Create the production config var on Heroku by typing on a console:
heroku config:add FB_APP_ID=the_fb_app_id_to_use
Now ENV['FB_APP_ID'] is the_fb_app_id_to_use on production (Heroku), and 'some default value' in development and test.
The Heroku documentation linked above has some more detailed information on this strategy.
You can explore the idea of a content filter, based on a 'smudge' script executed automatically on checkout.
You would declare:
some (versioned) template files
some value files
a (versioned) smudge script able to recognize its execution environment and generate the necessary (non-versioned) final files from the value files or (for more sensitive information) from other sources external to the Git repo.
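A minimal sketch of that wiring, with hypothetical file and filter names (the smudge/clean scripts receive the file content on stdin and write the transformed content to stdout):
# .gitattributes (versioned): run the "localize" filter for this template-backed file
config/settings.yml filter=localize

# one-time, per-clone setup; the scripts themselves can live (versioned) inside the repo
git config filter.localize.smudge ./script/smudge-settings
git config filter.localize.clean  ./script/clean-settings
On checkout, Git pipes the committed (template) content through the smudge script, which injects the environment-specific values; the clean script strips them again before anything is staged.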