Why is my env variable undefined when deployed to AWS?

I have a Vue app I'm currently working on. I set the environment variables in a file I named "config.ts", which contains code similar to this:
export const configs = {
  baseURL:
    process.env.VUE_APP_API_BASEURL ||
    'http://test.api.co/',
}
I tested the environment variables locally with a .env file like so:
VUE_APP_API_BASEURL=https://test2.api.com
and it works fine.
Then I dockerised the app with a Dockerfile as shown below:
FROM private_image_container/node:v16 as build-stage
# declare args
ARG VUE_APP_API_BASEURL
# set env variables
ENV VUE_APP_API_BASEURL=$VUE_APP_API_BASEURL
RUN echo "VUE_APP_API_BASEURL=$VUE_APP_API_BASEURL" > .env
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn
COPY . .
RUN yarn build
# production stage
FROM private_image_container/nginx:latest as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
When the app is deployed, the variables are undefined even though they are defined in the task definition.

So I had this issue because our DevOps guy insisted we have runtime environment variables.
The solution I developed was to write a bash script that injects a script attaching the configs to the window object, making them accessible at runtime.
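That approach can be sketched roughly like this; the output path, default URL, and variable names are illustrative, and in a real image the script would run as (or before) the container entrypoint:

```shell
#!/bin/sh
# Sketch: generate a config.js at container start so the built SPA reads
# runtime values from window.configs instead of build-time process.env.
# In the nginx image OUT_FILE would be /usr/share/nginx/html/config.js;
# it defaults to the current directory here so the sketch runs anywhere.
OUT_FILE="${OUT_FILE:-./config.js}"
cat > "$OUT_FILE" <<EOF
window.configs = {
  baseURL: "${VUE_APP_API_BASEURL:-http://test.api.co/}"
};
EOF
echo "wrote $OUT_FILE"
```

index.html would then load config.js with a plain script tag before the app bundle, and config.ts would fall back to window.configs rather than process.env.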

Related

Docker ADD and COPY not honored in Dockerfile

When I try to build the following Dockerfile, the ADD and COPY steps do nothing:
# Use an official Apache runtime as a parent image
FROM amd64/httpd
# Set the working directory
WORKDIR /usr/local/apache2
# Copy the following contents into the container
ADD ./httpd.conf {$workdir}/conf/httpd.conf
COPY ./Projects/RavensHomeSupport/build/* {$workdir}/htdocs/Test/
# Make port 80 available to the world outside this container
EXPOSE 80
# Define environment variable
ENV NAME RavensHomeWeb
I run the following build command:
docker build -t ravenshome --rm --no-cache .
and when I check the contents of the Test directory in the running container, none of the data that I expected has been copied across. The output of the build command is below:
Sending build context to Docker daemon 1.444MB
Step 1/6 : FROM amd64/httpd
---> 19459a872194
Step 2/6 : WORKDIR /usr/local/apache2
---> Running in 192cb44f767e
Removing intermediate container 192cb44f767e
---> d9816ea17258
Step 3/6 : ADD ./httpd.conf {$workdir}/conf/
---> 19f48db970bb
Step 4/6 : COPY ./Projects/RavensHomeSupport/build/ {$workdir}/htdocs/Test/
---> d93939218c2b
Step 5/6 : EXPOSE 80
---> Running in 43b9e9297f60
Removing intermediate container 43b9e9297f60
---> 3b994be07747
Step 6/6 : ENV NAME RavensHomeWeb
---> Running in a64bccaf81c8
Removing intermediate container a64bccaf81c8
---> 9217c242868c
Successfully built 9217c242868c
Successfully tagged ravenshome:latest
I start the container with the following command:
docker run -dit -p 8080:80 --name ravenshome ravenshome
When I examine the problem directory in the container with the following command:
docker exec ravenshome ls -a /usr/local/apache2/htdocs
I get the following result:
.
..
index.html
As you can see, all that is there is the content of the default image, not the additional content that I expected.
Similarly, my customized version of httpd.conf is not copied to the new container.
I have read several posts that suggest that the problem is due to using volumes, but I am not doing so, nor do I have a .dockerignore file.
Can anyone see what I am doing wrong?
$workdir isn't a defined environment variable, so it expands to an empty string. $variable inside curly braces isn't special syntax at all; it just expands to the variable's value wrapped in literal curly braces. The net result is that these two lines:
WORKDIR /usr/local/apache2
ADD ./httpd.conf {$workdir}/conf/httpd.conf
copy content into a directory /usr/local/apache2/{}/conf/httpd.conf -- nothing is inside the curly braces, and the curly braces themselves are interpreted as a directory relative to the current working directory.
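You can reproduce the expansion in plain shell, which treats {$var} the same way for this purpose:

```shell
# workdir is empty, so $workdir expands to nothing,
# and the surrounding braces are ordinary literal characters.
workdir=""
echo "{$workdir}/conf/httpd.conf"   # -> {}/conf/httpd.conf

# ${workdir} (dollar outside the braces) is the real substitution syntax.
workdir="/usr/local/apache2"
echo "${workdir}/conf/httpd.conf"   # -> /usr/local/apache2/conf/httpd.conf
```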
You don't need an environment variable here at all; you can just COPY relative to the current WORKDIR:
WORKDIR /usr/local/apache2
ADD ./httpd.conf ./conf/httpd.conf
COPY ./Projects/RavensHomeSupport/build/* ./htdocs/Test/
See also Variable substitution in the docker-compose.yml documentation for the allowed forms; you're probably thinking of ${variable} syntax (dollar outside the curly braces).

Docker for Windows, AspNetCore, and aspnet-webpack Hot Module

I've recently converted my AspNetCore web application to use docker containers for local development and have run into trouble getting the npm module "aspnet-webpack" to work.
When I start the container, I get the following error:
Microsoft.AspNetCore.NodeServices.HostingModels.NodeInvocationException: Webpack dev middleware failed because of an error while loading 'aspnet-webpack'. Error was: Error: ENOENT: no such file or directory, lstat 'C:\ContainerMappedDirectories'
Of course, if I comment out the below snippet of code, the error goes away, but I'd appreciate it if anyone has some advice on getting my webpack hot module to work:
app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
{
HotModuleReplacement = true
});
Here's a simplified snippet of my Dockerfile (hope I'm not missing anything):
FROM microsoft/dotnet:2.1-aspnetcore-runtime-nanoserver-sac2016 AS base
# Pretend I install nodejs here or the image above already has it
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM microsoft/dotnet:2.1-sdk-nanoserver-1803 AS build
WORKDIR /src
COPY ["WebApp/WebApp.csproj", "WebApp/"]
RUN dotnet restore "WebApp/WebApp.csproj"
COPY . .
WORKDIR "/src/WebApp"
RUN dotnet build "WebApp.csproj" -c Release -o /app
FROM build AS publish
RUN dotnet publish "WebApp.csproj" -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "WebApp.dll"]
And a simplified snippet of my docker-compose.yml:
services:
webapp:
image: ${DOCKER_REGISTRY-}webapp
build:
context: .
dockerfile: WebApp\Dockerfile
environment:
- ASPNETCORE_ENVIRONMENT=Development
- ASPNETCORE_URLS=http://+:80
ports:
- "53760:80"
volumes:
- ${APPDATA}/ASP.NET/Https:C:\Users\ContainerUser\AppData\Roaming\ASP.NET\Https:ro
- ${APPDATA}/Microsoft/UserSecrets:C:\Users\ContainerUser\AppData\Roaming\Microsoft\UserSecrets:ro
- .\WebApp\node_modules:C:/app/node_modules
Notice that I tried mapping my node_modules from my local machine to the container to see if that'd help the hot module to find "aspnet-webpack."

Get cache location with env variable

I can get the npm cache location using:
cache_location="$(npm get cache)"
however, is this value also represented by an env variable that I can read?
Something like NPM_CACHE_LOCATION?
https://docs.npmjs.com/cli/cache
Short answer: It depends on when/how you want to access it, as there is no env variable (e.g. NPM_CACHE_LOCATION) available whilst npm is not running.
You'll need to invoke npm config get cache or npm get cache as you are currently doing.
However, once npm is running the configuration parameters are put into the environment with the npm_ prefix.
The following demonstrates this...
Discover which env variables are available:
As a way to find out what env variable(s) npm puts in the environment, you can utilize printenv in an npm-script. For example in package.json add:
...
"scripts": {
"print-env-vars": "printenv | grep \"^npm_\""
},
...
Then run the following command:
npm run print-env-vars
Get the cache location via an env variable:
In the resulting console output (i.e. after running npm run print-env-vars), you'll see the npm_config_cache environment variable listed. It reads something like this:
npm_config_cache=/Users/UserName/.npm
In the docs it states:
configuration
Configuration parameters are put in the environment with the npm_config_ prefix. For instance, you can view the effective root config by checking the npm_config_root environment variable.
Note: Running printenv | grep "^npm_" directly via the CLI returns nothing.
Accessing the cache location with env variable:
You can access the cache location via an npm-script. For example:
"scripts": {
"cache-loc-using-bash": "echo $npm_config_cache",
"cache-loc-using-win": "echo %npm_config_cache%"
},
See cross-var for a cross-platform syntax.
Accessing the npm cache location via a Node.js script. For example:
const cacheLocation = process.env.npm_config_cache;
console.log(cacheLocation)
Note: This node script will need to be invoked via an npm-script for process.env.npm_config_cache to be available. Invoking it directly from the command line, e.g. node ./somefile.js, will return undefined - this further demonstrates that the parameters with the npm_ prefix are only put into the environment whilst npm is running.
Not ideal, but of course you could set your own environment variable using export:
export NPM_CACHE_LOCATION="$(npm get cache)"
and unset to remove it:
unset NPM_CACHE_LOCATION

Exposing a port other than 3000 with Express and Docker

I'm using Docker to run an Express app and everything is fine if I run it on port 3000. The Dockerfile I'm using for that is
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD ["npm", "start" ]
I now wanted to run it on port 3500. I adjusted the Dockerfile to
FROM node:boron
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 3500
CMD ["PORT=3500", "npm", "start" ]
and the docker run command to
docker run -p 3500:3500 me/myapp
It throws the following error
container_linux.go:262: starting container process caused "exec: \"PORT=3500\": executable file not found in $PATH"
I'm sure this is something basic but I'm new to this and couldn't find the solution by googling it. A pointer in the right direction would be very much appreciated.
You're trying to set the environment variable PORT as you would in a bash script. Docker doesn't understand that - the CMD config wants something it can execute: a command name and some arguments.
The way to do what you want in Docker is to use ENV. In your case, it'd look something like this:
ENV PORT 3500
CMD ["npm", "start" ]
You can put the ENV anywhere in the Dockerfile before the CMD, but it makes sense to group such settings towards the end, so that changing them doesn't force a costly rebuild and more layers can be shared.
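For ENV PORT to matter at all, the app has to read it; a minimal sketch of the usual Express-side lookup (the helper name is mine, and the fallback of 3000 matches the original setup):

```javascript
// Resolve the listen port from an environment map, defaulting to 3000.
// Mirrors the common Express pattern: app.listen(process.env.PORT || 3000).
function resolvePort(env) {
  const parsed = parseInt(env.PORT, 10);
  return Number.isInteger(parsed) && parsed > 0 ? parsed : 3000;
}

console.log(resolvePort({ PORT: '3500' })); // 3500 -- picked up from ENV PORT 3500
console.log(resolvePort({}));               // 3000 -- fallback when PORT is unset
```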

Environment variables for build not visible in Dockerfile

I've set an environment variable (NPM_TOKEN) for my repo in Docker Cloud to use when building my Dockerfile. However, the variable is always empty...
Tried both of these in Dockerfile:
RUN echo ${NPM_TOKEN}
and:
ARG NPM_TOKEN
RUN echo ${NPM_TOKEN}
Am I wrong in assuming that Docker Cloud's environment variables for builds do the same thing as --build-arg?
It took me a long time, but you can use build hooks to set variables for automated builds!
https://docs.docker.com/docker-cloud/builds/advanced/#build-hook-examples
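For reference, a typical hooks/build file looks something like the sketch below; IMAGE_NAME is one of the utility variables supplied by the automated-build environment, and NPM_TOKEN is assumed to be the variable configured in the repository's build settings:

```shell
#!/bin/bash
# hooks/build -- the automated build runs this instead of a plain `docker build`,
# so build-environment variables can be forwarded explicitly as build args.
docker build --build-arg NPM_TOKEN="$NPM_TOKEN" -t "$IMAGE_NAME" .
```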