When I serve a Go application from the official Docker Hub image, what is the default working directory the application starts in?
Background: I have to map a local Certificate Authority and server keys into the container to serve TLS (HTTPS), and I wonder where to map them so that the application can pick them up from its current working directory inside the container.
If you are using the golang:1.x-onbuild image from Docker Hub (https://hub.docker.com/_/golang/), your sources will be copied into
/go/src/app
meaning all files and directories from the directory where you run the
docker build
command end up inside the container.
The workdir of the plain golang images is
/go
while the onbuild variant sets it to /go/src/app.
Go will return the current working directory using
currdir, _ = filepath.Abs(filepath.Dir(os.Args[0]))
Executed within a golang container right after startup, the pwd is set to
/go/src/app
The current working directory of a Go application starting up within a Docker container is thus /go/src/app. In order to map a file/directory into a container you will have to use the -v switch as described in the documentation for docker run:
-v /local/file.pem:/go/src/app/file.pem
Will map a local file into the pwd of the dockerized golang app.
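To illustrate, here is a minimal sketch of a Go HTTPS server that picks its certificate and key up from the working directory; the file names file.pem and key.pem are assumptions matching the -v mapping above:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
	"path/filepath"
)

func main() {
	// Inside the onbuild container the working directory is /go/src/app,
	// so relative paths resolve against it.
	wd, err := os.Getwd()
	if err != nil {
		log.Fatal(err)
	}
	cert := filepath.Join(wd, "file.pem") // mapped in via -v
	key := filepath.Join(wd, "key.pem")   // hypothetical key file

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello over TLS")
	})
	log.Fatal(http.ListenAndServeTLS(":8443", cert, key, nil))
}

Run it with both files mapped, e.g. docker run -v /local/file.pem:/go/src/app/file.pem -v /local/key.pem:/go/src/app/key.pem -p 8443:8443 my-app.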
Related
I have a small device that serves a webpage using Nginx on a local network. I'm developing the webpage using Vue, and I need it to keep working as normal after a visitor has connected to the server, loaded the page, and then disconnected.
I'm currently using the Workbox plugin, and it generates this code:
importScripts("https://storage.googleapis.com/workbox-cdn/releases/4.3.1/workbox-sw.js");
importScripts(
"/precache-manifest.b62cf508e2c3da8c27f2635f7aab384a.js"
);
The problem is that it goes to the internet to download that file and I will not have an internet connection.
I tried downloading this file, but internally it goes out to the internet again.
Is there a way to get this to work in an offline environment?
You can follow the guidance in the workbox-sw docs to download a local copy of the bundled Workbox runtime libraries, and modify your service worker script to use those.
Running:
$ npx workbox-cli@4.3.1 copyLibraries /path/to/dir
from the command line will download a local copy of the runtime to the specified directory (replace /path/to/dir with the desired location).
You can then modify your service worker script so that it reads:
importScripts("/path/to/dir/workbox-v4.3.1/workbox-sw.js");
workbox.setConfig({
modulePathPrefix: '/path/to/dir/workbox-v4.3.1/'
});
importScripts(
"/precache-manifest.b62cf508e2c3da8c27f2635f7aab384a.js"
);
I have a docker-compose setup in which I upload source code for a server, say a Flask API. Now when I change my Python code, I have to follow these steps:
stop the running containers (docker-compose stop)
build and load updated code in container (docker-compose up --build)
This takes quite a long time. Is there a better way, like updating the code in the running container and then restarting the Apache server without stopping the whole container?
There are a few dirty ways you can modify the file system of a running container.
First you need to find the path of the directory that is used as the runtime root for the container. Run docker container inspect <id/name> and look for the key UpperDir in the JSON output. You can edit/copy/delete files in that directory.
Another way is to get the process ID of the process running within the container and go to /proc/<process_id>/root. This is the root directory of the process running inside docker; you can edit files there on the fly and the changes will appear in the container.
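For example (a sketch; the container name web is hypothetical, and the overlay2 storage driver is assumed):

# Overlay filesystem approach: print the writable layer's directory
docker container inspect --format '{{ .GraphDriver.Data.UpperDir }}' web

# /proc approach: find the container's main PID, then browse its root
pid=$(docker container inspect --format '{{ .State.Pid }}' web)
sudo ls /proc/$pid/root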
You can run the docker build while the old container is still running, and your downtime is limited to the changeover period.
It can be helpful for a couple of reasons to put a load balancer in front of your container. Depending on your application this could be a "dumb" load balancer like HAProxy, a full web server like nginx, or something your cloud provider makes available. This allows you to have multiple copies of the application running at once, possibly on different hosts (which helps with scaling and reliability). In this case the sequence becomes (see the command sketch after this list):
docker build the new image
docker run it
Attach it to the load balancer (now traffic goes to both old and new containers)
Test that the new container works correctly
Detach the old container from the load balancer
docker stop && docker rm the old container
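A rough shell sketch of that sequence, with hypothetical image and container names and the load balancer steps left abstract:

# Build and start the new version alongside the old one
docker build -t myapp:v2 .
docker run -d --name myapp_v2 myapp:v2

# ... attach myapp_v2 to the load balancer, test it,
# then detach the old container ...

# Finally retire the old version
docker stop myapp_v1
docker rm myapp_v1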
If you don't mind heavier-weight infrastructure, this sequence is basically exactly what happens when you change the image tag in a Kubernetes Deployment object, but adopting Kubernetes is something of a substantial commitment.
I have a dockerfile where I build an apache web server with some custom configurations etc.
Building the Dockerfile, I create an image that can be used in a Kubernetes deployment YAML file.
Everything is working properly but after deployment, my apache service is down in every container of every pod.
Obviously I can exec into every container and run /etc/init.d/apache2 start, but this solution is not very smart.
So my question is: how can I get my custom apache running when the deployment YAML file is applied?
PS: I tried this solution: with the Dockerfile I created a docker container, then I accessed it and started apache. Then I created a new image from this container (docker commit + gcloud image push), but when I deploy the application I still find apache down
Well, first things first - I would very much recommend just using the official httpd (Apache) image and then making your custom configurations from there. Their documentation states this in the following paragraph:
Configuration
To customize the configuration of the httpd server, just COPY your custom configuration in as /usr/local/apache2/conf/httpd.conf.
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
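Once that image is built and pushed to a registry, referencing it from your deployment YAML is enough for Apache to come up on its own; here is a minimal Deployment sketch, where the image path and port are assumptions:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-httpd
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-httpd
  template:
    metadata:
      labels:
        app: my-httpd
    spec:
      containers:
      - name: httpd
        image: gcr.io/my-project/my-httpd:1.0  # hypothetical registry path
        ports:
        - containerPort: 80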
However, if you're dead-set on building everything yourself, you'll notice that inside the Dockerfile for the official image they copy in a bash script and set it as the CMD. This works because a Docker container should run a single foreground process; this is why, as you noted, starting Apache as a background service is a bad idea.
You can find the script they're running here, it's very short at 7 lines - so you shouldn't have too much trouble figuring out where to go from here.
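At the time of writing, that script boils down to roughly the following:

#!/bin/sh
set -e

# Apache gets grumpy about pre-existing PID files
rm -f /usr/local/apache2/logs/httpd.pid

# exec replaces the shell so httpd runs as PID 1 in the foreground
exec httpd -DFOREGROUND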
Best of luck!
Where do the logs of an Apache Apex app appear when using the Apache Apex CLI and Hadoop?
Like in the example project in this link.
https://github.com/DataTorrent/examples/blob/master/tutorials/partition/src/main/java/com/example/myapexapp/TestPartition.java
Apache Apex runs as a Yarn application.
The application's localized log directory will be found in ${yarn.nodemanager.log-dirs}/application_${appid}. Individual containers' log directories will be below this, in directories named container_${contid}. Each container directory will contain the files stderr, stdout, and syslog generated by that container.
Note that it is a distributed app and you might need to go to the node where the containers are deployed.
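If YARN log aggregation is enabled, you can also fetch the logs of a finished application from any node with the YARN CLI; the application ID below is a placeholder:

yarn logs -applicationId application_1458123456789_0001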
I am currently automating a process of moving Weblogic applications from old servers to new servers. I was unable to find a way to list the local application path for a deployed Weblogic application using WLST. The closest I found was:
appInfo = cmo.getAppDeployments()
for app in appInfo:
    app_path = getPath(app)
    print app_path
which will return something like:
InternalAppDeployments/test.war
This is not the directory I am looking for. I was wondering if someone had some input on how to retrieve the local directory for deployed Weblogic applications.
One easy way to do it with WLST:
ls('/AppDeployments') # this will list all of the deployments
cd('/AppDeployments/<app name>')
cmo.getAbsoluteSourcePath() # this will list the full path
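Putting the two together, a short sketch that prints the absolute source path of every deployment (the AppDeploymentMBean exposes getAbsoluteSourcePath() directly, so no per-app cd() is needed):

for app in cmo.getAppDeployments():
    print app.getName(), app.getAbsoluteSourcePath()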
Some things you could try instead of WLST:
Navigate to the domain's /config/ folder and run:
grep source-path config.xml
This will list the full path to the deployment IF that deployment was deployed with the nostage staging mode. If the deployment was deployed with stage as the staging mode, it is copied to each managed server targeted by the deployment, and you will get relative paths like the one you mentioned above...
Those ear/war files likely live under:
<domain>/servers/<server name>/stage/<deployment name>
Or under
<domain>/sbgen