How can I raise the limit for open files in Ubuntu 20.04 on WSL2? - windows-subsystem-for-linux

My setup looks as follows: Windows 10, Release 1909 (Build 18363.1082), using WSL2 with an Ubuntu 20.04 environment. Everything works nicely most of the time, but there are some issues I cannot manage to solve.
During development using parcel (React bundler), I run into the problem that the bundler apparently opens lots of files at the same time, and at a certain point, I run into the following problem:
EMFILE: too many open files, open '/home/myusername/Projects/some-project-path/node_modules/@material-ui/icons/esm/RoundedCornerRounded.js'
As parcel seemingly does not easily support using something like graceful-fs, I have tried to increase the limit for open files inside the Ubuntu environment. What I have tried so far:
A simple ulimit -n 4096 (which is the highest possible by default), but it's apparently (by far?) not enough
I tried increasing fs.file-max to something really high in /etc/sysctl.conf, but it doesn't seem to have an effect (neither after sysctl -p nor after a restart of WSL)
I also tried increasing fs.inotify.max_user_watches, but that did not seem to have an effect either
Also setting soft and hard limits in /etc/security/limits.conf did not seem to have an effect
I also found information that changing DefaultLimitNOFILE in /etc/systemd/system.conf can have an effect, so I did that as well (example entries for these files are sketched below)
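For reference, the kinds of entries these attempts involve look roughly like the following (illustrative values only, not necessarily the exact ones used here):
# /etc/sysctl.conf
fs.file-max=2097152
fs.inotify.max_user_watches=524288
# /etc/security/limits.conf (domain type item value)
* soft nofile 65536
* hard nofile 65536
# /etc/systemd/system.conf
DefaultLimitNOFILE=65536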
Has anybody managed to solve a similar problem on Ubuntu 20.04 on WSL2? This has left me pretty stumped, and it prevents me from using parcel inside this environment. That's a real pity, as everything else works really well.
UPDATE
So I have found out that my changes in various places (probably the one in /etc/security/limits.conf) have had some kind of effect, just not when logging in directly. The following illustrates this:
donmartin@SOMEMACHINE:~$ ulimit -Hn
4096
donmartin@SOMEMACHINE:~$ su donmartin
Password:
donmartin@SOMEMACHINE:~$ ulimit -Hn
65536
donmartin@SOMEMACHINE:~$
Which means: if I su to my own user, the ulimit has indeed been raised. But if I log in normally using Windows Terminal, the limit is not in effect. I'm even more puzzled now - BUT - I have a workaround for my problem. Having set my values to 65536, the parcel build now works when running as my own user. Go figure! I still don't quite know which setting changed the behaviour - perhaps somebody has more thorough information on how this works and/or how I can make this the default without having to do a su to get the updated limits.
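One generic way to compare what a normal Windows Terminal login and an su-spawned shell actually get is to read the limits of the current shell directly (an inspection sketch, not tied to any particular setting above):
# soft and hard "open files" limits of the current shell
grep 'open files' /proc/self/limits
# or, using util-linux prlimit for the current process
prlimit --nofile --pid $$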

I had to add the following line to /etc/systemd/user.conf:
DefaultLimitNOFILE=65535
As written in the answer here:
https://superuser.com/questions/1200539/cannot-increase-open-file-limit-past-4096-ubuntu/1200818#1200818
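As a sanity check after changing /etc/systemd/user.conf (a generic sketch; details may vary by WSL version), fully restart WSL from the Windows side and confirm the new hard limit in a fresh shell:
From PowerShell or cmd on Windows:
wsl --shutdown
Then reopen the Ubuntu distro and check:
ulimit -Hn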
You may also need to run this (if working with applications that monitor changes in many files/folders):
sudo sh -c 'sysctl fs.inotify.max_user_watches=524288 && sysctl -p'

Try this:
$ visudo
ADD: user ALL=(ALL) NOPASSWD:ALL
$ vi ~/.profile
ADD: ulimit -n 10000
$ vi /etc/security/limits.conf
ADD: user soft nproc 10000
user hard nproc 10000
user soft nofile 10000
user hard nofile 10000

Temporarily increase the open files hard limit for the session
Run these three commands (the first one is optional) to check the current open files limit, switch to the admin user, and increase the value.
$ ulimit -n
1024
$ su <user name>
<Enter password>
$ ulimit -n 65535
Check the new limit:
$ ulimit -n
65535
To check all values, run this:
$ ulimit -a

Related

React Native Error: ENOSPC: System limit for number of file watchers reached

I have set up a new blank React Native app.
After installing a few node modules, I got this error.
Running application on PGN518.
internal/fs/watchers.js:173
throw error;
^
Error: ENOSPC: System limit for number of file watchers reached, watch '/home/badis/Desktop/react-native/albums/node_modules/.staging'
at FSWatcher.start (internal/fs/watchers.js:165:26)
at Object.watch (fs.js:1253:11)
at NodeWatcher.watchdir (/home/badis/Desktop/react-native/albums/node_modules/sane/src/node_watcher.js:175:20)
at NodeWatcher.<anonymous> (/home/badis/Desktop/react-native/albums/node_modules/sane/src/node_watcher.js:310:16)
at /home/badis/Desktop/react-native/albums/node_modules/graceful-fs/polyfills.js:285:20
at FSReqWrap.oncomplete (fs.js:154:5)
I know it's related to there not being enough space for watchman to watch all the file changes.
I want to know what's the best course of action to take here?
Should I ignore the node_modules folder by adding it to .watchmanconfig?
Linux uses inotify to observe filesystem events on individual files or directories.
Since React / Angular hot-reloads and recompiles files on save, it needs to keep track of all the project's files. Increasing the inotify watch limit should hide the warning messages.
You could try the following:
# insert the new value into the system config
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
# check that the new value was applied
cat /proc/sys/fs/inotify/max_user_watches
# config variable name (not runnable)
fs.inotify.max_user_watches=524288
This error means that the number of files monitored by the system has reached its limit.
Result: the command you executed fails, or throws a warning (for example when running react-native start in VS Code).
Solution:
Modify the number of files the system can monitor:
Ubuntu
sudo gedit /etc/sysctl.conf
Add a line at the bottom
fs.inotify.max_user_watches=524288
Then save and exit!
Then run sudo sysctl -p to apply and verify the change.
Then it is solved!
You can fix it by increasing the number of inotify watchers.
If you are not interested in the technical details and only want to get Listen to work:
If you are running Debian, RedHat, or another similar Linux distribution, run the following in a terminal:
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
If you are running ArchLinux, run the following command instead
$ echo fs.inotify.max_user_watches=524288 | sudo tee /etc/sysctl.d/40-max-user-watches.conf && sudo sysctl --system
Paste the appropriate command into your terminal and press Enter to run it.
The Technical Details
Listen uses inotify by default on Linux to monitor directories for changes. It's not uncommon to encounter a system limit on the number of files you can monitor. For example, Ubuntu Lucid's (64bit) inotify limit is set to 8192.
You can get your current inotify file watch limit by executing:
$ cat /proc/sys/fs/inotify/max_user_watches
When this limit is not enough to monitor all files inside a directory, the limit must be increased for Listen to work properly.
You can set a new limit temporarily with:
$ sudo sysctl fs.inotify.max_user_watches=524288
$ sudo sysctl -p
If you'd like to make your limit permanent, use:
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p
You may also need to pay attention to the values of max_queued_events and max_user_instances if Listen keeps on complaining.
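For reference, a sketch of how those related inotify limits can be inspected and raised (the numbers are just examples, not recommendations):
# inspect all three inotify limits
sysctl fs.inotify.max_queued_events fs.inotify.max_user_instances fs.inotify.max_user_watches
# raise them persistently, assuming /etc/sysctl.conf is used on your distro
echo fs.inotify.max_queued_events=32768 | sudo tee -a /etc/sysctl.conf
echo fs.inotify.max_user_instances=1024 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p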
From the official document:
"Visual Studio Code is unable to watch for file changes in this large workspace" (error ENOSPC)
When you see this notification, it indicates that the VS Code file watcher is running out of handles because the workspace is large and contains many files. The current limit can be viewed by running:
cat /proc/sys/fs/inotify/max_user_watches
The limit can be increased to its maximum by editing
/etc/sysctl.conf
and adding this line to the end of the file:
fs.inotify.max_user_watches=524288
The new value can then be loaded in by running
sudo sysctl -p
Note that Arch Linux works a little differently; see Increasing the amount of inotify watchers for details.
While 524,288 is the maximum number of files that can be watched, if you're in an environment that is particularly memory constrained, you may wish to lower the number. Each file watch takes up 540 bytes (32-bit) or ~1kB (64-bit), so assuming that all 524,288 watches are consumed, that results in an upper bound of around 256MB (32-bit) or 512MB (64-bit).
Another option
is to exclude specific workspace directories from the VS Code file watcher with the files.watcherExclude setting. The default for files.watcherExclude excludes node_modules and some folders under .git, but you can add other directories that you don't want VS Code to track.
"files.watcherExclude": {
"**/.git/objects/**": true,
"**/.git/subtree-cache/**": true,
"**/node_modules/*/**": true
}
Delete the React node_modules folder:
rm -r node_modules
then reinstall with yarn or npm install
and start again with yarn start or npm start.
If the error occurs again, use this method again.
Firstly, you can run it every time with root privileges:
sudo npm start
Or you can delete the node_modules folder and use npm install to install again.
Or you can get a permanent solution:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
It happened to me with a node app I was developing on a Debian based distro. First, a simple restart solved it, but it happened again on another app.
Since it's related to the number of watchers that inotify uses to monitor files and look for changes in a directory, you have to set a higher number as the limit:
I was able to solve it from the answer posted here
(thanks to him!)
So, I ran:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Read more about what’s happening at https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers#the-technical-details
Hope it helps!
Remember that this question is a duplicate: see this answer on the original question.
A simple way that solved my problem was:
npm cache clear
The best practice today is:
npm cache verify
npm, or a process controlled by it, is watching too many files. Updating max_user_watches on the build node can fix it forever. For Debian, put the following in a terminal:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
If you want to know how to increase the number of inotify watchers, just click the link.
I use Ubuntu 20 Server, and I added the line below to /etc/sysctl.conf:
fs.inotify.max_user_watches=524288
Then save the file and run sudo sysctl -p.
After that everything works fine!
I solved this issue by using sudo
i.e.
sudo yarn start
or
sudo npm start
Using sudo to solve this issue forces the number of watchers to be increased without applying any modifications to system settings. Using sudo to solve this kind of issue is never recommended, although it's a choice that has to be made by you; I hope you choose wisely.
Root cause
Most answers above talk about raising the limit, not about taking away the root cause, which is typically just a matter of redundant watches, typically for files in node_modules.
Webpack
The answer is in the webpack 5 docs:
watchOptions: { ignored: /node_modules/ }
Simply read here: https://webpack.js.org/configuration/watch/#watchoptionsignored
The docs even mention this as a "tip", quote:
If watching does not work for you, try out this option. This may help
issues with NFS and machines in VirtualBox, WSL, Containers, or
Docker. In those cases, use a polling interval and ignore large
folders like /node_modules/ to keep CPU usage minimal.
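A minimal sketch of what that tip looks like in a webpack 5 configuration (the poll interval is an illustrative value, not something the docs prescribe for every setup):
// webpack.config.js
module.exports = {
    // ...your existing entry/output/loader configuration...
    watchOptions: {
        ignored: /node_modules/, // don't create watches for dependency files
        poll: 1000,              // optional: poll once per second instead (useful on NFS/WSL/Docker)
    },
};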
VS Code
VS Code or any code editor creates lots of file watches too. By default many of them are completely redundant. Read more about it here: https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc
Generally we don't need to increase the count of file watchers;
in that case we would just have more watchers.
We need to remove the redundant watchers that have become zombies.
The issue is that we have many file watchers that are filling up our memory.
We just need to remove these file watchers (in the case of node):
killall node
React.js showed me the same error; I fixed it this way and hope it works in React Native too:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Now you can run npm start again.
npm start
Using the sysctl -p approach after setting fs.inotify.max_user_watches did not work for me (by the way, this setting was already set to a high value, likely from me trying to fix this issue a while back using the commonly recommended workarounds above).
The best solution to the problem is the one I found here; below I share the steps I performed to solve it. In my case the issue was spotted while running Visual Studio Code, but solving it should be the same in other instances, like yours:
Use this script to identify which processes are requiring the most file watchers in your session.
You can then query the current max_user_watches value with sysctl fs.inotify.{max_queued_events,max_user_instances,max_user_watches} and then set it to a different value (a lower value may do it)
sudo sysctl -w fs.inotify.max_user_watches=16384
Or you can simply kill the process you found in (1) that consumes the most file watchers (in my case, baloo_file)
The above, however, will likely need to be done again when restarting the system - the process we identified as responsible for taking up most of the file watchers (in my case, baloo_file) will do the same again on the next boot. So to permanently fix the issue, either disable or remove this service/package. I disabled it: balooctl disable.
Now run sudo code --user-data-dir and it should open VS Code with admin privileges this time. (By the way, when it does not, run sudo code --user-data-dir --verbose to see what the problem is - that's how I figured out it had to do with the file watcher limit.)
Update:
You may configure VS Code file watcher exclusion patterns as described here. This may prove to be the ultimate solution; I am just not sure you will always know beforehand which files you are NOT interested in watching.
Easy Solution
I found that a previous solution works well in my case. I removed node_modules and cleared the yarn / npm cache.
Long Tail Solution
If you want to have a long-tail solution - e.g. if you often get caught by this error - you can increase the value of allowed watchers (depending on your available memory).
To figure out the number of watchers currently in use, instead of only guessing, you can use this handy bash script:
https://github.com/fatso83/dotfiles/blob/master/utils/scripts/inotify-consumers
I suggest setting max_user_watches temporarily to a high value:
sudo sysctl fs.inotify.max_user_watches=95524288
and then running the script.
How to calculate how much you can use
Each watcher needs
540 bytes (32-bit system), or
1 kB (double that, on a 64-bit OS)
So if you want to allow 512 MB to be used (on 64-bit), you set 524288 as the value.
The other way around: take the amount of memory (in MB) you want to allow and multiply it by 1024.
Example:
512 * 1024 = 524288
1024 * 1024 = 1048576
The script shows you the exact number of inotify watchers currently in use, so you have a better idea of how much you should increase the limit.
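As a quick shell sketch of that calculation (assuming roughly 1 kB per watch on a 64-bit system, as described above):
# estimate the memory upper bound implied by the current limit
watches=$(sysctl -n fs.inotify.max_user_watches)
echo "$watches watches * ~1 kB each = ~$((watches / 1024)) MB upper bound"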
If you are running your project in Docker, you should run the echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf and all the other commands on the host machine, since the container will inherit that setting automatically (doing it directly inside the container will not work).
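A quick way to confirm that a container really does see the host's value (a generic sketch; the image name is just an example):
# on the host
sysctl -n fs.inotify.max_user_watches
# inside a throwaway container - should print the same number
docker run --rm alpine cat /proc/sys/fs/inotify/max_user_watches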
Late answer, and there are many good answers already.
In case you want a simple script to check whether the maximum number of file watches is big enough, and if not, increase the limit, here it is:
#!/usr/bin/env bash
let current_watches=`sysctl -n fs.inotify.max_user_watches`
if (( current_watches < 80000 ))
then
    echo "Current max_user_watches ${current_watches} is less than 80000."
else
    echo "Current max_user_watches ${current_watches} is already equal to or greater than 80000."
    exit 0
fi
if sudo sysctl -w fs.inotify.max_user_watches=80000 && sudo sysctl -p && echo fs.inotify.max_user_watches=80000 | sudo tee /etc/sysctl.d/10-user-watches.conf
then
    echo "max_user_watches changed to 80000."
else
    echo "Could not change max_user_watches."
    exit 1
fi
The script increases the limit to 80000, but feel free to set a limit that you want.
As already pointed out by @snishalaka, you can increase the number of inotify watchers.
However, I think the default number is high enough and is only reached when processes are not cleaned up properly. Hence, I simply restarted my computer as proposed on a related github issue and the error message was gone.
Another simple and good solution is just to add this to jest configuration:
watchPathIgnorePatterns: ["<rootDir>/node_modules/", "<rootDir>/.git/"]
This ignores the specified directories to reduce the files being scanned
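A minimal sketch of where that option lives in a standalone Jest config (assuming a jest.config.js file rather than a "jest" block in package.json):
// jest.config.js
module.exports = {
    // keep Jest's watcher away from directories that never need to re-run tests
    watchPathIgnorePatterns: ["<rootDir>/node_modules/", "<rootDir>/.git/"],
};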
In my case, in Angular 13, I added the following in tsconfig.spec.json:
"exclude": [
"node_modules/",
".git/"
]
Thanks @Antimatter, it gave me the trick.
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Run this command in the project terminal after running npm run dev.
Please refer to this link [1]. Visual Studio Code provides a brief explanation of this error message. I also encountered the same error; adding the parameter below to the relevant file will fix the issue.
fs.inotify.max_user_watches=524288
[1] https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc
While almost everyone suggests increasing the number of watchers, I can't agree that it is a solution.
In my case I wanted to disable the watcher completely, because of tests running on CI using the vue-cli plugin, which starts webpack-dev-server for each test.
The problem was: when a few builds run simultaneously, they would fail because the watcher limit is reached.
First things first, I tried adding the following to vue.config.js:
module.exports = {
    devServer: {
        hot: false,
        liveReload: false
    }
}
Ref.: https://github.com/vuejs/vue-cli/issues/4368#issuecomment-515532738
And it worked locally, but not on CI (apparently it stopped working locally the next day as well, for some ambiguous reason).
After investigating the webpack-dev-server documentation I found this:
https://webpack.js.org/configuration/watch/#watch
And then this:
https://github.com/vuejs/vue-cli/issues/2725#issuecomment-646777425
Long story short, this is what eventually solved the problem:
vue.config.js
module.exports = {
    publicPath: process.env.PUBLIC_PATH,
    devServer: {
        watchOptions: {
            ignored: process.env.CI ? "./" : null,
        },
    }
}
Vue version 2.6.14
If you are working with the VS Code editor (or any editor), this error is due to the large number of files in the project. node_modules and build are not required to be watched, so remove them from the list of what VS Code tracks.
You have to filter the unnecessary folders out of the file sidebar.
Go to Code > Preferences > Settings.
In the settings search box, search for the keyword "files:exclude".
Add the patterns (the equivalent settings.json form is sketched after these steps):
**/node_modules
**/build
That's it
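In settings.json form, the same exclusion looks roughly like this (a sketch; adjust the patterns to your project):
"files.exclude": {
    "**/node_modules": true,
    "**/build": true
}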
Try this; I was facing it for a very long time, but in the end it was solved by this:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
The most important step after that is to restart your system.
Two fixes if you've already added fs.inotify.max_user_watches=524288:
Reboot the machine, things will work again
Rename the folder that is causing the issue (for me node_modules) to an arbitrary name (node_modilesa) and then rename it right back. This will remove the watches that Linux had put on those folders, allowing you to code as normal again (a command sketch follows below).
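A quick sketch of that rename trick from the shell (folder names are just examples):
mv node_modules node_modules_tmp   # renaming drops the stale watches, per the tip above
mv node_modules_tmp node_modules   # restore the original name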
I encountered this issue on a Linux Mint distro. It appeared to have happened when there were so many folders, subfolders, and files I had added to the /public folder in my app.
I applied this fix and it worked well...
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
change directory into the /etc folder:
cd /etc
then run this:
sudo sysctl -p
You may have to close your terminal and npm start again to get it to work.
If this fails, I recommend installing react-scripts globally and running your application directly with that.
$ npm i -g --save react-scripts
Then, instead of npm start, run react-scripts start to run your application.
I tried increasing the number as suggested, but it didn't work.
I saw that when I logged in to my VM, it displayed "restart required".
I rebooted the VM and it worked:
sudo reboot
It is easy to fix this:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
and run your project.
If fs.inotify.max_user_watches=524288 is already in your /etc/sysctl.conf,
run the same command (echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf) and run your project again.
For vs code, see detailed instructions here:
https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc

Rabbitmq File Descriptor Limit

The RabbitMQ documentation says that we need to do some configuration before using it in production. One of the configurations concerns the maximum number of open files (which is an OS parameter).
The RabbitMQ server we use runs on Ubuntu 16.04, and according to resources I found on the web, I updated the number of open files to 500k. When I check it from the command line, I get the following output:
root@madeleine:~# ulimit -n
500000
However when I look at the rabbitmq server status, I see another number.
root@madeleine:~# rabbitmqctl status | grep 'file_descriptors' -A 4
{file_descriptors,
[{total_limit,924},
{total_used,19},
{sockets_limit,829},
{sockets_used,10}]},
It seems like I managed to increase the limit on the OS side, but RabbitMQ still thinks that the total limit of file descriptors is 924.
What might be causing this problem?
You might want to look at this page
Apparently, this operation depends on the OS version. If you have systemd, you should put the following in the /etc/systemd/system/rabbitmq-server.service.d/limits.conf file:
Notice that this service configuration might live somewhere else depending on the operating system you are using. You can use the following command to find where it is located and update that file.
find / -name "*rabbitmq-server.service*"
[Service]
LimitNOFILE=300000
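After creating that drop-in, a typical follow-up is to reload systemd, restart the broker, and confirm the limit RabbitMQ now reports (a generic sketch, not from the original answer):
sudo systemctl daemon-reload
sudo systemctl restart rabbitmq-server
# total_limit should now reflect the new LimitNOFILE value
sudo rabbitmqctl status | grep 'file_descriptors' -A 4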
On the other hand, if you do not have the systemd folder, you should try this in your rabbitmq-env.conf file:
ulimit -S -n 4096
Increase / Set maximum number of open files
sudo sysctl -w fs.file-max=65536
These limits are defined in /etc/security/limits.conf
sudo nano /etc/security/limits.conf
and set
* soft nofile 65536
* hard nofile 65536
Per user settings for rabbitmq process can also be set in
/etc/default/rabbitmq-server
sudo nano /etc/default/rabbitmq-server
and set
ulimit -n 65536
Then reboot the server for changes to take effect.

Running "screen" without additional permissions on WSL

I'm trying to run the "screen" utility on Windows Subsystem for Linux on Windows 10 (Version 1703, OS Build 15063.483).
It seems that I need additional permissions to run it (it works if I "sudo" it), but I don't understand why that is necessary.
What is the recommended way to set this up?
Is there some reason why this isn't the default set up?
$ screen
Cannot make directory '/var/run/screen': Permission denied
From an answer on SuperUser I discovered that you have to run
sudo /etc/init.d/screen-cleanup start
Then screen works fine for me.
EDIT: after installing Ubuntu 20.04 the problem went away (*).
As Krease pointed out, the best solution is the one described in this SuperUser post.
Add the following to your .bashrc:
export SCREENDIR=$HOME/.screen
[ -d $SCREENDIR ] || mkdir -p -m 700 $SCREENDIR
See also issue 1245 on github.
--
(*) now this warning comes up, but seems harmless:
sleep: cannot read realtime clock: Invalid argument
sudo screen # which creates the dir /var/run/screen
sudo chmod 777 /var/run/screen # so that non-root users can create their own screen dir in this dir.

gke cant disable Transparent Huge Pages... permission denied

I am trying to run a redis image in gke. It works except I get the dreaded "Transparent Huge Pages" warning:
WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
Redis is currently too slow to be useful... So I tried turning off THP:
sheena@gke-projectwaxd-cluster-default-pool-23593a74-wxrv ~ $ cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never
sheena@gke-projectwaxd-cluster-default-pool-23593a74-wxrv ~ $ echo never > /sys/kernel/mm/transparent_hugepage/enabled
-bash: /sys/kernel/mm/transparent_hugepage/enabled: Permission denied
sheena@gke-projectwaxd-cluster-default-pool-23593a74-wxrv ~ $ sudo echo never > /sys/kernel/mm/transparent_hugepage/enabled
-bash: /sys/kernel/mm/transparent_hugepage/enabled: Permission denied
These permission errors are disconcerting. Redis wants THP off so it can work properly.
I did a little digging and found that Google uses a special OS image that makes /sys/ a read-only path. There's an alternative image that's based on Debian 7. It got me all excited, but in the end I have exactly the same problem.
So how do I stop redis from being affected by THP on Google Container Engine?
It's not like I'm doing something unique here. Running databases in containers is pretty normal. And it's pretty normal for a database to malfunction when THP is enabled. So... what am I missing here?
Your command is slightly incorrect: echo runs as root, but the redirection itself (>) runs as the user, so it can't write to /sys/.
The following command works fine both on container-vm (debian based) and gci (chromeos based):
sudo sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'
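An equivalent pattern, if you prefer tee over wrapping the command in sh -c (just an alternative sketch with the same effect):
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled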
Persisting this setting on container-vm
Add this kernel command line parameter into /etc/default/grub (don't forget to run sudo update-grub and sudo reboot afterwards):
GRUB_CMDLINE_LINUX="... transparent_hugepage=never"
Persisting this setting on gci
First, using the cloud console copy the instance template that is in use by the node pool.
Second, under metadata change the value for userdata:
#cloud-config
write_files:
  - path: /etc/systemd/system/hugepage.service
    permissions: 0644
    owner: root
    content: |
      [Unit]
      Description=Disable THP
      [Service]
      Type=oneshot
      ExecStart=/bin/sh -c "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
      [Install]
      WantedBy=kubernetes.target
...
runcmd:
  - ...
  - systemctl enable hugepage.service
  - systemctl start kubernetes.target
Third, change the instance template to the newly created one:
gcloud compute instance-groups managed set-instance-template \
gke-YOUCLUSTER-YOURPOOL-grp \
--template=YOURNEWTEMPLATENAME \
--zone=...
Fourth, recreate the instance(s):
gcloud compute instance-groups managed recreate-instances \
gke-YOUCLUSTER-YOURPOOL-grp \
--zone=... \
--instances=...
The instances will lose all data and come up with THP disabled. All new instances will have THP disabled as well (in this node pool).

How to add EnvironmentFile directive to systemctl using Docker with centos7/httpd base image

I am not sure if this is possible without creating my own base image, but I use environment variables in /etc/environment on our servers and typically make them accessible to apache by doing the following:
$ printf "HTTP_VAR1=var1-value\n\
HTTP_VAR2=var2-value"\
>> /etc/environment
$ mkdir /usr/lib/systemd/system/httpd.service.d
$ printf "[Service]\n\
EnvironmentFile=/etc/environment"\
> /usr/lib/systemd/system/httpd.service.d/environment.conf
$ systemctl daemon-reload
$ systemctl restart httpd
$ reboot
The variables are then available in any PHP calls to getenv('HTTP_VAR1'); etc. However, when running this from a Dockerfile I get dbus errors on the systemctl commands. Without the systemctl commands, it seems the variables are not available to apache, as the new EnvironmentFile directive doesn't take effect. My Dockerfile snippet:
FROM centos/httpd:latest
RUN printf "HTTP_VAR1=var1-value\n\
HTTP_VAR2=var2-value"\
>> /etc/environment
RUN mkdir /usr/lib/systemd/system/httpd.service.d &&\
printf "[Service]\n\
EnvironmentFile=/etc/environment"\
> /usr/lib/systemd/system/httpd.service.d/environment.conf
RUN systemctl daemon-reload &&\
systemctl restart httpd
COPY entrypoint.sh /entrypoint.sh
So I happened upon the answer to the issue today. It seems that systemd drops backslashes inside single quotes, but it may affect double quotes too, from what I saw in testing. I found the systemd development mailing list thread from April 2014 where patching the issue was being discussed. It seems as though the fix never made it in. So we have to work around it.
In attempting to work around it I noticed some issues with actually reading the variables at all. It seemed as though either Apache or php-cli would get the correct variables, and sometimes not at all; it took a bit of sleuthing to figure out what was going on. Then I started reading into systemd's EnvironmentFile directive to see if there was more to gain from the docs. It turns out it does not evaluate bash, so export won't work. It expects a text file with plain variable assignments, and herein lies one of the main issues that might keep this from being resolved.
I then devised a workable solution. Utilizing systemd's ExecStartPre directive I am able to run a script on startup of the httpd service. I then read in the environment file and write a new plain text one that will then be used by httpd's systemd unit. Here is the code:
Firstly, I moved my variables to /etc/profile.d/ directory rather than /etc/environment file.
file: /etc/profile.d/environment.sh
This is where we store all our environment variables; this gets easily sourced on all interactive shell logins. In the rarer cases where we need to have these variables available non-interactively, we can either provide the --login flag to /bin/bash or source it manually.
export HTTP_VAR1=var1-value-with-a-back\slash
export HTTP_VAR2=var2-value
file: /usr/lib/systemd/system/httpd.service.d/environment.conf
Our drop-in unit file to extend how the httpd service works. I add a script that runs before httpd starts up. This gets run on all httpd restarts and starts. The script generates a plain text file at /etc/profile.d/environment.env, which we subsequently tell systemd to load as an EnvironmentFile.
[Service]
ExecStartPre=/usr/bin/bash -c "/usr/local/bin/generate-plain-environment-file"
EnvironmentFile=/etc/profile.d/environment.env
file: /usr/local/bin/generate-plain-environment-file
Here is the script I am using. I whipped this together really fast, and I don't think it is that robust; it could be better. It just removes the export from the beginning of the lines and then escapes any backslashes, since systemd drops single backslashes. A more proper solution might be to use bash to evaluate each line and obtain the variable value (in case the values use variables or other bash constructs), then output them as plain text name=value assignments; however, this is not part of my use case, so I didn't bother.
#!/bin/bash
# regenerate /etc/profile.d/environment.env from environment.sh on every httpd (re)start
cd /etc/profile.d/
rm -rf "./environment.env"
while IFS='' read -r line || [[ -n "$line" ]]; do
    # strip the leading "export " and double up backslashes so systemd keeps them
    echo $(echo "${line}" | sed 's/^export //' | sed 's/\\/\\\\/g') >> "./environment.env";
done < "./environment.sh"
file: /etc/profile.d/environment.env
This is the resulting file generated by the described script.
HTTP_VAR1=var1-value-with-a-back\\slash
HTTP_VAR2=var2-value
The conclusion is that I now have two files with the same thing in them, but I only need to maintain one; the other is generated each time we restart httpd. Also, we fix the backslash issue in the process. Hurray!