WSL2 Clock is out of sync with Windows - windows-subsystem-for-linux

WSL2 clock goes out of sync after resuming from sleep/hibernate.
A workaround was shared on GitHub: run sudo hwclock -s to resync the clock in WSL, but you have to do this every time you resume from sleep/hibernate.
UPDATE: THIS BUG IS FIXED, just check for updates! See the Clock Sync fix below.

In case anyone finds this via search and doesn't notice that there is actually a solution listed in the question, you can fix WSL clock drift with:
sudo hwclock -s
If you just need to do it occasionally, this is a fine solution. If you need to do it more frequently, consider @piouson's solution below.

Update
The fix is now in WSL2 Linux kernel 5.10.16.3 and newer! Note you may need to install WSL2 from the Windows Store to get the latest kernel version per this thread with Craig from Microsoft.
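If you want to confirm you already have the fixed kernel, a quick check looks like this (a sketch; wsl --update needs a reasonably recent WSL build):
# inside WSL: the kernel should report 5.10.16.3 or newer
uname -r
# from an elevated PowerShell on Windows: pull the newer kernel and restart WSL
wsl --update
wsl --shutdown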
Older Answer
sudo hwclock -s gets you kind of there, but for some reason doesn't get the exact time - I often find it's a minute or so in the future!
sudo ntpdate pool.ntp.org should get you the correct time.
But this is all because of a bug in the Linux kernel; a fix should be included in a Windows update at some point...
There are a number of hacks referenced in the GitHub issue which can work around this, mostly, but not always, in my experience...

Just restart WSL; it works fine for me:
wsl --shutdown
then
wsl
in PowerShell

UPDATE: as mentioned by drkvogel, the Clock Sync fix was released in WSL2 kernel version 5.10.16.3
OBSOLETE
At time of writing, this GitHub Issue was open for the bug.
The workaround I chose for my situation (single distro in WSL2) is to use Windows Task Scheduler to run hwclock in WSL whenever Windows resyncs hardware clock.
Windows: Open PowerShell as Administrator
schtasks /create /tn WSLClockSync /tr "wsl.exe sudo hwclock -s" /sc onevent /ec system /mo "*[System[Provider[@Name='Microsoft-Windows-Kernel-General'] and (EventID=1)]]"
Set-ScheduledTask WSLClockSync -Settings (New-ScheduledTaskSettingsSet -AllowStartIfOnBatteries)
WSL2: Run sudo visudo and add hwclock to sudoers to skip password prompt
# bottom of my file looks like this
...
...
#includedir /etc/sudoers.d
<username> ALL=(ALL) NOPASSWD:/usr/sbin/hwclock, /usr/bin/apt update, /usr/bin/apt upgrade
Results
The Event XPath above can be copied from the Windows Event Viewer's event-filtering dialog; use it as provided to let Task Scheduler auto-display the scheduled triggers.

Use cron to schedule sudo hwclock -s
As others have said, sudo hwclock -s syncs the clock,
but you will need to do this after every sleep/hibernate.
The solution is to add an hourly cron task to sync the clock.
Open the root crontab with sudo (it must run as root, since hwclock needs root privileges):
sudo crontab -e
and add this code with a new line after the task (it's a cron requirement):
PATH=/sbin:/bin:/usr/bin
@hourly hwclock -s
You must either set PATH, since the root crontab does not have it, or use absolute paths, e.g. /usr/sbin/hwclock.
cron troubleshooting:
To verify cron is working you may add a dummy task (don't forget to add a new line):
* * * * * date > /tmp/log.txt
If no file is created, verify cron is working: pgrep cron.
If no PID shows, start cron with: sudo service cron start.
To learn cron timing method: cron timing generator

Necro'ing this: As of May 2022, this issue persists to a degree.
There are two components.
First, Windows time sync needs to be decent to start with. It's not, out of the box, on machines that aren't domain-joined.
Change w32time to start automatically. In an Administrator cmd prompt (not PowerShell), run sc triggerinfo w32time start/networkon stop/networkoff. Verify with sc qtriggerinfo w32time. To get into cmd that way, you can start an Admin PowerShell and then just type cmd.
Make a few changes in regedit.
In Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\Config, set MaxPollInterval to hex c, decimal 12.
Check Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\Parameters\NtpServer. If it ends in 0x9 you are done. If it ends in 0x1 you need to adjust SpecialPollInterval in Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpClient to read 3600
Reboot, then from PowerShell run w32tm /query /status /verbose to verify that the w32time service did start. If it didn't, check the triggers again. If all else fails, set it to Automatic (Delayed Start) startup.
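For reference, here is a consolidated sketch of the steps above as commands for an elevated cmd prompt (registry paths and values are exactly those described above; only run the SpecialPollInterval line if NtpServer ends in 0x1):
rem service triggers
sc triggerinfo w32time start/networkon stop/networkoff
sc qtriggerinfo w32time
rem MaxPollInterval = decimal 12 (hex c)
reg add HKLM\SYSTEM\CurrentControlSet\Services\w32time\Config /v MaxPollInterval /t REG_DWORD /d 12 /f
rem check whether NtpServer ends in 0x9 or 0x1
reg query HKLM\SYSTEM\CurrentControlSet\Services\w32time\Parameters /v NtpServer
rem only if NtpServer ends in 0x1
reg add HKLM\SYSTEM\CurrentControlSet\Services\w32time\TimeProviders\NtpClient /v SpecialPollInterval /t REG_DWORD /d 3600 /f
rem after the reboot
w32tm /query /status /verbose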
Second, WSL2 needs to actually stay in sync. MS will likely release another kernel fix. In the meantime a scheduled task can bring it back into sync periodically:
schtasks /Create /TN WSL2TimeSync /TR "wsl -u root hwclock -s" /SC ONEVENT /EC System /MO "*[System[Provider[@Name='Microsoft-Windows-Kernel-Power'] and (EventID=107 or EventID=507) or Provider[@Name='Microsoft-Windows-Kernel-General'] and (EventID=1)]]" /F

This GitHub Issue was closed
You can also run the command below in a PowerShell terminal to sync it.
wsl.exe sudo hwclock -s

You can manually update the WSL2 kernel to 5.10.16 by following the method in this comment: #5650 (comment). I fixed the issue using this method.

I've added this to Windows Task Scheduler, set to run every 12 hours:
wsl.exe -d ubuntu -u root -- ntpdate time.windows.com
To install ntpdate:
sudo apt install ntpdate
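If you prefer the command line over the Task Scheduler UI, a task like that can be registered with schtasks, mirroring the other schtasks examples here (the task name is arbitrary and the distro name is assumed to match yours):
schtasks /Create /TN WSLNtpdate /TR "wsl.exe -d ubuntu -u root -- ntpdate time.windows.com" /SC HOURLY /MO 12 /F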

For me this issue seems to happen when the system goes to sleep, so I have registered a bash function to call whenever the clock goes out of sync. I did it by adding a function.sh file and sourcing it in ~/.bashrc.
function.sh:
YELLOW='\033[0;33m'
NC='\033[0m'
TIME_SERVER=ntp.ubuntu.com
# Sync wsl time
sync_date () {
echo -e "${RED}sudo ntpdate $TIME_SERVER ${NC}"
echo
sudo ntpdate $TIME_SERVER
}
~/.bashrc:
source ~/Linux/function.sh
Note that I have added a bit of color and some customizations (for TIME_SERVER, the Windows time server is another option).
You can then sync the time by running the sync_date command in the CLI.

Related

monitor bash script execution using monit

We have just started using monit for process monitoring and are pretty new to it. I have a bash script at /home/ubuntu/launch_example.sh that runs continuously. Is it possible to monitor this using monit? monit should start the script again if it terminates. What should the syntax be? I tried the syntax below, but not all commands are executed as the ubuntu user (the shell script calls some Python scripts).
check process launch_example
matching "launch_example"
start program = "/bin/bash -c '/home/ubuntu/launch_example.sh'"
as uid ubuntu and gid ubuntu
stop program = "/bin/bash -c '/home/ubuntu/launch_example.sh'"
as uid ubuntu and gid ubuntu
The simple answer is "no". Monit is just for monitoring and is not some kind of supervisor/process manager. So if you want to monitor your long running executable, you have to wrap it.
check process launch_example with pidfile /run/launch.pid
start program = "/bin/bash -c 'nohup /home/ubuntu/launch_example.sh &'"
as uid ubuntu and gid ubuntu
stop program = "/bin/bash -c 'kill $(cat /run/launch.pid)'"
as uid ubuntu and gid ubuntu
This quick'n'dirty way also needs an additional line in your launch_example.sh to write the pidfile (pidfile matching should always be preferred over string matching) - it could be just the first line after the shebang. It simply writes the current process ID to the pidfile. Nothing fancy here ;)
echo $$ > /run/launch.pid
In fact, it's not even hard to convert your script into a systemd unit. User binding, restarts, pidfile, and "start-on-boot" can then be managed through systemd (e.g. start program = "/usr/bin/systemctl start my_unit").
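A minimal sketch of such a unit, reusing the script path and user from the question (the file name and options here are illustrative assumptions):
# /etc/systemd/system/launch_example.service
[Unit]
Description=launch_example long-running script

[Service]
User=ubuntu
Group=ubuntu
ExecStart=/bin/bash /home/ubuntu/launch_example.sh
Restart=always

[Install]
WantedBy=multi-user.target
After a systemctl daemon-reload and systemctl enable --now launch_example, monit's start/stop programs can simply call systemctl as mentioned above.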

React Native Error: ENOSPC: System limit for number of file watchers reached

I have set up a new blank React Native app.
After installing a few node modules, I got this error.
Running application on PGN518.
internal/fs/watchers.js:173
throw error;
^
Error: ENOSPC: System limit for number of file watchers reached, watch '/home/badis/Desktop/react-native/albums/node_modules/.staging'
at FSWatcher.start (internal/fs/watchers.js:165:26)
at Object.watch (fs.js:1253:11)
at NodeWatcher.watchdir (/home/badis/Desktop/react-native/albums/node_modules/sane/src/node_watcher.js:175:20)
at NodeWatcher.<anonymous> (/home/badis/Desktop/react-native/albums/node_modules/sane/src/node_watcher.js:310:16)
at /home/badis/Desktop/react-native/albums/node_modules/graceful-fs/polyfills.js:285:20
at FSReqWrap.oncomplete (fs.js:154:5)
I know it's related to watchman not having enough watchers to watch all the file changes.
I want to know what's the best course of action to take here?
Should I ignore the node_modules folder by adding it to .watchmanconfig?
Linux uses the inotify package to observe filesystem events, individual files or directories.
Since React / Angular hot-reloads and recompiles files on save it needs to keep track of all project's files. Increasing the inotify watch limit should hide the warning messages.
You could try raising the limit:
# insert the new value into the system config
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
# check that the new value was applied
cat /proc/sys/fs/inotify/max_user_watches
# config variable name (not runnable)
fs.inotify.max_user_watches=524288
This error means that the number of files monitored by the system has reached the limit.
The result: the command you executed fails or throws a warning (for example, running react-native start in VS Code).
Solution:
Modify the number of system monitoring files
Ubuntu
sudo gedit /etc/sysctl.conf
Add a line at the bottom
fs.inotify.max_user_watches=524288
Then save and exit!
sudo sysctl -p
to apply the change.
Then it is solved!
You can fix it by increasing the number of inotify watchers.
If you are not interested in the technical details and only want to get Listen to work:
If you are running Debian, RedHat, or another similar Linux distribution, run the following in a terminal:
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
If you are running ArchLinux, run the following command instead
$ echo fs.inotify.max_user_watches=524288 | sudo tee /etc/sysctl.d/40-max-user-watches.conf && sudo sysctl --system
Then paste it in your terminal and press Enter to run it.
The Technical Details
Listen uses inotify by default on Linux to monitor directories for changes. It's not uncommon to encounter a system limit on the number of files you can monitor. For example, Ubuntu Lucid's (64bit) inotify limit is set to 8192.
You can get your current inotify file watch limit by executing:
$ cat /proc/sys/fs/inotify/max_user_watches
When this limit is not enough to monitor all files inside a directory, the limit must be increased for Listen to work properly.
You can set a new limit temporarily with:
$ sudo sysctl fs.inotify.max_user_watches=524288
$ sudo sysctl -p
If you'd like to make your limit permanent, use:
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p
You may also need to pay attention to the values of max_queued_events and max_user_instances if listen keeps on complaining.
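To check all three values at once (plain sysctl usage, nothing specific to Listen):
$ sysctl fs.inotify.max_queued_events fs.inotify.max_user_instances fs.inotify.max_user_watches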
From the official document:
"Visual Studio Code is unable to watch for file changes in this large workspace" (error ENOSPC)
When you see this notification, it indicates that the VS Code file watcher is running out of handles because the workspace is large and contains many files. The current limit can be viewed by running:
cat /proc/sys/fs/inotify/max_user_watches
The limit can be increased to its maximum by editing
/etc/sysctl.conf
and adding this line to the end of the file:
fs.inotify.max_user_watches=524288
The new value can then be loaded in by running
sudo sysctl -p
Note that Arch Linux works a little differently, See Increasing the amount of inotify watchers for details.
While 524,288 is the maximum number of files that can be watched, if you're in an environment that is particularly memory constrained, you may wish to lower the number. Each file watch takes up 540 bytes (32-bit) or ~1kB (64-bit), so assuming that all 524,288 watches are consumed, that results in an upper bound of around 256MB (32-bit) or 512MB (64-bit).
Another option
is to exclude specific workspace directories from the VS Code file watcher with the files.watcherExclude setting. The default for files.watcherExclude excludes node_modules and some folders under .git, but you can add other directories that you don't want VS Code to track.
"files.watcherExclude": {
"**/.git/objects/**": true,
"**/.git/subtree-cache/**": true,
"**/node_modules/*/**": true
}
delete react node_modules
rm -r node_modules
yarn or npm install
yarn start or npm start
if error occurs use this method again
First, you can run it with root privileges every time:
sudo npm start
Or you can delete the node_modules folder and run npm install to install it again.
Or you can apply the permanent solution:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
It happened to me with a node app I was developing on a Debian based distro. First, a simple restart solved it, but it happened again on another app.
Since it's related to the number of watchers that inotify uses to monitor files and look for changes in a directory, you have to set a higher limit:
I was able to solve it from the answer posted here
(thanks to him!)
So, I ran:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Read more about what’s happening at https://github.com/guard/listen/wiki/Increasing-the-amount-of-inotify-watchers#the-technical-details
Hope it helps!
Note that this question is a duplicate: see this answer on the original question.
A simple way that solved my problem was:
npm cache clear
The best practice today is:
npm cache verify
npm, or a process controlled by it, is watching too many files. Updating max_user_watches on the build node can fix it for good. For Debian, put the following in a terminal:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
If you want to know how to increase the number of inotify watchers, just click the link.
I use Ubuntu 20 Server, and I added the line below to the file /etc/sysctl.conf:
fs.inotify.max_user_watches=524288
Then save the file and run sudo sysctl -p.
After that everything works fine!
I solved this issue by using sudo
ie
sudo yarn start
or
sudo npm start
Using sudo to solve this issue forces things to work without applying any modifications to system settings. Using sudo to solve this kind of issue is never recommended, although it's a choice that has to be made by you; I hope you choose wisely.
Root cause
Most answers above talk about raising the limit, not about taking away the root cause, which is typically just a matter of redundant watches, usually of files in node_modules.
Webpack
The answer is in the webpack 5 docs:
watchOptions: { ignored: /node_modules/ }
Simply read here: https://webpack.js.org/configuration/watch/#watchoptionsignored
The docs even mention this as a "tip", quote:
If watching does not work for you, try out this option. This may help
issues with NFS and machines in VirtualBox, WSL, Containers, or
Docker. In those cases, use a polling interval and ignore large
folders like /node_modules/ to keep CPU usage minimal.
VS Code
VS Code or any code editor creates lots of file watches too. By default many of them are completely redundant. Read more about it here: https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc
Generally we don't need to increase the count of file watchers; that just leaves us with more watchers.
We need to remove the redundant watchers that have become zombies.
The issue is that we have many file watchers that are filling up our memory.
We just need to remove these file watchers (in the case of node):
killall node
React.js showed me the same error; I fixed it this way and hope it works in React Native too:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Now you can run npm start again.
npm start
Using the sysctl -p approach after setting fs.inotify.max_user_watches did not work for me (by the way, this setting was already set to a high value, likely from me trying to fix this issue a while back, using the commonly recommended workarounds above).
The best solution to the problem I found is here; below I share the steps I performed to solve it. In my case the issue was spotted while running Visual Studio Code, but solving it should be the same in other instances, like yours:
Use this script to identify which processes are requiring the most file watchers in your session.
You can then query the current max_user_watches value with sysctl fs.inotify.{max_queued_events,max_user_instances,max_user_watches} and then set it to a different value (a lower value may do it)
sudo sysctl -w fs.inotify.max_user_watches=16384
Or you can simply kill the process you found in (1) that consumes the most file watchers (in my case, baloo_file)
The above, however, will likely need to be done again when restarting the system: the process we identified as responsible for taking most of the file watchers (in my case, baloo_file) will do the same again on the next boot. So to permanently fix the issue, either disable or remove this service/package. I disabled it: balooctl disable.
Now run sudo code --user-data-dir and it should open VS Code with admin privileges this time. (By the way, when it does not, run sudo code --user-data-dir --verbose to see what the problem is; that's how I figured out it had to do with the file watcher limit.)
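If you just want a rough, script-free view of which processes hold inotify descriptors, something like this works on a standard /proc layout (run with sudo to see other users' processes; it counts inotify instances per process, not individual watches, so it is only a hint, and the PID in the last line is an example):
# count inotify file descriptors per PID, biggest consumers first
find /proc/[0-9]*/fd -lname anon_inode:inotify 2>/dev/null | cut -d/ -f3 | sort | uniq -c | sort -rn | head
# then look up the command name of a suspicious PID
ps -p 12345 -o comm=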
Update:
You may configure VS Code file watcher exclusion patterns as described here. This may prove to be the ultimate solution; I am just not sure you will always know beforehand which files you are NOT interested in watching.
Easy Solution
I found that a previous solution worked well in my case: I removed node_modules and cleared the yarn/npm cache.
Long Tail Solution
If you want a long-tail solution, e.g. if you are often caught by this error, you can increase the number of allowed watchers (depending on your available memory).
To figure out the current used amount of watchers, instead of only guessing, you can use this handy bash-script:
https://github.com/fatso83/dotfiles/blob/master/utils/scripts/inotify-consumers
I suggest setting max_user_watches temporarily to a high value:
sudo sysctl fs.inotify.max_user_watches=95524288
and then running the script.
How to calculate how much you can use
Each watcher needs
540 bytes (32-bit system), or
1 kB (roughly double, on a 64-bit OS)
So if you allow 512 MB to be used (on 64-bit), you set 524288 as the value.
The other way around: take the amount of memory (in MB) you want to allow and multiply it by 1024.
Example:
512 * 1024 = 524288
1024 * 1024 = 1048576
The script shows you the exact number of inotify consumers currently in use, so you get a better idea of how much you should increase the limit.
If you are running your project in Docker, you should do the echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf and all other commands in the host machine, since the container will inherit that setting automatically (and doing it directly inside it will not work).
Late answer, and there are many good answers already.
In case you want a simple script to check if the maximum file watches is big enough, and if not, increase the limit, here it is:
#!/usr/bin/env bash
let current_watches=`sysctl -n fs.inotify.max_user_watches`
if (( current_watches < 80000 ))
then
echo "Current max_user_watches ${current_watches} is less than 80000."
else
echo "Current max_user_watches ${current_watches} is already equal to or greater than 80000."
exit 0
fi
if sudo sysctl -w fs.inotify.max_user_watches=80000 && sudo sysctl -p && echo fs.inotify.max_user_watches=80000 | sudo tee /etc/sysctl.d/10-user-watches.conf
then
echo "max_user_watches changed to 80000."
else
echo "Could not change max_user_watches."
exit 1
fi
The script increases the limit to 80000, but feel free to set a limit that you want.
As already pointed out by @snishalaka, you can increase the number of inotify watchers.
However, I think the default number is high enough and is only reached when processes are not cleaned up properly. Hence, I simply restarted my computer as proposed on a related github issue and the error message was gone.
Another simple and good solution is just to add this to jest configuration:
watchPathIgnorePatterns: ["<rootDir>/node_modules/", "<rootDir>/.git/"]
This ignores the specified directories to reduce the files being scanned
In my case in Angular 13, I added in tsconfig.spec.json
"exclude": [
"node_modules/",
".git/"
]
Thanks @Antimatter, that gave me the trick.
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
Run this command in the project terminal, then run npm run dev again.
Please refer to this link [1]. Visual Studio Code gives a brief explanation of this error message. I also encountered the same error. Adding the parameter below to the relevant file (/etc/sysctl.conf) will fix this issue.
fs.inotify.max_user_watches=524288
[1] https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc
While almost everyone suggests increasing the number of watchers, I can't agree that it is a solution.
In my case I wanted to disable the watcher completely, because of the tests running on CI using a vue-cli plugin which starts webpack-dev-server for each test.
The problem was: when a few builds run simultaneously, they would fail because the watcher limit is reached.
First things first, I tried adding the following to vue.config.js:
module.exports = {
devServer: {
hot: false,
liveReload: false
}
}
Ref.: https://github.com/vuejs/vue-cli/issues/4368#issuecomment-515532738
And it worked locally, but not on CI (apparently it stopped working locally the next day as well, for some unclear reason).
After investigating the webpack-dev-server documentation, I found this:
https://webpack.js.org/configuration/watch/#watch
And then this:
https://github.com/vuejs/vue-cli/issues/2725#issuecomment-646777425
Long story short, this is what eventually solved the problem:
vue.config.js
module.exports = {
publicPath: process.env.PUBLIC_PATH,
devServer: {
watchOptions: {
ignored: process.env.CI ? "./": null,
},
}
}
Vue version 2.6.14
If you are working with the VS Code editor (or any editor), this error can be due to the large number of files in the project. node_modules and build don't need to be watched, so remove them from the list. This can all be done from the VS Code settings.
You have to filter out the unnecessary folders in the file sidebar:
Go to Code > Preferences > Settings.
In the settings search box, search for the keyword "files: exclude".
Add the patterns:
**/node_modules
**/build
That's it. (The settings.json equivalent is shown below.)
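If you prefer editing settings.json directly, the equivalent patterns look roughly like this (note that files.exclude hides folders from the explorer and search; the files.watcherExclude setting shown in an earlier answer targets the file watcher specifically):
"files.exclude": {
    "**/node_modules": true,
    "**/build": true
}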
Try this; I was facing it for a very long time, but in the end it was solved by this:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
The most important step after that is to restart your system.
2 fixes if you've already added: fs.inotify.max_user_watches=524288
Reboot the machine, things will work again
Rename the folder that is causing the issue (for me node_modules) to an arbitrary name (node_modilesa) and then rename it right back. This will remove the watches that Linux had put on those folders, allowing you to code as normal again. For example:
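A minimal sketch of that rename trick (the temporary name is arbitrary):
mv node_modules node_modules_tmp
mv node_modules_tmp node_modules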
I encountered this issue on a Linux Mint distro. It appeared to happen when there were so many folders, subfolders, and files added to the /public folder in my app.
I applied this fix and it worked well...
$ echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
change directory into the /etc folder:
cd /etc
then run this:
sudo sysctl -p
You may have to close your terminal and npm start again to get it to work.
If this fails, I recommend installing react-scripts globally and running your application directly with that.
$ npm i -g --save react-scripts
Then, instead of npm start, run react-scripts start to run your application.
I tried increasing the number as suggested, but it didn't work.
I saw that when I logged in to my VM, it displayed "restart required".
I rebooted the VM and it worked:
sudo reboot
It is easy to fix this:
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
and run your project.
If fs.inotify.max_user_watches=524288 is already in your /etc/sysctl.conf,
run the same command (echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf) and run your project.
For vs code, see detailed instructions here:
https://code.visualstudio.com/docs/setup/linux#_visual-studio-code-is-unable-to-watch-for-file-changes-in-this-large-workspace-error-enospc

Running "screen" without additional permissions on WSL

I'm trying to run the "screen" utility on Windows Subsystem for Linux on Windows 10 (Version 1703, OS Build 15063.483).
It seems that I need additional permissions to run it (it works if I "sudo" it), but I don't understand why that is necessary.
What is the recommended way to set this up?
Is there some reason why this isn't the default set up?
$ screen
Cannot make directory '/var/run/screen': Permission denied
From an answer on SuperUser I discovered that you have to run
sudo /etc/init.d/screen-cleanup start
Then screen works fine for me.
EDIT: after installing Ubuntu 20.04 the problem went away (*).
As Krease pointed out, the best solution is the one described in this SuperUser post.
Add the following to your .bashrc:
export SCREENDIR=$HOME/.screen
[ -d $SCREENDIR ] || mkdir -p -m 700 $SCREENDIR
See also issue 1245 on github.
--
(*) now this warning comes up, but seems harmless:
sleep: cannot read realtime clock: Invalid argument
sudo screen # which creates the dir /var/run/screen
sudo chmod 777 /var/run/screen # so that non-root users can create their own screen dir in this dir.

"docker run -dti" with a dumb terminal

updated: added the missing docker attach.
Hi, I am trying to run a docker container with -dti, but I cannot access it with a terminal set to dumb. Is there a way to change this? (TERM is currently set to xterm, even though my ssh client is dumb.)
example:
create the container
docker run -dti --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs
cd /my-folder
npm install -g gulp
The last command's output always contains ASCII escape chars to move the cursor.
I have tried "export TERM=dumb" inside the running container, but it does not work.
Is there a way to "run" this using the dumb terminal?
I am running this from a script on another computer, via (dumb) ssh.
The -t flag is what sets this (see https://docs.docker.com/engine/reference/run/#env-environment-variables); however, removing it affects the command prompt (the prompt is not shown).
Possible solution 1: remove the -t and keep the -i. To see whether a command has completed, echo out a known token (ENDENDEND), i.e.:
docker run -di --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs;echo ENDENDEND
cd /my-folder;echo ENDENDEND
npm install -g gulp;echo ENDENDEND
Not pretty, but it works (there are no ASCII escape sequences in the results).
Possible solution 2: use the journal. Docker can log out to the Linux journal, and this can be gathered as commands are executed in the container. (I have yet to fully test this one out; however, the log seems to be a nicer record of what happened.)
update:
Yep -t is the problem.
However if you want to see the entire process when running a command, maybe this way is better:
docker run -di --name test -v /my-folder alpine /bin/ash
docker exec -it test /bin/ash
Finally, you need to kill the container after all jobs have finished.
docker run -d means "run the container in the background and print the container ID",
not "start the container as a daemon".
I was hitting this issue on macOS running Docker; I had to do two things to stop the terminal/ASCII/ANSI escape sequences:
remove the "t" option from the docker run command (from docker run -it ... to docker run -i ...)
ensure bash or sh is forced when running the command from a script file on macOS, not the default zsh
Also
the escape sequences were not always visible on the terminal
even so, they still usually caused content corruption, even with sed brought to bear
they were always shown in my editor

Proper way to automatically start and expose ssh when running my app container

I have containers with Python apps and I need them to automatically start and expose ssh when running them. I know it's against Docker's best practices, but right now I don't have any other solution. I'd be interested to know the best way to automatically run an additional service in a docker container anyway.
Since Docker will only start one process, installing sshd isn't enough. There are apparently multiple options to deal with it:
use a process manager like Monit or Supervisor
use the ENTRYPOINT option
append a command (service sshd start, for instance) at the end of /etc/bash.bashrc (see this answer)
Option 1 seems overkill to me. Also I suppose I'll have to run the container with a cmd calling the process manager instead of bash or my python app: not exactly what I want.
I don't know how to use Option 2 for such a case. Should I write a custom script that starts sshd and then runs the provided command, if any? What should this script look like?
Option 3 is very straightforward but quite dirty. Also it won't work if I run the container with another command than /bin/bash.
What's the best solution and how to set it up ?
You mention that option 1 seems like overkill. Why is it overkill? Supervisor is very simple to configure and will basically do what you want.
First, write supervisor config files that starts your python app and sshd:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:pythonapp]
command=/path/to/python myapp.py -x args etc etc
Call that file supervisord.conf and commit it somewhere in your repo. In your Dockerfile, copy that file to the container as one of the container build steps, expose the ports for SSH and your app (if needed) and set the CMD to start supervisord:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
This is clean and easy to understand. It's how I run multiple processes in a container when needed. It is even suggested in the Docker docs as a nice solution.
If you don't want to use a process manager, you can wrap your actual container command inside a shell script that runs sudo service ssh start and then executes your actual command.
sudo service ssh start
python myapp.py -x args blah blah
This will start up ssh as a daemon, and then your python app will start up after.
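A minimal sketch of that wrapper, assuming a file name of start.sh and reusing the app command from the Supervisor example above (both are illustrative assumptions, not from the question):
#!/bin/bash
# start.sh - start sshd in the background, then run the app in the foreground
service ssh start   # no sudo needed when the container runs as root
exec /path/to/python myapp.py -x args etc etc
In the Dockerfile you would then COPY start.sh in, make it executable, and set CMD ["/start.sh"]. Using exec keeps the Python app as the container's main process, so the container stops when the app exits.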
Yes, we can configure Supervisord for multiple processes in a container. If you want to use openssh-server, we can configure Supervisor like below:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
in the supervisord.conf file.
We can add the supervisord.conf file to the Docker image and update the Dockerfile:
RUN apt update && apt install -y supervisor openssh-server
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
Reference link: Gotechnies