My file system went read-only. After reading this answer, I ran dmesg | grep "EXT4-fs error" to check whether I had any issues with the filesystem or journaling layer itself.
It returned the following line many times:
[68241.757233] EXT4-fs error (device vda): htree_dirblock_to_tree:892: inode #533953: block 2108070: comm updatedb.mlocat: bad entry in directory: rec_len is smaller than minimal - offset=0(0), inode=0, rec_len=0, name_len=0
What should I do?
I rebooted Ubuntu; it asked whether I wanted to fix some problems, I said yes, and it fixed them.
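For anyone hitting the same error: the dmesg line names the inode it is complaining about, and debugfs (from e2fsprogs) can map that inode back to a pathname. A minimal sketch, using the device and inode from the message above (run it from a rescue environment, or with the filesystem read-only):

$ sudo debugfs -R 'ncheck 533953' /dev/vda

The repair Ubuntu offered at boot is essentially fsck.ext4 being run for you before the filesystem is mounted.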
I am trying to debug my program on Arch Linux on an embedded i.MX8 board. The program terminates abruptly right after starting up, and I am trying to find the core dump to examine. However, I could not find a core dump anywhere (I looked in /var/lib/systemd/coredump).
I tried to follow this link: https://wiki.archlinux.org/title/core%20dump. However, the page only describes how to disable core dumps, so I checked whether core dumps are disabled somewhere on my board.
Using systemd
I could not find custom.conf in /etc/systemd/coredump.conf.d/; in fact, I could not find anything there at all. So I guess this is not how core dumps were disabled.
Using sysctl
/etc/sysctl.d/50-coredump.conf is not found.
Using PAM limits
/etc/security/limits.conf exists, but the whole file is commented out (#), so no limits are set there.
Using ulimit
I tried running ulimit -c unlimited to remove the limit.
However, even after checking all of these files, there is still no core dump file in /var/lib/systemd/coredump, so I wonder what else I could do.
I also tried coredumpctl list, but it does not seem to be installed on the board, so I could not use coredumpctl to find the dump file.
So I wonder whether there are any settings I need to change for core dumps to be written.
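For reference, a quick way to check the relevant settings in one go (a sketch; kernel.core_pattern is the kernel setting that decides where core dumps go):

$ ulimit -c                            # 0 means core dumps are disabled in this shell
$ cat /proc/sys/kernel/core_pattern    # where the kernel writes core dumps
$ sysctl kernel.core_pattern           # the same setting, read via sysctl

On a machine where systemd-coredump is active, core_pattern is a pipe line starting with |/usr/lib/systemd/systemd-coredump; if it is just a plain name like core, nothing will ever land in /var/lib/systemd/coredump no matter what the other settings say.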
Regards
I frequently get the following Dropbox error. The fix the error message proposes does work, but I'm trying to figure out what it's doing to my system, and whether there is a root cause at play.
Unable to monitor entire Dropbox folder hierarchy. Please run
echo fs.inotify.max_user_watches=100000 | sudo tee -a /etc/sysctl.conf; sudo sysctl -p
and restart Dropbox to fix the problem.
Note: I strongly encourage you to actually DO the steps and not just read them if you want to learn about Linux!
If I type apropos inotify in a shell to see which manpages are about "inotify", I get these results:
$ apropos inotify
inotify (7) - monitoring filesystem events
inotify_add_watch (2) - add a watch to an initialized inotify instance
inotify_init (2) - initialize an inotify instance
inotify_init1 (2) - initialize an inotify instance
inotify_rm_watch (2) - remove an existing watch from an inotify instance
upstart-file-bridge (8) - Bridge between Upstart and inotify
apropos finds manpages. apropos is your friend. Remember apropos.
The first one looks promising, so let's try opening it with man inotify. You should now see the documentation for inotify. It says:
NAME
inotify - monitoring filesystem events
DESCRIPTION
The inotify API provides a mechanism for monitoring filesystem events. Inotify can be used to monitor individual files, or to monitor directories. When a directory is monitored, inotify will return events for the directory itself, and for files inside the directory.
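If you want to see inotify in action before reading on, the inotifywait tool (from the separate inotify-tools package, which may not be installed by default) gives a quick demonstration:

$ inotifywait -m /tmp &    # -m keeps watching instead of exiting after the first event
$ touch /tmp/demo          # inotifywait reports a CREATE event for this file

This per-directory watching is exactly what Dropbox does across its whole folder hierarchy, which is why it needs so many watches.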
So now we've learned roughly what inotify does. Let's see if the manpage also has something useful to say about your error.
We can search within a manpage by typing /<term><enter>. So let's try /max_user_watches<enter>. That brings us to this section:
/proc/sys/fs/inotify/max_user_watches
This specifies an upper limit on the number of watches that can
be created per real user ID.
The /proc/sys/fs/inotify/max_user_watches file is the same as the fs.inotify.max_user_watches setting in /etc/sysctl.conf. They're just two different ways to access the same Linux kernel parameter.
You can press q to exit.
I can see what the current value is by using:
$ cat /proc/sys/fs/inotify/max_user_watches
524288
or:
$ sysctl fs.inotify.max_user_watches
fs.inotify.max_user_watches = 524288
They both use the same underlying value in the Linux kernel.
Note that this value is actually about five times larger than what Dropbox recommends! This is on my Ubuntu 15.10 system.
So now we have learned that:
inotify is a Linux subsystem for monitoring changes to files and directories.
We can set how many files or directories a user is allowed to watch simultaneously.
Dropbox gives the error "Unable to monitor entire Dropbox folder hierarchy".
From this information, it seems that Dropbox cannot watch enough files and directories for changes because fs.inotify.max_user_watches is too low.
What I recommend is:
Open /etc/sysctl.conf in any text editor as root. Make sure there are no fs.inotify.max_user_watches=100000 lines in it; if there are, remove them.
Reboot the system to restore the default value.
Check what the value of fs.inotify.max_user_watches is as described above.
Double that by running echo 'fs.inotify.max_user_watches=XXX' | sudo tee -a /etc/sysctl.conf && sudo sysctl -p /etc/sysctl.conf (note that sudo echo ... >> /etc/sysctl.conf would not work: the redirection is performed by your own, unprivileged shell, not by sudo; see the sketch after this list). You don't need to reboot after this change.
Hope this error is now fixed. Time will tell. If not, try doubling the value again.
If that doesn't fix it, there may be another problem, or maybe you just need to increase it even more. It depends a bit on what the default value is for your system.
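As a concrete sketch of the doubling step in the list above, writing to /etc/sysctl.conf (see also the note below about /etc/sysctl.d/):

$ current=$(cat /proc/sys/fs/inotify/max_user_watches)
$ echo "fs.inotify.max_user_watches=$((current * 2))" | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p /etc/sysctl.conf

tee runs as root here, so appending to the file works where a plain sudo echo ... >> /etc/sysctl.conf would be refused.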
Note: Newer versions of systemd no longer load the /etc/sysctl.conf file; they only load files from the /etc/sysctl.d/ directory. Files in /etc/sysctl.d/ are already supported by most Linux distros, so I recommend using one there for future-proofing.
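A sketch of that drop-in variant (the file name 90-inotify.conf is my own choice; any name ending in .conf in that directory works, and 1048576 is simply double my system's default of 524288):

$ echo "fs.inotify.max_user_watches=1048576" | sudo tee /etc/sysctl.d/90-inotify.conf
$ sudo sysctl --system    # reloads sysctl settings from all locations, including /etc/sysctl.d/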
If you're wondering "why is there a limit in the first place?", consider what would happen if a program watched a million files. Would that still work? What about a billion? 10 billion? 10 trillion?
At some point, your system will run out of resources and crash. See here for some "fun" ways to do that ;-)
Recently I changed the permissions of the file system and gave myself all the rights. I logged out of the system and I couldn't log back in. I got the error message
Could not update ICEauthority file /home/marundu/.ICEauthority
I did a live boot with a Fedora 17 disc and replaced my .ICEauthority file with the live user's version. That worked for a time, until I logged out again. Now the login progress screen is all that shows. I can log into command mode (Ctrl-Alt-F2), but I can't sudo; I get the error messages:
sudo: /usr/libexec/sudoers.so must be only writable by owner
sudo: fatal error, unable to load plugins
I just found a good link on Ask Ubuntu:
Ask Ubuntu: ICEauthority permissions problem
Some things to note:
I tried the obvious things like changing file permissions, but found my whole home directory was somehow owned by root. I believe this was due to a failed package update.
I used a recovery disk (a Knoppix ISO) for ease of use: better UI.
When mounting the bad home partition, I used the most common Linux filesystem type (ext4).
I used 'sudo mount -o rw -t ext4 /dev/sda1 /mnt' (note the options are rw, not r,w).
When changing ownership, I used the numeric user:group specification, since the recovery disk doesn't know the installed system's symbolic users and groups: 'sudo chown -R 1000:1000 /mnt/home/userdir'
I verified that /home/userdir had rwx for owner, r-x for group / other. This is noted as a valid set of permissions for ICEauthority; others can work. See the linked discussion.
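Pulled together, the recovery session looked roughly like this (the device name and user directory are examples from my setup; check yours with lsblk before mounting):

$ sudo mount -o rw -t ext4 /dev/sda1 /mnt
$ sudo chown -R 1000:1000 /mnt/home/userdir
$ sudo chmod 755 /mnt/home/userdir    # rwx owner, r-x group/other, as noted above
$ sudo umount /mnt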
Hope that helps someone...
I got the "Could not update ICEauthority file" error and found that my home partition was mounted read-only, so the error made sense.
The real question was what had caused the partition to become read-only. I ran dmesg | grep -i "read-only" and found that there were serious errors in the file system on my home partition, which the kernel had therefore remounted read-only during the boot process.
I then booted from a USB key (CDROM would do as well) and ran "sudo fsck /dev/sdXY" where /dev/sdXY is the partition containing my home directory. fsck corrected a number of file system errors on my home partition.
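Spelled out, with the usual precaution that the partition must not be mounted while it is being repaired (sdXY remains a placeholder):

$ sudo umount /dev/sdXY    # only needed if the live system auto-mounted it
$ sudo fsck -f /dev/sdXY   # -f forces a check even if the filesystem looks clean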
I then rebooted after removing the USB key/CD-ROM, and the problem went away.
Bottom line: Check if your home partition has file system errors. They might be the cause of this error. If so, run fsck from an external device on the partition containing your home directory.
I am trying to make an embedded Linux for a SuperH processor board. I am using the Buildroot 2012.2 toolchain with uClibc.
All compiles fine but when I try to run some of the BusyBox applets (for instance 'ls' or 'mount'), I get an error like this:
ls: : Unknown error 2
For 'ls' in particular, the error is printed once for every file in the folder.
Do you have any ideas what might be causing this? No one on the internet seems to have had the same problem, and I have been crawling through the configs for several days without luck.
I believe the error might be caused by a misconfigured uClibc, but that is just my guess.
Thanks.
EDIT:
I enabled several error message options in uClibc, and now I get a "No such file or directory" error instead.
I will answer my own question.
The first and most important problem was that I had over-optimized uClibc, so all reasonable error reporting had been compiled out. If you are reading this and have the same problem, switch the error messages on. They cost very little space and are very useful.
After getting human-readable error reports, I realized that the putchar function was disabled. Enabling it solved the problem.
To future generations I advise extreme caution about which features of uClibc you decide to disable, unless you want to spend several days hunting an unexpected bug.
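For anyone chasing the same problem: the relevant switches live in the uClibc configuration (reachable through make menuconfig). The symbol names below are from memory of a 0.9.x-era .config, so treat them as pointers rather than gospel:

UCLIBC_HAS_ERRNO_MESSAGES=y     # human-readable strerror() text instead of "Unknown error 2"
UCLIBC_HAS_SIGNUM_MESSAGES=y    # the same for signal names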
I've been trying to back up my Ubuntu 11.04 system with the following tar command:
sudo tar -cvpzf /media/TOSHIBA\ EXT/backup.tar.gz --exclude=/backup.tar.gz --exclude=/lost+found --exclude=/proc --exclude=/sys --exclude=/mnt --exclude=/media --exclude=/dev --exclude=/home/manuzhang/Music --exclude=/home/manuzhang/Videos --exclude=/home/manuzhang/Pictures --exclude=/home/.aMule /
every time I get this failure message:
tar: Exiting with failure status due to previous errors
tar exited while packaging the directory /sbin several times. Finally I excluded it, but then it exited in /root.
So what caused the problem?
Has anyone had similar experiences?
Many thanks!
The error "Exiting with failure status due to previous errors" means exactly that. There was an earlier problem which, while not fatal to the running of the program, is reason enough to exit with a failure code.
Given that you're backing up from the root level, this is almost certainly because a file is not able to be backed up for some reason.
Leave off the verbose flag -v, so the error messages aren't buried in the listing of every file, and you should be able to spot the actual problem.
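For example, a sketch based on the command in the question (excludes abbreviated), collecting the errors in a file to read afterwards:

$ sudo tar -cpzf /media/TOSHIBA\ EXT/backup.tar.gz \
      --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/mnt --exclude=/media / \
      2> /tmp/tar-errors.log
$ cat /tmp/tar-errors.log    # each line names a file tar could not read or stat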