Is there a way to force a Samba process to close a given file without killing it?
Samba opens a process for each client connection, and sometimes I see it hold open files far longer than needed. Usually I just kill the process, and the (Windows) client will reopen it the next time it accesses the share; but sometimes it's actively reading another file for a long time, and I'd like to just 'kill' one file, not the whole connection.
edit: I've tried 'net rpc file close <fileid>', but it doesn't seem to work. Does anybody know why?
edit: this is the best mention I've found of something similar. It seems to be a problem in the Win32 client, something that Microsoft servers have a workaround for but Samba doesn't. I wish the net rpc file close <fileid> command worked; I'll keep trying to find out why it doesn't. I'm accepting LuckyLindy's answer, even though it didn't solve the problem, because it's the only useful procedure in this case.
This happens all the time on our systems, particularly when connecting to Samba from a Win98 machine. We follow these steps to solve it (which are probably similar to yours):
See which computer is using the file (e.g. lsof | grep -i <file_name>)
Try to open that file from the offending computer, or see if a process is hiding in Task Manager that we can close
If no luck, have the user exit any important network programs
Kill the user's Samba process from Linux (e.g. kill -9 <pid>; example commands are sketched below)
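For illustration, steps 1 and 4 might look like this on the command line (the file name and PID are made up; smbstatus is Samba's own status tool and helps map files to the right process):
lsof | grep -i report.xlsx    # step 1: which process holds the file?
smbstatus -L                  # cross-check Samba's own list of locked files
kill -9 4711                  # step 4, last resort: drops that client's whole connection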
I wish there was a better way!
I am creating a new answer, since my first answer really just contained more questions and was not a whole lot of help.
After doing a bit of searching, I have not been able to find any current open bugs for the latest version of Samba. Please check out the Samba Bug Report website and create a new bug; this is the simplest way to get someone to suggest ideas on how to fix it, and to have developers look at the issue. LuckyLindy left a comment on my previous answer saying that this is the way it has been for 5 years now. Well, the project is open source, and the best way to fix something that is wrong is by reporting it and/or providing patches.
I have also found one mailing list entry, Samba Open files: they suggest adding posix locking = no to the configuration file. As long as you don't also have the same files exported over NFS, not taking POSIX locks should be okay; that is, assuming the file being held open is locked in the first place.
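For reference, a minimal sketch of what that change would look like in smb.conf (set it globally or per share):
[global]
# stop mapping SMB locks onto POSIX fcntl() locks;
# only safe if the same files are not also exported over NFS
posix locking = no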
If you wanted to, you could write a program that uses ptrace to attach to the Samba process and then walk through, unlock, and close all of its files. However, be aware that this might leave Samba in an unknown state, which can be more dangerous.
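As a rough sketch of the idea (gdb attaches via ptrace; the PID and fd number here are hypothetical, and you'd first look them up under /proc), do not try this on a production server:
ls -l /proc/12345/fd | grep myfile               # find the fd number of the target file
gdb -p 12345 -batch -ex 'call (int)close(42)'    # close fd 42 inside the process, then detach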
The workaround that I have already mentioned is to periodically restart Samba. I know it is not a solution, but it might work temporarily.
This is probably answered here: How to close a file descriptor from another process in unix systems
At a guess, 'net rpc file close' probably doesn't work because the interprocess communication telling Samba to close the file winds up not being looked at until the file you want to close is done being read.
If there isn't an explicit option in Samba, it would be impossible to externally close an open file descriptor with standard Unix interfaces.
Generally speaking, you can't meddle with a process's file descriptors from the outside. Yet as root you can of course do it, as seen in this Phrack article from 1997: http://www.phrack.org/issues.html?issue=51&id=5#article - I wouldn't recommend doing that on a production system though...
The better question in this case would be: why? Why do you want to close the file early? What purpose does closing it ultimately serve? What are you attempting to accomplish?
Samba provides commands for viewing open files and closing them.
To list all open files:
net rpc file -U ADadmin%password
Replace ADadmin and password with the credentials of a Windows AD domain admin. This gives you a file id, the username of whoever has it open, the lock status, and the filename. You'll frequently want to filter the results by piping them through grep.
Once you've found a file you want to close, copy its file id number and use this command:
net rpc file close fileid -U ADadmin%password
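For example, to find and then close an open spreadsheet (the file name, file id, and credentials here are made up):
net rpc file -U ADadmin%password | grep -i report.xlsx
net rpc file close 1234 -U ADadmin%password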
I needed to accomplish something like this, so that I could easily unmount devices I happened to be sharing. I wrote this quick bash script:
#!/bin/bash
# List open Samba files, skip the smbstatus header lines, keep only the
# lines matching the given path, take the PID column, de-duplicate,
# and drop any empty lines.
PIDS_TO_CLOSE=$(smbstatus -L | tail -n +4 | grep "$1" | cut -d' ' -f1 | sort -u | sed '/^$/d')
for PID in $PIDS_TO_CLOSE; do
    kill $PID
done
It takes a single argument, the path to close:
smbclose /media/drive
Any path that matches that argument (by grep) is closed, so you should be pretty specific with it. (Only files open through Samba are affected.) Obviously, you need root to close files opened by other users, but it works fine for files you have open yourself. Note that, as with any other forced closing of a file, data corruption can occur. As long as the files are inactive, though, it should be fine.
It's pretty ugly, but for my use-case (closing whole mount points) it works well enough.
So I'm writing a GRETL script where the user enters their operating system (Windows/Linux), their path to a gretl workdir, and the .gdt file to open (saved from a previous exercise).
This passes along string variables. One such variable is gdt_file, which before opening should contain something like /path/to/file/file.gdt.
Now, reading GRETL documentation, the open command will by default look for file.gdt inside the $workdir.
Now, what I want to do is open gdt_file, but of course that doesn't work, because it looks for gdt_file.gdt inside $workdir instead of opening /path/to/file/file.gdt.
I've played around with it a bit, but I'm unable to find a workaround. I don't know if this is even possible; the documentation isn't very clear on it.
Thank you for your time.
Here's the thread with the reply from the Gretl team, in case anyone is wondering: https://sourceforge.net/p/gretl/bugs/247/
Basically, use "@variable" string substitution, as described in the gretl guide.
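If I'm reading the guide right, that means the open line becomes simply:
open @gdt_file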
My goal is to write to a file, whenever the user launches an application such as Firefox, and timestamp the event.
The tricky part is having to do this from the kernel (or a module loaded into the kernel).
From the research I've done so far (sources listed below), the execve system call seemed the most viable: it has the filename of the process it is handling, which seemed like gold at the time. But I quickly learned that it wasn't as useful as I thought, since this system call isn't limited to user-initiated operations.
So then I thought of using ps -ef, as it lists all the currently running processes, and I would just have to filter out which ones were applications opened by the user.
But the issue with that method is that I would have to poll every X seconds, so it has the potential to miss something if the user launched and closed an application within the window between two calls to ps -ef.
I've also realized that writing to a file would be a challenge as well, since you don't have access to the standard library from the kernel. So my guess for that part would be to make use of /proc somehow, to let the user actually access the information that I'm trying to log.
Basically I'm running out of leads and I'd greatly appreciate it if anyone could point me in the right direction.
Thanks.
Sources:
http://tldp.org/LDP/lkmpg/2.6/html/x978.html (not very recent)
https://0xax.gitbooks.io/linux-insides/content/SysCall/syscall-4.html
First, writing to a file, or reading a real file, from the kernel is a bad idea, and it is not done in the kernel. There are of course VFS files, like /sys/fs or /proc, but these are a special case where it is allowed.
See this article in Linux Journal,
"Driving Me Nuts - Things You Never Should Do in the Kernel" by Greg Kroach-Hrtman
http://www.linuxjournal.com/article/8110
Every new process that is created in Linux adds an entry under /proc, as /proc/pidNum, where pidNum is the process ID of the new process.
You can find out the name of the new application that was invoked simply by
cat /proc/pidNum/cmdline
So for example, if your crond daemon has pid 1336, then
$cat /proc/1336/cmdline
will give
cron
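One caveat: the arguments in cmdline are separated by NUL bytes, so for multi-argument command lines you may want to translate them to spaces first, e.g.:
tr '\0' ' ' < /proc/1336/cmdline; echo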
And there are ways to monitor entries being added to a folder in Linux, though note that inotify does not generate events for /proc; to catch process creation reliably you would need something like the kernel's proc connector (which is what tools such as forkstat use).
I've got a zip file of 1.6 GB and it takes forever to extract it on a server. I left it running all night long, and when I woke up it wasn't finished. There is no way to keep track of how much time is left on extracting a file, or what percentage is done, so I'm not sure the whole thing is working properly. Is there a way to extract that file using the File Manager in cPanel, so that it can keep running while my PC is off, and maybe notify me by email when it's done? I basically need to copy a webshop from the live server to the developers' server, and I'm just losing too much time on this. So if anyone has a better idea of how to extract it, please feel free to suggest it.
P.S. Deleting the files that did extract takes forever too.
P.P.S. I'm a Linux sysadmin.
If it's all about copying files from one server to another - why not just use rsync and avoid archiving?
I mean, if extraction is a pain - remove it from the equation :)
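For example, run from the live server (the hostnames and paths here are hypothetical):
# -a preserves permissions and timestamps, -z compresses in transit
rsync -az --progress /var/www/webshop/ devuser@devserver:/var/www/webshop/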
It is not a good idea to use the cPanel File Manager for this task, as the server will probably kill the extraction process if it takes too long.
The best way to go about this would be via SSH, while logged in as root. If you need to switch off your computer, you should run it in screen.
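A minimal sketch (the archive name and target directory are made up):
screen -S extract                        # start a named screen session
unzip -q webshop.zip -d /home/dev/site/  # -q keeps the output from slowing things down
# detach with Ctrl-A D and log out; the job keeps running. Reattach later with:
screen -r extract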
You can also use unzipper.php, which you can get from GitHub.
It will require you to upload your zip file and unzipper.php too. Then run www.yourdomain.com/unzipper.php.
I'm writing a command line application for Mac in Objective-C.
At the start of the application, I want to check whether another instance of the same application is already running. If it is, then I should either wait for it to finish, exit the current instance, or quit the other instance, etc.
Is there any way of doing this?
The standard Unix solution for this is to create a "run file". When you start up, you try to create that file and write your pid to it if it doesn't exist; if it does exist, read the pid out of it, and if there's a running program with that pid and your process name, wait/exit/whatever.
The question is, where do you put that file, and what do you call it?
Well, first, you have to decide what exactly "already running" means. Obviously not "anywhere in the world", but it could be anything from "anywhere on the current machine" to "in the current desktop session". (For example, if User A starts your program, pauses it, then User B comes along and takes over the computer via Fast User Switching, should she be able to run the program, or not?)
For pretty much any reasonable answer to that question, there's an obvious pathname pattern. For example, on a Mac, /tmp is shared system-wide, while $TMPDIR is specific to a given session, so, e.g., /tmp/${ARGV[0]}.pid is a good way to say "only one copy on the machine, period", while ${TMPDIR}/${ARGV[0]}.pid is a good way to say "only one copy per session".
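Here is the pattern as a shell sketch (an Objective-C version would do the same checks with NSFileManager and kill(pid, 0); note there is an inherent race between the check and the write):
PIDFILE="${TMPDIR:-/tmp}/$(basename "$0").pid"
if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
    echo "another instance is already running" >&2
    exit 1
fi
echo $$ > "$PIDFILE"            # record our own pid
trap 'rm -f "$PIDFILE"' EXIT    # clean up when we exit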
A simple but common way to do this is to check the process list for the name of your executable:
ps -A | grep <your executable name>
Thank you @abarnert.
This is how I have implemented it for now: at the start of main(), I check whether a file named .lock exists in the binary's own directory (I am considering moving it to /tmp). If it does, the application exits.
If not, it creates the file.
At the end of the application, the .lock file is removed.
I haven't yet written the pid to that file, but I will once exiting the previous instance is required (as of yet I don't need it, but I may in the future).
I think PID can be retrieved using
int myPID=[[NSProcessInfo processInfo] processIdentifier];
The program will be invoked by a custom scheduler which is running as a root daemon. So it would be run as root.
Seeing the answers, I would assume that there is no direct method of solving the problem.
I just accidentally pasted a $200 SSL certificate into the private key file and saved it in vi. The private key is now lost. I know I yanked the existing data before replacing it and saving. Is it possible to retrieve this data somehow? I think not, but I figured I'd ask.
If you haven't quit vi, you can just 'p'.. no?
If your vi session is still running, and you haven't written your file yet, just do [esc]:q! and you should be back to your original file.
Or just hit p to paste the stuff you yanked previously.
You might have an id_rsa~ file hanging around. If so, that is your backup file.
It sounds like you've already written your file, so you are probably out of luck. Can you generate a new keypair and ask your cert vendor to re-issue the cert?
In the future, you might want to look into setting the backup option in vim. This used to be a default setting in Linux distributions back in the day, but it definitely isn't the default on my Mac now.
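For example, in your ~/.vimrc (the backup directory is just a suggestion, and it must already exist):
set backup                    " keep a backup file with a ~ suffix
set backupdir=~/.vim/backup   " store backups here instead of next to the file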
If you yanked the data before you overwrote it, it probably should still be accessible using registers (:help registers):
:registers
will show you the contents of all registers.
If you find the lost text, it can be pasted back using the number displayed at the beginning of its line, e.g. by issuing "3p in normal mode.
UPDATE: The question was about vi, not vim, right? Then the :registers command might not exist; I think the numbered yank registers 0-9 are a vim extension.
I don't suppose you have backups set up, do you (doc)? If not, can't you just do u?