I want to run a command to remove all files under the folder 'allfiles/'
root@admin:/home/admin/allfiles# find . -name '*' | xargs rm
and it clears all files under allfiles
I want to run the same command using the Laravel scheduler. I am using it this way:
protected function schedule(Schedule $schedule)
{
$schedule->exec('find . -name '*' | xargs rm /home/admin/millionfiles/')
->everyMinute();
}
But it does not work
You're missing a quote before the directory, so the string literal ends early at the quote around *. Use double quotes for the glob inside the single-quoted command instead of nesting single quotes.
Note also that the scheduled command doesn't run from inside the target directory, so give find the full path instead of .:
protected function schedule(Schedule $schedule)
{
    $schedule->exec('find /home/admin/millionfiles/ -type f | xargs rm')
        ->everyMinute();
}
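If your find supports the -delete action (GNU and BSD find both do), you can skip the pipe entirely; a minimal sketch with the same path:
$schedule->exec('find /home/admin/millionfiles/ -type f -delete')->everyMinute();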
Any recursive chown or chmod command on an s3fs mount takes a long time when you have a few directories (about 70), each with quite a few files.
Either of these commands is likely to take almost 24 hours. I have to do this or the Apache process cannot access these files/directories. The same command on a normal mount takes about 20 seconds.
Mounting with:
/storage -o noatime -o allow_other -o use_cache=/s3fscache -o default_acl=public-read-write
In /etc/fuse.conf:
user_allow_other
Using latest version: 1.78
Any thoughts on how to do this faster?
After a while, I found it better to parallelize the processes to speed things up. Example:
find /s3fsmount/path/to/somewhere -print | xargs --max-args=1 --max-procs=100 chmod 777
It is still slow, but nowhere near as slow as it was.
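If any of the paths contain spaces, a null-delimited variant (assuming GNU find and xargs) is safer:
find /s3fsmount/path/to/somewhere -print0 | xargs -0 --max-args=1 --max-procs=100 chmod 777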
Using the AWS CLI may help.
What I do:
use the AWS CLI to get the full file list of the target directory;
write a script that runs chmod 777 on each file in parallel (backgrounded with > /dev/null 2>&1 &).
Watching ps -ef, I found that the chmod jobs then finished almost immediately.
My PHP code:
<?php
// List every object under the target prefix with the AWS CLI,
// then launch one background chmod per file so they run in parallel.
$s3_dir   = 'path/to/target/';
$s3fs_dir = '/mnt/s3-drive/' . $s3_dir;

echo 'Fetching file list...' . "\n\n";
usleep(1500000); // sleep() takes whole seconds; usleep() allows a 1.5 s pause

$cmd = 'aws s3 ls --recursive s3://<bucket_name>/' . $s3_dir;
exec($cmd, $output, $return);

$num = 0;
if (is_array($output)) {
    foreach ($output as $file_str) {
        // Throttle: after every 100 background jobs, pause so we
        // don't flood the system with processes.
        if ($num > 100) {
            sleep(4);
            $num = 0;
        }
        // Each line looks like "<date> <time> <size> <key>"; capture
        // the part of the key that follows the target prefix.
        $n = sscanf($file_str, "%s\t%s\t%s\t" . $s3_dir . "%s", $none1, $none2, $none3, $file);
        // escapeshellarg() protects against spaces and shell metacharacters in the key.
        $cmd = 'chmod 777 ' . escapeshellarg($s3fs_dir . $file) . ' > /dev/null 2>&1 &';
        echo $cmd . "\n";
        exec($cmd);
        $num += 1;
    }
}
?>
To change the owner:
find /s3fsmount/path/to/somewhere -print | xargs --max-args=1 --max-procs=100 sudo chown user:user
(The -R flag isn't needed, since find already visits every entry.) It works for me.
I am trying to grep for the string .dat in all my *.mk files using the command below. I am wondering if this is right, because it doesn't give me any output.
find . -name "*.mk" | grep *.dat
No it's not right, there are a couple of issues: 1) you seem to be supplying grep with a glob pattern, 2) the pattern is not quoted and will be expanded by the shell before grep ever sees it, 3) you're grep'ing through filenames, not file contents.
To address 1), use a basic regular expression; the equivalent here is \.dat (escape the dot, since an unescaped . matches any character). 2) is a matter of using single or double quotes. 3) find returns filenames, so if you want grep to search each of those files, either use find's -exec flag or use xargs. All these taken together:
find . -name '*.mk' | xargs grep '\.dat'
Use Find's Exec Flag
You don't really need a pipeline here, and can bypass the need for xargs. Use the following invocation to perform a fixed-string search (which is generally faster than a regex match) on each file found by the standard find command:
find . -name '*.mk' -exec grep -F .dat {} \;
If your find supports the + terminator (it's in POSIX, and GNU find has it), you can use this syntax instead to avoid the process overhead of one grep call per file:
find . -name '*.mk' -exec grep -F .dat {} +
Use xargs:
find . -name "*.mk"| xargs grep '\.dat'
Using the -exec option of the find command this way:
find . -name "*.mk" -exec grep '\.dat' {} \;
Hi friends, I have a link "mentortask" in the folder /etc/apache2/sites-enabled/. When I try to delete that file using the following commands, they fail with "cannot remove: Permission denied" before I can reload the Apache server:
rm /etc/apache2/sites-enabled/mentortask.com
unlink /etc/apache2/sites-enabled/mentortask.com
How can I solve this problem?
sudo rm /etc/apache2/sites-enabled/mentortask.com
If you can't run sudo, then you need to petition the administrator to do it for you.
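On Debian and Ubuntu, the idiomatic way to remove a site from sites-enabled is a2dissite, which removes the symlink for you:
sudo a2dissite mentortask.com
sudo systemctl reload apache2
(systemctl assumes a systemd-based release; on older systems use sudo service apache2 reload.)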
To delete only the symlinks under a directory, this simpler pipeline avoids parsing ls output (which breaks on filenames with spaces):
find ./{DirectoryName} -type l -print0 | xargs -0 rm -f
I am trying to write a Bourne shell script that takes a string as a parameter and deletes all files in the directory whose names contain that string.
I was thinking about using find and executing rm on everything it matches, but I've only just started with Bourne shell:
find . -name $1 'core' -exec rm -i* {}\;
Any help would be much appreciated. Thanks.
Why not just this:
#!/bin/sh
rm -- *"$1"*
Removes files in the current directory whose names contain your argument (quoting $1 handles arguments with spaces; -- guards against names that start with a dash).
remove.sh script:
#!/bin/sh
find . -type f -iname "*$1*" -exec rm -f {} \;
usage:
$ ./remove.sh "main"
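While testing the pattern, a more cautious variant prompts before each deletion (rm -i is standard POSIX):
#!/bin/sh
# Usage: ./remove.sh PATTERN
find . -type f -iname "*$1*" -exec rm -i {} \;
Drop the -i once you're confident it matches only what you expect.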
I am trying to write a simple command that searches through a music directory for all files of a certain type and copies them to a new directory. I would be fine if the files didn't have spaces.
I'm using a script from the following question, but it fails for spaces:
bash script problem, find , mv tilde files created by gedit
Any suggestions on how I can handle spaces? Once I'm satisfied that all the files are being listed, I'll change echo to cp.
for D in `find . -name "*.wma*"`; do echo "${D}"; done
You probably want to do this instead (quoting the pattern so the shell doesn't expand it before find runs):
find . -name '*.wma' -exec echo "{}" \;
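When you switch from listing to copying, the same pattern works; a minimal sketch assuming a destination directory /path/to/dest (a placeholder) that already exists:
find . -name '*.wma' -exec cp {} /path/to/dest/ \;
With GNU cp, the -t option lets find batch many files into each cp call:
find . -name '*.wma' -exec cp -t /path/to/dest/ {} +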