Un-stage file with libgit2 - objective-c

Using objective-git and libgit2 it has been fairly easy to stage a file ready for commit:
GTIndex *repoIndex = [self.repository indexWithError:&error];
[repoIndex removeFile:path error:&error];
if (status != GTFileStatusIgnored && status != GTFileStatusWorkingDeleted) {
// Now add the file to the index
[repoIndex addFile:path error:&error];
}
[repoIndex write:&error];
However, un-staging a file is proving to be a tad trickier. Simply removing it from the repository's index doesn't work, as git then thinks the file has been deleted, which makes sense. It seems what I need to do is change the entry in the index back to the one it was before the file was staged.
I have tried the following: using a diff to get the old diff delta, constructing a git_index_entry from that, and inserting it into the index:
GTIndex *repoIndex = [self.repository indexWithError:&error];
GTBranch *localBranch = [self.repository currentBranchWithError:&error];
GTCommit *localCommit = [localBranch targetCommitAndReturnError:&error];
GTDiff *indexCommitDiff = [GTDiff diffIndexFromTree:localCommit.tree inRepository:self.repository options:nil error:&error];
// Enumerate through the diff deltas until we get to the file we want to unstage
[indexCommitDiff enumerateDeltasUsingBlock:^(GTDiffDelta *delta, BOOL *stop) {
NSString *deltaPath = delta.newFile.path;
// Check if it matches the path we want to unstage
if ([deltaPath isEqualToString:path]) {
GTDiffFile *oldFile = delta.oldFile;
NSString *oldFileSHA = oldFile.OID.SHA;
git_oid oldFileOID;
int status = git_oid_fromstr(&oldFileOID, oldFileSHA.fileSystemRepresentation);
git_index_entry entry;
entry.mode = oldFile.mode;
entry.oid = oldFileOID;
entry.path = oldFile.path.fileSystemRepresentation;
[repoIndex removeFile:path error:&error];
status = git_index_add(repoIndex.git_index, &entry);
[repoIndex write:&error];
}
}];
However this leaves the git index in a corrupt state resulting in any git command logging a fatal error:
fatal: Unknown index entry format bfff0000
fatal: Unknown index entry format bfff0000
What is the correct way to un-stage a file using libgit2?

Remember that Git is snapshot-based, so whatever is in the index at commit time will be what's committed.
There is no unstage action by itself, as it's context-dependent. If the file is new, then removing it is what you do to unstage it. Otherwise, unstaging and staging are the same operation, with the difference that the blob you use in the entry is the old version of the file.
Since you want to move files back to their state in HEAD, there shouldn't be a need to use diff; instead, take the tree of HEAD and look up the entries you want there (though from a quick glance, I don't see objective-git wrapping git_tree_entry_bypath()).
If we write out a bad version of the index, that's definitely a bug in the library; could you open an issue so we can try to reproduce it?
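For reference, here is a rough sketch of that approach using the raw libgit2 calls, since git_tree_entry_bypath() isn't wrapped by objective-git. Treat the details as assumptions rather than the library's official recipe: it relies on GTRepository/GTIndex exposing their underlying C handles (as the question's code already does), and the index-entry field is named oid in older libgit2 releases but id in newer ones.
// Sketch: reset the index entry for `path` to whatever HEAD has for it.
git_repository *repo = self.repository.git_repository;
git_index *index = repoIndex.git_index;
// Resolve HEAD to its tree.
git_object *headTreeObj = NULL;
git_revparse_single(&headTreeObj, repo, "HEAD^{tree}");
git_tree *headTree = (git_tree *)headTreeObj;
git_tree_entry *treeEntry = NULL;
if (git_tree_entry_bypath(&treeEntry, headTree, path.UTF8String) == 0) {
// The file exists in HEAD: put its old blob back into the index.
git_index_entry indexEntry;
memset(&indexEntry, 0, sizeof(indexEntry)); // avoid stack garbage in the flags/stat fields
indexEntry.path = path.UTF8String;
indexEntry.mode = git_tree_entry_filemode(treeEntry);
indexEntry.oid = *git_tree_entry_id(treeEntry); // `id` rather than `oid` in newer libgit2
git_index_add(index, &indexEntry);
git_tree_entry_free(treeEntry);
} else {
// The file is new (not in HEAD): unstaging it just means removing it from the index.
git_index_remove_bypath(index, path.UTF8String);
}
git_index_write(index);
git_object_free(headTreeObj);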

Related

Property Photo Files with PHRETS v2

My PHP code, below, attempts to download all the photos for a property listing. It successfully queries the RETS server and creates a file for each photo, but the files do not seem to be functional images. (MATRIX requires files to be downloaded, instead of URLs.)
The list of photos below suggests that it successfully queries one listing id (47030752) for all the photos that exist (20 photos in this case). In a web browser, the files appear only as a small white square on a black background, e.g. https://photos.atlantarealestate-homes.com/photos/PHOTO-47030752-9.jpg. The file size (4) also seems very low compared to that of a real photo.
du -s PHOTO*
4 PHOTO-47030752-10.jpg
4 PHOTO-47030752-11.jpg
4 PHOTO-47030752-12.jpg
4 PHOTO-47030752-13.jpg
4 PHOTO-47030752-14.jpg
4 PHOTO-47030752-15.jpg
4 PHOTO-47030752-16.jpg
4 PHOTO-47030752-17.jpg
4 PHOTO-47030752-18.jpg
4 PHOTO-47030752-19.jpg
4 PHOTO-47030752-1.jpg
4 PHOTO-47030752-20.jpg
4 PHOTO-47030752-2.jpg
4 PHOTO-47030752-3.jpg
4 PHOTO-47030752-4.jpg
4 PHOTO-47030752-5.jpg
4 PHOTO-47030752-6.jpg
4 PHOTO-47030752-7.jpg
4 PHOTO-47030752-8.jpg
4 PHOTO-47030752-9.jpg
script I'm using:
#!/usr/bin/php
<?php
date_default_timezone_set('this/area');
require_once("composer/vendor/autoload.php");
$config = new \PHRETS\Configuration;
$config->setLoginUrl('https://myurl/login.ashx')
->setUsername('myser')
->setPassword('mypass')
->setRetsVersion('1.7.2');
$rets = new \PHRETS\Session($config);
$connect = $rets->Login();
$system = $rets->GetSystemMetadata();
$resources = $system->getResources();
$classes = $resources->first()->getClasses();
$classes = $rets->GetClassesMetadata('Property');
$host="localhost";
$user="db_user";
$password="db_pass";
$dbname="db_name";
$tablename="db_table";
$link=mysqli_connect ($host, $user, $password, $dbname);
$query="select mlsno, matrix_unique_id, photomodificationtimestamp from fmls_homes left join fmls_images on (matrix_unique_id=mls_no and photonum='1') where photomodificationtimestamp <> last_update or last_update is null limit 1";
print ("$query\n");
$result= mysqli_query ($link, $query);
$num_rows = mysqli_num_rows($result);
print "Fetching Images for $num_rows Homes\n";
while ($Row= mysqli_fetch_array ($result)) {
$matrix_unique_id="$Row[matrix_unique_id]";
$objects = $rets->GetObject('Property', 'LargePhoto', $matrix_unique_id);
foreach ($objects as $object) {
// does this represent some kind of error
$object->isError();
$object->getError(); // returns a \PHRETS\Models\RETSError
// get the record ID associated with this object
$object->getContentId();
// get the sequence number of this object relative to the others with the same ContentId
$object->getObjectId();
// get the object's Content-Type value
$object->getContentType();
// get the description of the object
$object->getContentDescription();
// get the sub-description of the object
$object->getContentSubDescription();
// get the object's binary data
$object->getContent();
// get the size of the object's data
$object->getSize();
// does this object represent the primary object in the set
$object->isPreferred();
// when requesting URLs, access the URL given back
$object->getLocation();
// use the given URL and make it look like the RETS server gave the object directly
$object->setContent(file_get_contents($object->getLocation()));
$listing = $object->getContentId();
$number = $object->getObjectId();
$url = $object->getLocation();
//$photo = $object->getContent();
$size = $object->getSize();
$desc = $object->getContentDescription();
if ($number >= '1') {
file_put_contents("/bigdirs/fmls_pics/PHOTO-{$listing}-{$number}.jpg", "$object->getContent();");
print "$listing - $number - $size $desc\n";
} //end if
} //end foreach
} //end while
mysqli_close ($link);
fclose($f);
?>
Are there any suggested changes to capture photos into the created files? This command creates the photo files:
file_put_contents("/bigdirs/fmls_pics/PHOTO-{$listing}-{$number}.jpg", "$object->getContent();");
There may be some parts of this script that wouldn't work in live production, but are sufficient for testing. This script seems to successfully query for the information needed from the RETS server. The problem is just simply that the actual files created do not seem to be functional photos.
Thanks in Advance! :)
Your code sample is a mix of the official documentation and a usable implementation. The problem is with this line:
$object->setContent(file_get_contents($object->getLocation()));
You should completely take that out. That's actually overriding the image you downloaded with nothing before you get a chance to save the contents to a file. With that removed, it should work as expected.
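In other words, once that override is removed, the save loop might boil down to something like the sketch below (paths and variable names are the ones from the question). Note that getContent() is passed to file_put_contents() directly, not wrapped in a double-quoted string, since PHP will not call a method from inside string interpolation.
foreach ($objects as $object) {
$listing = $object->getContentId();
$number = $object->getObjectId();
if ($number >= 1) {
// Write the binary photo data straight to disk.
file_put_contents("/bigdirs/fmls_pics/PHOTO-{$listing}-{$number}.jpg", $object->getContent());
print "$listing - $number\n";
}
}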

Pull (fetch & merge) with libgit2

I've been using objective-git and libgit2 to try to implement pull functionality. Since git pull is just a 'porcelain' command made up of a git fetch followed by a git merge origin/master, that is how I implemented it.
The fetch is performed using the method in objective-git from the fetch branch on github.
[remote fetchWithCredentialProvider:nil error:&error progress:nil];
The code below is what is done after the fetch (which I know succeeds):
// Get the local branch
GTBranch *localBranch = [repo localBranchesWithError:nil][0];
// Get the remote branch
GTBranch *remoteBranch = [repo remoteBranchesWithError:nil][0];
// Get the local & remote commit
GTCommit *localCommit = [localBranch targetCommitAndReturnError:nil];
GTCommit *remoteCommit = [remoteBranch targetCommitAndReturnError:nil];
// Get the trees of both
GTTree *localTree = localCommit.tree;
GTTree *remoteTree = remoteCommit.tree;
// Get OIDs of both commits too
GTOID *localOID = localCommit.OID;
GTOID *remoteOID = remoteCommit.OID;
// Find a merge base to act as the ancestor between these two commits
GTCommit *ancestor = [repo mergeBaseBetweenFirstOID:localOID secondOID:remoteOID error:&error];
if (error) {
NSLog(#"Error finding merge base: %#", error);
}
// Get the ancestors tree
GTTree *ancestorTree = ancestor.tree;
// Merge into the local tree
GTIndex *mergedIndex = [localTree merge:remoteTree ancestor:ancestorTree error:&error];
if (error) {
NSLog(#"Error mergeing: %#", error);
}
// Write the merge to disk and store the new tree
GTTree *newTree = [mergedIndex writeTreeToRepository:repo error:&error];
if (error) {
NSLog(#"Error writing merge index to disk: %#", error);
}
After the mergedIndex (which starts out in memory) has been written as a tree to disk (writeTreeToRepository uses git_index_write_tree_to), there is no change in the git repo's status. I'm assuming I'm missing a last step to make the new tree HEAD, or to merge it with HEAD, or something similar, but I'm not sure exactly what.
Any help would be much obliged.
Once you have the tree you want to use for the merge commit, you need to create the merge commit itself, which you can do with createCommitWithTree in GTRepository the same way you'd create any other commit, but with both the local and remote commits as parents. That method also lets you ask the library to update a particular branch, if you expect the repository to be in a quiet state.
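A rough sketch of that final step, continuing from the newTree obtained above (the exact selector and the way the branch's reference name is obtained are assumptions and may differ between objective-git revisions):
// Create the merge commit with both sides as parents and let the library advance the current branch's reference to it.
GTCommit *mergeCommit = [repo createCommitWithTree:newTree message:@"Merge remote-tracking branch 'origin/master'" parents:@[localCommit, remoteCommit] updatingReferenceNamed:localBranch.reference.name error:&error];
if (mergeCommit == nil) {
NSLog(@"Error creating merge commit: %@", error);
}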

Objective-Git Merge

I am using Objective-Git. I cannot get the following method to work:
- (GTIndex *)merge:(GTTree *)otherTree ancestor:(GTTree *)ancestorTree error:(NSError **)error
No error is returned, but the index that comes back is effectively empty: the object exists, yet all of its attributes are nil. The merge operation does not take place, and I can't write out a tree because I cannot obtain the index resulting from the attempted merge.
Has anybody managed to successfully perform a merge using objective-git? How? Help!
GTBranch *branch1 = branches[0];
GTCommit *commit1 = [branch1 targetCommitAndReturnError:NULL];
GTOID *oid1 = commit1.OID;
GTTree *tree1 = commit1.tree;
GTBranch *branch2 = branches[1];
GTCommit *commit2 = [branch2 targetCommitAndReturnError:NULL];
GTTree *tree2 = commit2.tree;
GTOID *oid2 = commit2.OID;
GTRepository *repo = branch1.repository;
NSError *error;
GTCommit *ancestor = [repo mergeBaseBetweenFirstOID:oid1 secondOID:oid2 error:&error];
if (error){
NSLog(#"%#", error.description);
}
GTTree *ancTree = ancestor.tree;
NSError *someError;
NSLog(#"attempting merge into ""%#"" from ""%#"" with ancestor ""%#""", commit2.message, commit1.message,ancestor.message);
GTIndex *mergedIndex = [tree2 merge:tree1 ancestor: ancTree error:&someError]; //returns index not backed by existing repo --> index_file_path = nil, all attributes of git_index are nil
if (someError){
NSLog(#"%#", someError.description);
}
NSError *theError;
GTTree *mergedtree = [mergedIndex writeTree:&theError]; //can't write out the tree as the index given back by merge: ancestor: error: does not reference a repo
if (theError){
NSLog(#"%#",theError);
}
}
}
The index that is returned by merging the trees together is not bound to the repository. Unfortunately, the operation to write the index as a tree to a specific repository (git_index_write_tree_to) is not exposed yet through Objective-Git.
You probably want to open a ticket in their issue tracker.
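Until that is exposed, one possible workaround is to drop down to the C API for just this step, roughly as sketched below (the GTOID and lookUp method names are assumed from the objective-git headers of the time and may differ by version):
// Write the in-memory merge index into this repository as a tree via libgit2 directly.
git_oid treeOid;
int gitError = git_index_write_tree_to(&treeOid, mergedIndex.git_index, repo.git_repository);
if (gitError != GIT_OK) {
NSLog(@"git_index_write_tree_to failed: %d", gitError);
} else {
GTOID *treeOID = [GTOID oidWithGitOid:&treeOid];
GTTree *mergedTree = (GTTree *)[repo lookUpObjectByOID:treeOID objectType:GTObjectTypeTree error:&theError];
NSLog(@"merged tree: %@", mergedTree);
}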

Can I use my own filename when using NSTemporaryDirectory

I am generating a PDF file and saving it to my device before e-mailing it from within my application. At the moment it does not get cleared down either manually or automatically, so I was looking at NSTemporaryDirectory. I have looked at various sites, and on here, for the answer to my specific query but cannot find it.
I have the following function:
+(NSString *)getTempFilePathName {
NSString *tempFilePathName = nil;
NSString *tempFileTemplate = [NSTemporaryDirectory() stringByAppendingPathComponent:@"tempfile.XXXXXX"];
const char *tempFileTemplateCString = [tempFileTemplate fileSystemRepresentation];
char *tempFileNameCString = (char *)malloc(strlen(tempFileTemplateCString) + 1);
strcpy(tempFileNameCString, tempFileTemplateCString);
int fileDescriptor = mkstemp(tempFileNameCString);
if (fileDescriptor != -1) {
// File opened successfully
tempFilePathName = [[NSFileManager defaultManager] stringWithFileSystemRepresentation:tempFileNameCString length:strlen(tempFileNameCString)];
close(fileDescriptor);
}
free(tempFileNameCString);
return tempFilePathName;
}
This returns an NSString with the full path and filename of a temp file, with the 'XXXXXX' component of the filename replaced by a unique combination of letters and numbers. As I am using this to save a PDF file, I need to replace the 'XXXXXX' bit with the extension ".PDF". Preferably I'd also like to specify the filename itself, so something like "Order-123.PDF".
Can I just edit my method to pass in a filename and use that in the stringByAppendingPathComponent parameter? Is the directory name generated unique per call to this method?
I have solved this by myself. After the line that calls the above routine I do:
NSString *newFilePath = [tempNameStub stringByAppendingPathExtension:@"pdf"];
Which adds .pdf to the end.
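So the full usage ends up looking roughly like this (a sketch; MyPDFHelper stands in for whatever class holds getTempFilePathName, and pdfData is the generated PDF's NSData):
NSString *tempNameStub = [MyPDFHelper getTempFilePathName]; // e.g. .../tmp/tempfile.a1B2c3
NSString *newFilePath = [tempNameStub stringByAppendingPathExtension:@"pdf"]; // .../tmp/tempfile.a1B2c3.pdf
[pdfData writeToFile:newFilePath atomically:YES];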

Delete Amazon S3 buckets? [closed]

I've been interacting with Amazon S3 through S3Fox and I can't seem to delete my buckets. I select a bucket, hit delete, confirm the delete in a popup, and... nothing happens. Is there another tool that I should use?
It is finally possible to delete all the files in one go using the new Lifecycle (expiration) rules feature. You can even do it from the AWS console.
Simply right click on the bucket name in AWS console, select "Properties" and then in the row of tabs at the bottom of the page select "lifecycle" and "add rule". Create a lifecycle rule with the "Prefix" field set blank (blank means all files in the bucket, or you could set it to "a" to delete all files whose names begin with "a"). Set the "Days" field to "1". That's it. Done. Assuming the files are more than one day old they should all get deleted, then you can delete the bucket.
I only just tried this for the first time so I'm still waiting to see how quickly the files get deleted (it wasn't instant but presumably should happen within 24 hours) and whether I get billed for one delete command or 50 million delete commands... fingers crossed!
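For reference, the same rule can be set up without the console; a rough sketch with today's AWS CLI (the bucket name and rule ID are placeholders):
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration '{"Rules": [{"ID": "expire-everything", "Filter": {"Prefix": ""}, "Status": "Enabled", "Expiration": {"Days": 1}}]}'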
Remember that S3 buckets need to be empty before they can be deleted. The good news is that most 3rd-party tools automate this process. If you are running into problems with S3Fox, I recommend trying S3FM for a GUI or S3Sync for the command line. Amazon has a great article describing how to use S3Sync. After setting up your variables, the key command is
./s3cmd.rb deleteall <your bucket name>
Deleting buckets with lots of individual files tends to crash a lot of S3 tools, because they try to display a list of all the files in the directory. You need to find a way to delete in batches. The best GUI tool I've found for this purpose is Bucket Explorer. It deletes files in an S3 bucket in 1,000-file chunks and does not crash when trying to open large buckets, the way S3Fox and S3FM do.
I've also found a few scripts that you can use for this purpose. I haven't tried these scripts yet but they look pretty straightforward.
RUBY
require 'aws/s3'
AWS::S3::Base.establish_connection!(
:access_key_id => 'your access key',
:secret_access_key => 'your secret key'
)
bucket = AWS::S3::Bucket.find('the bucket name')
while(!bucket.empty?)
begin
puts "Deleting objects in bucket"
bucket.objects.each do |object|
object.delete
puts "There are #{bucket.objects.size} objects left in the bucket"
end
puts "Done deleting objects"
rescue SocketError
puts "Had socket error"
end
end
PERL
#!/usr/bin/perl
use Net::Amazon::S3;
my $aws_access_key_id = 'your access key';
my $aws_secret_access_key = 'your secret access key';
my $increment = 50; # 50 at a time
my $bucket_name = 'bucket_name';
my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id, aws_secret_access_key => $aws_secret_access_key, retry => 1, });
my $bucket = $s3->bucket($bucket_name);
print "Incrementally deleting the contents of $bucket_name\n";
my $deleted = 1;
my $total_deleted = 0;
while ($deleted > 0) {
print "Loading up to $increment keys...\n";
$response = $bucket->list({'max-keys' => $increment, }) or die $s3->err . ": " . $s3->errstr . "\n";
$deleted = scalar(@{ $response->{keys} });
$total_deleted += $deleted;
print "Deleting $deleted keys($total_deleted total)...\n";
foreach my $key ( @{ $response->{keys} } ) {
my $key_name = $key->{key};
$bucket->delete_key($key->{key}) or die $s3->err . ": " . $s3->errstr . "\n";
}
}
print "Deleting bucket...\n";
$bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
print "Done.\n";
SOURCE: Tarkblog
Hope this helps!
Recent versions of s3cmd have --recursive, e.g.:
~/$ s3cmd rb --recursive s3://bucketwithfiles
http://s3tools.org/kb/item5.htm
With s3cmd:
Create a new empty directory
s3cmd sync --delete-removed empty_directory s3://yourbucket
This may be a bug in S3Fox, because it is generally able to delete items recursively. However, I'm not sure if I've ever tried to delete a whole bucket and its contents at once.
The JetS3t project, as mentioned by Stu, includes a Java GUI applet you can easily run in a browser to manage your S3 buckets: Cockpit. It has both strengths and weaknesses compared to S3Fox, but there's a good chance it will help you deal with your troublesome bucket. Though it will require you to delete the objects first, then the bucket.
Disclaimer: I'm the author of JetS3t and Cockpit
SpaceBlock also makes it simple to delete s3 buckets - right click bucket, delete, wait for job to complete in transfers view, done.
This is the free and open source windows s3 front-end that I maintain, so shameless plug alert etc.
I've implemented bucket-destroy, a multi-threaded utility that does everything it takes to delete a bucket. It handles non-empty buckets, as well as version-enabled bucket keys.
You can read the blog post here http://bytecoded.blogspot.com/2011/01/recursive-delete-utility-for-version.html and the instructions here http://code.google.com/p/bucket-destroy/
I've successfully used it to delete a bucket containing double '//' in key names, versioned keys, and DeleteMarker keys. Currently I'm running it on a bucket containing ~40,000,000 objects; so far I've been able to delete 1,200,000 in several hours on an m1.large. Note that the utility is multi-threaded but does not (yet) implement shuffling (which would allow horizontal scaling by launching the utility on several machines).
If you use Amazon's console and need to clear out a bucket on a one-time basis: browse to your bucket, select the top key, scroll to the bottom, hold Shift on your keyboard, and click on the bottom one. It will select everything in between, and then you can right-click and delete.
If you have ruby (and rubygems) installed, install aws-s3 gem with
gem install aws-s3
or
sudo gem install aws-s3
create a file delete_bucket.rb:
require "rubygems" # optional
require "aws/s3"
AWS::S3::Base.establish_connection!(
:access_key_id => 'access_key_id',
:secret_access_key => 'secret_access_key')
AWS::S3::Bucket.delete("bucket_name", :force => true)
and run it:
ruby delete_bucket.rb
Since Bucket#delete returned timeout exceptions a lot for me, I have expanded the script:
require "rubygems" # optional
require "aws/s3"
AWS::S3::Base.establish_connection!(
:access_key_id => 'access_key_id',
:secret_access_key => 'secret_access_key')
while AWS::S3::Bucket.find("bucket_name")
begin
AWS::S3::Bucket.delete("bucket_name", :force => true)
rescue
end
end
I guess the easiest way would be to use S3fm, a free online file manager for Amazon S3. No applications to install, no 3rd party web sites registrations. Runs directly from Amazon S3, secure and convenient.
Just select your bucket and hit delete.
One technique that can be used to avoid this problem is putting all objects in a "folder" in the bucket, allowing you to just delete the folder then go along and delete the bucket. Additionally, the s3cmd tool available from http://s3tools.org can be used to delete a bucket with files in it:
s3cmd rb --force s3://bucket-name
I hacked together a script to do it from Python; it successfully removed my 9,000 objects. See this page:
https://efod.se/blog/archive/2009/08/09/delete-s3-bucket
One more shameless plug: I got tired of waiting for individual HTTP delete requests when I had to delete 250,000 items, so I wrote a Ruby script that does it multithreaded and completes in a fraction of the time:
http://github.com/sfeley/s3nuke/
This is one that works much faster in Ruby 1.9 because of the way threads are handled.
This is a hard problem. My solution is at http://stuff.mit.edu/~jik/software/delete-s3-bucket.pl.txt. It describes all of the things I've determined can go wrong in a comment at the top. Here's the current version of the script (if I change it, I'll put a new version at the URL but probably not here).
#!/usr/bin/perl
# Copyright (c) 2010 Jonathan Kamens.
# Released under the GNU General Public License, Version 3.
# See <http://www.gnu.org/licenses/>.
# $Id: delete-s3-bucket.pl,v 1.3 2010/10/17 03:21:33 jik Exp $
# Deleting an Amazon S3 bucket is hard.
#
# * You can't delete the bucket unless it is empty.
#
# * There is no API for telling Amazon to empty the bucket, so you have to
# delete all of the objects one by one yourself.
#
# * If you've recently added a lot of large objects to the bucket, then they
# may not all be visible yet on all S3 servers. This means that even after the
# server you're talking to thinks all the objects are all deleted and lets you
# delete the bucket, additional objects can continue to propagate around the S3
# server network. If you then recreate the bucket with the same name, those
# additional objects will magically appear in it!
#
# It is not clear to me whether the bucket delete will eventually propagate to
# all of the S3 servers and cause all the objects in the bucket to go away, but
# I suspect it won't. I also suspect that you may end up continuing to be
# charged for these phantom objects even though the bucket they're in is no
# longer even visible in your S3 account.
#
# * If there's a CR, LF, or CRLF in an object name, then it's sent just that
# way in the XML that gets sent from the S3 server to the client when the
# client asks for a list of objects in the bucket. Unfortunately, the XML
# parser on the client will probably convert it to the local line ending
# character, and if it's different from the character that's actually in the
# object name, you then won't be able to delete it. Ugh! This is a bug in the
# S3 protocol; it should be enclosing the object names in CDATA tags or
# something to protect them from being munged by the XML parser.
#
# Note that this bug even affects the AWS Web Console provided by Amazon!
#
# * If you've got a whole lot of objects and you serialize the delete process,
# it'll take a long, long time to delete them all.
use threads;
use strict;
use warnings;
# Keys can have newlines in them, which screws up the communication
# between the parent and child processes, so use URL encoding to deal
# with that.
use CGI qw(escape unescape); # Easiest place to get this functionality.
use File::Basename;
use Getopt::Long;
use Net::Amazon::S3;
my $whoami = basename $0;
my $usage = "Usage: $whoami [--help] --access-key-id=id --secret-access-key=key
--bucket=name [--processes=#] [--wait=#] [--nodelete]
Specify --processes to indicate how many deletes to perform in
parallel. You're limited by RAM (to hold the parallel threads) and
bandwidth for the S3 delete requests.
Specify --wait to indicate seconds to require the bucket to be verified
empty. This is necessary if you create a huge number of objects and then
try to delete the bucket before they've all propagated to all the S3
servers (I've seen a huge backlog of newly created objects take *hours* to
propagate everywhere). See the comment at the top of the script for more
information about this issue.
Specify --nodelete to empty the bucket without actually deleting it.\n";
my($aws_access_key_id, $aws_secret_access_key, $bucket_name, $wait);
my $procs = 1;
my $delete = 1;
die if (! GetOptions(
"help" => sub { print $usage; exit; },
"access-key-id=s" => \$aws_access_key_id,
"secret-access-key=s" => \$aws_secret_access_key,
"bucket=s" => \$bucket_name,
"processess=i" => \$procs,
"wait=i" => \$wait,
"delete!" => \$delete,
));
die if (! ($aws_access_key_id && $aws_secret_access_key && $bucket_name));
my $increment = 0;
print "Incrementally deleting the contents of $bucket_name\n";
$| = 1;
my(@procs, $current);
for (1..$procs) {
my($read_from_parent, $write_to_child);
my($read_from_child, $write_to_parent);
pipe($read_from_parent, $write_to_child) or die;
pipe($read_from_child, $write_to_parent) or die;
threads->create(sub {
close($read_from_child);
close($write_to_child);
my $old_select = select $write_to_parent;
$| = 1;
select $old_select;
&child($read_from_parent, $write_to_parent);
}) or die;
close($read_from_parent);
close($write_to_parent);
my $old_select = select $write_to_child;
$| = 1;
select $old_select;
push(@procs, [$read_from_child, $write_to_child]);
}
my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
aws_secret_access_key => $aws_secret_access_key,
retry => 1,
});
my $bucket = $s3->bucket($bucket_name);
my $deleted = 1;
my $total_deleted = 0;
my $last_start = time;
my($start, $waited);
while ($deleted > 0) {
$start = time;
print "\nLoading ", ($increment ? "up to $increment" :
"as many as possible")," keys...\n";
my $response = $bucket->list({$increment ? ('max-keys' => $increment) : ()})
or die $s3->err . ": " . $s3->errstr . "\n";
$deleted = scalar(@{ $response->{keys} });
if (! $deleted) {
if ($wait and ! $waited) {
my $delta = $wait - ($start - $last_start);
if ($delta > 0) {
print "Waiting $delta second(s) to confirm bucket is empty\n";
sleep($delta);
$waited = 1;
$deleted = 1;
next;
}
else {
last;
}
}
else {
last;
}
}
else {
$waited = undef;
}
$total_deleted += $deleted;
print "\nDeleting $deleted keys($total_deleted total)...\n";
$current = 0;
foreach my $key ( @{ $response->{keys} } ) {
my $key_name = $key->{key};
while (! &send(escape($key_name) . "\n")) {
print "Thread $current died\n";
die "No threads left\n" if (@procs == 1);
if ($current == @procs-1) {
pop @procs;
$current = 0;
}
else {
$procs[$current] = pop @procs;
}
}
$current = ($current + 1) % @procs;
threads->yield();
}
print "Sending sync message\n";
for ($current = 0; $current < @procs; $current++) {
if (! &send("\n")) {
print "Thread $current died sending sync\n";
if ($current = @procs-1) {
pop @procs;
last;
}
$procs[$current] = pop @procs;
$current--;
}
threads->yield();
}
print "Reading sync response\n";
for ($current = 0; $current < @procs; $current++) {
if (! &receive()) {
print "Thread $current died reading sync\n";
if ($current = @procs-1) {
pop @procs;
last;
}
$procs[$current] = pop @procs;
$current--;
}
threads->yield();
}
}
continue {
$last_start = $start;
}
if ($delete) {
print "Deleting bucket...\n";
$bucket->delete_bucket or die $s3->err . ": " . $s3->errstr;
print "Done.\n";
}
sub send {
my($str) = @_;
my $fh = $procs[$current]->[1];
print($fh $str);
}
sub receive {
my $fh = $procs[$current]->[0];
scalar <$fh>;
}
sub child {
my($read, $write) = @_;
threads->detach();
my $s3 = Net::Amazon::S3->new({aws_access_key_id => $aws_access_key_id,
aws_secret_access_key => $aws_secret_access_key,
retry => 1,
});
my $bucket = $s3->bucket($bucket_name);
while (my $key = <$read>) {
if ($key eq "\n") {
print($write "\n") or die;
next;
}
chomp $key;
$key = unescape($key);
if ($key =~ /[\r\n]/) {
my(@parts) = split(/\r\n|\r|\n/, $key, -1);
my(@guesses) = shift @parts;
foreach my $part (@parts) {
@guesses = (map(($_ . "\r\n" . $part,
$_ . "\r" . $part,
$_ . "\n" . $part), @guesses));
}
foreach my $guess (#guesses) {
if ($bucket->get_key($guess)) {
$key = $guess;
last;
}
}
}
$bucket->delete_key($key) or
die $s3->err . ": " . $s3->errstr . "\n";
print ".";
threads->yield();
}
return;
}
I am one of the developers on the Bucket Explorer team. We provide different options to delete a bucket, depending on the user's choice:
1) Quick Delete - this option deletes your data from the bucket in chunks of 1,000.
2) Permanent Delete - this option deletes objects in a queue.
How to delete Amazon S3 files and bucket?
Amazon recently added a new feature, "Multi-Object Delete", which allows up to 1,000 objects to be deleted at a time with a single API request. This should simplify the process of deleting huge numbers of files from a bucket.
The documentation for the new feature is available here: http://docs.amazonwebservices.com/AmazonS3/latest/dev/DeletingMultipleObjects.html
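As a concrete illustration, with a reasonably current Ruby aws-sdk-s3 gem a Multi-Object Delete request looks roughly like this (bucket name and keys are placeholders, and this is the newer SDK rather than the aws-s3 gem used elsewhere in this thread):
require "aws-sdk-s3"
s3 = Aws::S3::Client.new(region: "us-east-1")
# Delete up to 1,000 keys with a single request.
s3.delete_objects(
bucket: "bucket_name",
delete: {
objects: [
{ key: "photos/one.jpg" },
{ key: "photos/two.jpg" }
],
quiet: true
}
)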
I've always ended up using their C# API and little scripts to do this. I'm not sure why S3Fox can't do it, but that functionality appears to be broken within it at the moment. I'm sure that many of the other S3 tools can do it as well, though.
Delete all of the objects in the bucket first. Then you can delete the bucket itself.
Apparently, one cannot delete a bucket with objects in it and S3Fox does not do this for you.
I've had other little issues with S3Fox myself, like this one, and now use a Java-based tool, JetS3t, which is more forthcoming about error conditions. There must be others, too.
You must make sure you have the correct write permissions set for the bucket, and that the bucket contains no objects.
Some useful tools that can assist with deletion: CrossFTP, which lets you view and delete buckets like an FTP client, and the JetS3t tool mentioned above.
I'll have to have a look at some of these alternative file managers. I've used (and like) BucketExplorer, which you can get from - surprisingly - http://www.bucketexplorer.com/.
It's a 30 day free trial, then (currently) costing US$49.99 per licence (US$49.95 on the purchase cover page).
Try https://s3explorer.appspot.com/ to manage your S3 account.
This is what I use. Just simple ruby code.
case bucket.size
when 0
puts "Nothing left to delete"
when 1..1000
bucket.objects.each do |item|
item.delete
puts "Deleting - #{bucket.size} left"
end
end
Use the Amazon web management console, with Google Chrome for speed. It deleted the objects a lot faster than Firefox (about 10 times faster). I had 60,000 objects to delete.