In order to take regular backups, we want to compress and upload files from our Windows Server to the Amazon S3 service. Is there any freeware application that allows us to schedule regular backups?
Your best bet is to use Windows PowerShell.
This blog post describes how to automate SQL Server backups with PowerShell to Amazon S3:
Randoom: EC2 SQL Server backup strategies and tactics
There is also a newer option that has become available in the meantime:
Standalone Windows .EXE command line utility for Amazon S3 & EC2
It's a .NET command-line executable that provides S3 commands for working with S3 directly or from batch scripts.
Another great tool is S3Sync from SprightlySoft (free and open source, distributed as S3Sync.zip), a command-line tool for Windows. It offers a "differential" sync of a folder, detecting additions, deletions, file changes, etc.
Here's a detailed article on how to use it: Automating S3 backups on Windows
We do this under Linux with a shell script (the Linux equivalent of a batch file) that simply zips the required files and then calls s3cmd (a third-party S3 command-line tool) to upload the zip file to an S3 bucket. There's also some exit-code (errorlevel) checking to ensure everything went well.
We schedule that script with cron. You could do the same with Windows Task Scheduler.
If you need a command-line-capable ZIP utility, 7-Zip is a good open-source choice.
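For illustration, here is a minimal Python sketch of that same zip-then-upload approach. The paths and bucket name are placeholders, and it assumes 7-Zip and s3cmd are installed on the PATH and that s3cmd has already been configured:

```python
#!/usr/bin/env python3
"""Zip a folder and upload it to S3 via s3cmd -- a rough sketch.

Assumes 7-Zip and s3cmd are installed and on the PATH, and that
s3cmd has been configured (s3cmd --configure). The source path and
bucket name below are placeholders.
"""
import subprocess
import sys
from datetime import datetime

SOURCE_DIR = r"C:\data\to-backup"      # placeholder
BUCKET = "s3://my-backup-bucket/"      # placeholder
archive = f"backup-{datetime.now():%Y%m%d-%H%M%S}.zip"

# Create the zip archive ("a" = add files to archive).
if subprocess.call(["7z", "a", archive, SOURCE_DIR]) != 0:
    sys.exit("zip step failed")

# Upload the archive; s3cmd returns non-zero on failure.
if subprocess.call(["s3cmd", "put", archive, BUCKET]) != 0:
    sys.exit("upload step failed")

print(f"uploaded {archive} to {BUCKET}")
```

Scheduling this with Windows Task Scheduler (or cron) then gives you the regular backups the question asks for.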
Yes, the backup Ruby gem. It lets you define backups declaratively and then shells out to command-line tools to actually run them. It's probably cross-platform enough.
Alternatively, yes: PowerShell can be made to do what you want with the AWS .NET SDK loaded. It's quite bare-bones compared to the Ruby SDK, though; I've ended up with far more concise Ruby-based scripts than PowerShell ones (and I'm reasonably competent with both approaches, I think), since the Ruby SDK layers a nicer model over the top of each API.
We have multiple clients using our service.
Each client may create multiple projects.
Each client may upload multiple files to any of his projects.
Each file may have custom meta data associated.
Each client may "share" any of the projects to another client.
Each client may comment on any of his own or shared projects/files.
My question is about file storage in the cloud. What would be the best solution? I thought about Amazon S3, but maybe there are better alternatives?
You can explore the Box.com solution. It is an advanced file-management solution in the cloud and supports the kind of fine-grained permission management you described above. Dropbox for Teams is another option; its permission model is not as extensive as Box's, but its sync client is very stable. In one of my recent projects, I used Box.com, mainly because of its fine-grained permission controls.
You can also build this on S3 (behind the scenes, Dropbox is built on S3, and I'd guess Box is too). Achieving all the functionality you mentioned would be quite a bit of programming work, though!
Is there any good way of creating and managing S3 policies and users from the command line of my Raspberry Pi?
The AWS Universal Command Line Tools are newer and better supported. They rely on Python, so if you can get Python for Raspberry Pi, you should be set.
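If you'd rather script IAM from Python directly instead of shelling out to the CLI, here is a minimal sketch using boto3 (an assumption for illustration, not something the CLI requires; the user name, bucket, and policy are placeholders):

```python
"""Create an IAM user and attach an inline S3 policy -- a boto3 sketch.

Assumes boto3 is installed (pip install boto3) and AWS credentials
are configured. The user and bucket names are placeholders.
"""
import json
import boto3

iam = boto3.client("iam")

# Create the user.
iam.create_user(UserName="backup-user")

# Attach an inline policy allowing access to one bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-backup-bucket",
            "arn:aws:s3:::my-backup-bucket/*",
        ],
    }],
}
iam.put_user_policy(
    UserName="backup-user",
    PolicyName="backup-bucket-access",
    PolicyDocument=json.dumps(policy),
)
```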
I have no experience of using it myself, but I found a tool for interacting with Amazon IAM, the access control service for AWS, in a manner that might work for you:
IAM Command Line Toolkit (note: last updated September 2010)
There may be more usable stuff under the IAM Resources section.
If you are unfamiliar with IAM, the documentation is one place to start. Although, knowing the general style of AWS documentation, there may be better resources and tutorials to be found elsewhere.
How does Dropbox work? Is it just an FTP client with an easy-to-use web interface and support for many platforms?
What makes it so useful to programmers, even for those who are working on web-based applications and who have FTP access to a server by default?
Does Dropbox come with an improved algorithm to facilitate file transfer for a better speed? What is the difference between an FTP client and Dropbox from a programmer's point of view?
FTP is just a way of copying files. And copying is not the same as synchronizing, which I believe is Dropbox's biggest strength.
Dropbox is a multiway synchronization system. This means that if you are using your Dropbox account on many machines and editing different files on each machine, they will all be synchronized appropriately. With FTP, you would have to carefully pick and choose which files need to be added to or removed from the server for each client.
Another main difference is that synchronisation happens automatically whenever a file changes, which FTP does not do.
In terms of algorithms, I would guess that Dropbox uses file deltas for file transfer, which makes it much more efficient than FTP. This means only the parts of the file that changed are transferred instead of transferring the entire file every time it changes (see rsync).
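To make the delta idea concrete, here is a toy Python sketch of block-level change detection. This is not Dropbox's (or rsync's) actual algorithm (real implementations also use rolling checksums to handle inserted or shifted data), but it shows why transferring only changed blocks beats re-sending whole files:

```python
"""Illustrative sketch of block-level delta detection.

NOT Dropbox's actual algorithm -- just a toy example showing that
only the blocks whose hashes differ need to be transferred.
"""
import hashlib

BLOCK_SIZE = 4096  # arbitrary block size for the example

def block_hashes(path):
    """Hash each fixed-size block of a file."""
    hashes = []
    with open(path, "rb") as f:
        while block := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(old_path, new_path):
    """Return indices of blocks that differ between two versions."""
    old, new = block_hashes(old_path), block_hashes(new_path)
    length = max(len(old), len(new))
    return [i for i in range(length)
            if i >= len(old) or i >= len(new) or old[i] != new[i]]

# Only the blocks listed here would need to be sent over the wire:
# print(changed_blocks("file.v1", "file.v2"))
```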
I believe you are only asking about Dropbox's core functionality. Beyond that, Dropbox has lots of cool features that FTP does not, like revision control, photo-gallery sharing, etc.
Dropbox files are not accessible by FTP. The API uses a REST-style architecture over the HTTP protocol. See Build the power of Dropbox into your app.
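For example, listing a folder over that REST API takes only a plain HTTP call. A minimal sketch, assuming the requests library and a Dropbox API v2 access token (the token below is a placeholder):

```python
"""List a Dropbox folder over its HTTP/REST API -- a minimal sketch.

Assumes the requests library and a Dropbox API v2 access token;
the token is a placeholder.
"""
import requests

ACCESS_TOKEN = "YOUR-ACCESS-TOKEN"  # placeholder

resp = requests.post(
    "https://api.dropboxapi.com/2/files/list_folder",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"path": ""},  # "" means the app's root folder
)
resp.raise_for_status()
for entry in resp.json()["entries"]:
    print(entry[".tag"], entry["name"])
```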
I would like to try using Heroku as my server; however, Heroku does not allow writing to its file system. Instead, I need to use the database and something like Amazon S3 for storage of things like uploaded images.
The problem is that I often don't have internet access when developing. Or very poor internet access. So developing for Amazon S3 is kind of impractical. Is there an offline version to use so that my local machine can act as the S3 cloud, and when in testing/production environments I can use the real S3?
Old question, but I wanted to post this: there is a "Fake S3" tool that appears to be designed to do exactly this. I'm just about to give it a whirl.
https://github.com/jubos/fake-s3
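If it works as advertised, pointing the AWS SDK at it should just be a matter of overriding the endpoint. A minimal boto3 sketch, assuming Fake S3 is listening locally (the port and bucket name are placeholders); dropping endpoint_url would hit the real S3 in test/production:

```python
"""Point the AWS SDK at a local Fake S3 instance -- a sketch.

Assumes Fake S3 is running locally (e.g. on port 4567) and boto3
is installed. The bucket name is a placeholder.
"""
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4567",  # local Fake S3
    aws_access_key_id="fake",              # Fake S3 ignores credentials
    aws_secret_access_key="fake",
)

s3.create_bucket(Bucket="dev-bucket")
s3.put_object(Bucket="dev-bucket", Key="uploads/test.txt", Body=b"hello")
print(s3.list_objects_v2(Bucket="dev-bucket")["Contents"][0]["Key"])
```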
My recommendation is to try s3fs with rsync. Here's how it would work:
Mount your s3 drive to /mnt/sdaX/ on your production machine and /mnt/sdaY/ on your local machine.
Create a file system at /mnt/sdaX/ on your local machine.
Make the changes on your local machine as needed. When appropriate, rsync /mnt/sdaX/ to /mnt/sdaY/ on your local box (see the sketch below).
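A rough Python sketch of those steps, with placeholder bucket and mount-point names (it assumes s3fs and rsync are installed and that s3fs credentials are already configured, e.g. in ~/.passwd-s3fs):

```python
"""Sync a local working tree into an s3fs mount -- a rough sketch.

Assumes s3fs and rsync are installed; the bucket name and mount
points are placeholders matching the steps above.
"""
import subprocess
import sys

BUCKET = "my-bucket"        # placeholder
LOCAL_TREE = "/mnt/sdaX/"   # plain local file system you edit
S3_MOUNT = "/mnt/sdaY/"     # s3fs mount of the bucket

# Mount the bucket (one-time; credentials come from ~/.passwd-s3fs).
if subprocess.call(["s3fs", BUCKET, S3_MOUNT]) != 0:
    sys.exit("s3fs mount failed")

# Push local changes up; --delete mirrors deletions as well.
if subprocess.call(["rsync", "-a", "--delete", LOCAL_TREE, S3_MOUNT]) != 0:
    sys.exit("rsync failed")
```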
I realize that this is complicated, but I'm not sure that there's really any other way to do it while maintaining the same configuration in both places. Normally I'd say you should just write to the s3fs drive locally with local caching enabled, but I'm not sure what happens when you return online (I'm pretty sure it doesn't sync, but I've gone ahead and asked s3fs developers).
Best,
Zach
Developer, LongTail Video
Have a look at:
Eucalyptus Walrus
Park Place
It might be some work to get them running, however. I eventually started writing my own clone using node.js, but it has moved far away from the original S3 API, so it won't really help you anymore.
My team and I have found that documenting our project (a development platform with an API) with a wiki is useful both to us and to the users. Due to some organizational issues, we're forced to do multi-site development without network connectivity. We've switched to a DVCS (Mercurial) and had great success with this. The wiki documentation proves to be a problem, as the central site is set up with MediaWiki. The offsite people have no way to access or edit the wiki.
Is there any sort of wiki-style package which doesn't require a server/database and will be usable in a DVCS environment?
Update: Should be open-source and cross-platform
I can recommend TiddlyWiki. It does not need a web server, only a browser, and it stores the entire wiki in a single HTML page. This can easily be shared through Mercurial.
Edit: Check this page, which discusses how to use TiddlyWiki with a DVCS. It involves using an extension dubbed SynchroTiddly.
DokuWiki stores all data in plain text files. You could install a local web server for every developer and use your VCS to sync between developers.
ikiwiki: http://ikiwiki.info/ stores the content directly in the VCS (it supports Mercurial as a backend).
http://zim-wiki.org/
It's a desktop wiki (WYSIWYG editing, though not very sophisticated formatting) which stores everything in plain-text files. That means you can hold the files in version control, and have a friendly editing experience.
It even has built-in Bazaar support. UPDATE: also Git, Mercurial, and Fossil.
[I know, late to the party - writing for the benefit of others reading this question...]
Perhaps you should look at auto-generation of documentation from source. This way, the documentation will automatically be version controlled.
A lot of generators support adding additional documentation via plain-text files which can be added to the repository.
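For example, in Python the documentation can live in docstrings, which are versioned with the code and rendered by tools like pydoc or Sphinx (the module below is purely hypothetical):

```python
"""payments.py -- hypothetical module whose docs live in the source.

Running `python -m pydoc payments` (or pointing Sphinx autodoc at
this module) renders this documentation; since the docstrings live
in the source file, they are versioned by Mercurial automatically.
"""

def charge(amount_cents: int, token: str) -> str:
    """Charge a card and return a transaction id.

    amount_cents: amount in cents, e.g. 1999 for $19.99
    token: opaque card token from the payment gateway
    """
    raise NotImplementedError  # illustrative stub
```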
Look into Fossil: it is a DVCS that contains a built-in wiki and bug-tracking system. This may be just what you're looking for. Read the site; there is a built-in web server, and you can use a CGI script to open up the connection to other people (the Fossil website is itself served by the Fossil DVCS). After using it, you may decide to move your code over to it as well. It is open source and has cross-platform builds.
Ended up writing my own system using Python, CherryPy, and Mercurial. Perhaps one day it will end up open source. Thanks for all the suggestions.
http://hatta-wiki.org/ is a wiki running on a Mercurial repository.
It's interesting to note how it handles conflicts: simultaneous edits are silently merged on commit, and even conflicting edits are committed, with the conflict markers left in! That's OK because:
it's text, not software
you see the result of your edit immediately after committing
it treats conflict markers as valid wiki syntax (resulting in diff -u style highlighting of the conflict)!
This arrangement motivates you to edit again to resolve the conflict immediately - but doesn't force you to.
GitHub's gollum is open source, Git-based, and digests many popular markup syntaxes.
But the most important selling point, of course, is that it's built into GitHub.
Bitbucket similarly has a Mercurial-based wiki. Not sure whether the code is open source, though (i.e., you can edit the text offline, but I'm not sure you can see it rendered).
MoinMoin supports storing your pages in a Mercurial repository: http://moinmo.in/Storage2009/HelpOnStorageConfiguration#Mercurial_Backend_.28hg.29
This is quite interesting because MoinMoin has been around for a while, is rather well supported, and has a rich set of features (but that's just my opinion; don't take my word for it and see for yourself ;-)).