Offsite backups [closed] - backup

I was recently tasked with coming up with an offsite backup strategy. We have about 2TB of data that would need to be backed up so our needs are a little out of the norm.
I looked into Iron Mountain and they wanted $12,000 a month!
Does anyone have any suggestions on how best to handle backing up this much data on a budget (like a tenth of Iron Mountain)? How do other companies afford to do this?
Thanks!
UPDATE:
Ironically enough, I just had the sort of devastating failure we're all talking about. My BES server failed, and then two days later two drives in my Exchange server's RAID5 died (2!!!??!). I'm currently in the process of rebuilding my network, and backup integrity is definitely an issue.
At least now my bosses are paying attention :)

You can buy external eSATA RAID boxes in the 8TB capacity range for $2600. I'm not saying that particular product is the right choice, but that's the kind of box that will do 6TB in RAID5 and still be portable enough to buy a couple of them and rotate them through the bank, like Stu says.
Obviously if you have to keep 7 individual days' worth, plus 14-day, 30-day and 90-day snapshots, etc., then things are going to be much more expensive, but it's certainly doable if what you're after is just disaster recovery.
The biggest thing to make sure is part of your plan is actually testing the restoration from the backup. That seems to get overlooked WAY too often and turns out to be the weakest link in nearly all of the strategies.
You should plan for scheduled restorations as often as is reasonable, where you actually dump the real data and restore from the backup. Without that, you don't know that it will work when you NEED it to.
I've lost track of the number of times I've been in a company where there's a big rack full of backup tapes/drives, all dutifully made according to the schedule only to find out that NONE of them have valid data when the server gets wiped out.
The more ways you can verify the integrity of the backups the better, but nothing substitutes for doing an actual dump/load from one of your backups to really test the setup.

Amazon S3 might fit your budget better. I don't know if there is software available to automate the backup process but it's rather easy to write your own code to handle this. Here's their pricing calculator.
According to my estimates you're going to be well under the $1000/mo range.
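If you do end up writing your own code, the upload itself is only a few lines. Here's a rough sketch using boto3; the bucket name and archive path are placeholders, not anything from your setup:

```python
# Rough sketch of pushing a nightly dump to S3 with boto3.
# Bucket name, key prefix, and archive path are placeholders.
import datetime

import boto3

BUCKET = "example-offsite-backups"
ARCHIVE = "/backups/nightly-dump.tar.gz"

s3 = boto3.client("s3")
key = f"nightly/{datetime.date.today().isoformat()}/dump.tar.gz"

# upload_file handles multipart uploads automatically for large files.
s3.upload_file(ARCHIVE, BUCKET, key)
print(f"Uploaded {ARCHIVE} to s3://{BUCKET}/{key}")
```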

You really have to assess the true value of your data. If you lost it tomorrow, what impact would it have on your business? We use offsite backups; it isn't cheap, but if we were to lose our data the business would cease to trade within 2-3 days.
We considered on-site backups as a possible cost saver but in my experience with data centres/computer rooms over the last ten years (as both an employee and a customer) I've seen fires, fire suppression system malfunctions (wet), hardware theft and one day a car crashed through an external wall right into the suite. Add to that our last DC was located at Heathrow, right next to the runways....you never know what strange things can happen (remember the BA 777 that got caught short of the runway on landing?).
My advice, assess the value of the data then decide if $12k is too rich to keep it safe.

2TB is chump change nowadays.
Look into hard-drive based hot-swappable backup machines, and rent a box at your local bank:
http://www.high-rely.com/ (there are many more products such as this, but my Google-time is limited).

Jungle Disk is one such piece of software that can automate the backup process to Amazon S3. I use it for backup at home, but I guess it could work just as well from a server. Also, there are probably other backup tools that make use of S3 for offsite storage.

We've been using DataDomain appliances for that purpose for about 2 years. They're not inexpensive, but compared to $12,000/month they'd pay for themselves pretty quickly.
Basically, we send our backups over NFS and CIFS to one DataDomain appliance, it deduplicates the data and then replicates the differences to the other appliance we have at a remote site.
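For anyone wondering what the dedup step buys you conceptually: the appliance only has to replicate chunks it hasn't seen before. This toy sketch shows the idea with fixed-size chunks and SHA-256 hashes; it is not DataDomain's actual algorithm (real appliances use variable-length chunking), just an illustration:

```python
# Toy illustration of deduplication: split input into fixed-size chunks,
# hash each one, and store only chunks that haven't been seen before.
# Real appliances use variable-length chunking; this is just the concept.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB, an arbitrary choice


def dedupe(path: str, store: dict) -> list:
    """Return the chunk-hash manifest for `path`, adding new chunks to `store`."""
    manifest = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in store:  # only previously unseen data needs replicating
                store[digest] = chunk
            manifest.append(digest)
    return manifest
```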

As for pure online solutions, make sure you do some back-of-the-envelope calculations first. For example, 2TB of churn a month works out to roughly 6 Mbit/s of sustained transfer, so a 1Mb Internet connection will be saturated by your backup traffic alone!
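The arithmetic takes only a couple of lines; the 2TB/month churn figure is just the example number from above:

```python
# Back-of-the-envelope: what sustained throughput does 2 TB of monthly churn need?
churn_bytes = 2 * 10**12            # 2 TB changed per month
seconds_per_month = 30 * 24 * 3600

required_bps = churn_bytes * 8 / seconds_per_month
print(f"Sustained throughput needed: {required_bps / 1e6:.1f} Mbit/s")
# => about 6.2 Mbit/s, around the clock, before any protocol overhead or retries.
```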

As previously mentioned, Amazon S3 is definitely an option, but it may be cheaper in the long run to own the hardware you are backing up to.
For example:
Buy a basic server and an eSATA RAID5 setup with 2-3 times the capacity you currently need, then install it at a co-location center. Preferably one with high, but cheap, bandwidth.
This way the server and storage are off-site, but after the initial cost of the hardware, you are only paying for bandwidth.
Granted, the downside to this is that, unlike something like S3, if the hardware goes down you have to go fix it yourself, or pay the CoLo people to. But this may be a tradeoff you are willing to make.
Also, with this solution, you are still going to need a beefy upload pipe to handle the traffic... so there's always the "sneakernet" solution.

I've used bqbackup.com for 1-2 years with no problems. You can do a sync using rsync nightly. Wanted to add that their prices are dirt cheap, and I now have close to 1TB with them.
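For reference, the nightly sync can be as simple as an rsync over SSH kicked off from cron. The remote user, host, and paths below are placeholders, not bqbackup specifics:

```python
# Minimal nightly sync using rsync over SSH (run from cron).
# The remote user, host, and paths are placeholders.
import subprocess

SOURCE = "/data/"
DEST = "backupuser@backup.example.com:/backups/data/"

subprocess.run(
    [
        "rsync",
        "-az",        # archive mode, compress over the wire
        "--delete",   # mirror deletions so the remote copy matches the source
        "--partial",  # keep partial files so interrupted transfers resume cheaply
        SOURCE,
        DEST,
    ],
    check=True,
)
```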

Related

What is a good access time to a database (SQL)?

Hey.. I want to know what counts as a good access time, because I'm searching for a good SQL database and HSQLDB says their access time is 12ms... <-- good?
I think it would depend on your needs. Is it for a web server or a desktop application? The amount of data is also important, because reading lots of small records will perform differently than reading a few large records. Access time is also based upon your hardware, software and maybe even some other factors.
For example, you can use a database with lightning-fast access, but if your users need to connect to it over a 5 megabit VPN connection, passing through three different proxies and with traffic worldwide, your database would then just be a waste of power.
Basically, it's a marketing claim they're making. It's a good product, but don't just focus on access time. Make sure you also look at your other needs. Another system might perform better, even if it has a slower access time, because it is more optimized in reading its indices and so on.
So, what do you want, exactly?
I don't think access time tells you anything, really. If you have slow or incorrectly configured storage, then this access time metric will be dwarfed by how much time is spent on waits and split I/Os. Network latency is also a factor, since I'm guessing you probably won't want to have your code on the same machine as your database, and you will most likely have a few network devices you'll need to traverse in your production environment.
In my experience, all the database platforms these days will perform adequately if configured correctly and paired with a complementary application. Pick the DBMS that best fits your requirements, follow the best practices for configuration of the DBMS on your hardware, and you should be pleased with the outcome.
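If you want a number you can actually trust, measure a representative query yourself instead of quoting the vendor's figure. Here's a minimal sketch; sqlite3 is used only so it runs standalone, so swap in your own DB-API driver and a real query, and run it across your actual network path:

```python
# Measure real round-trip latency of a representative query.
# sqlite3 keeps the snippet self-contained; use your own DB-API driver in practice.
import sqlite3
import statistics
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (val) VALUES (?)", [(f"row{i}",) for i in range(10_000)])

samples = []
for i in range(1_000):
    start = time.perf_counter()
    conn.execute("SELECT val FROM t WHERE id = ?", (i % 10_000 + 1,)).fetchone()
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

print(f"median {statistics.median(samples):.3f} ms, "
      f"p95 {statistics.quantiles(samples, n=20)[18]:.3f} ms")
```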

Backups for online businesses - better external hard drives or tape drives

We are an online business. Currently we are using DVDs for our backups. The problem is we are running out of space.
I guess there are two alternatives here:
external hard disk drives
tape drives
The important point is we want to carry the backup with us from the office to home every day.
Which alternative do you think would suit best our needs?
The important point is we want to carry the backup with us from the office to home every day.
This is on the level of "do we want to go to work or not". Backups that are not stored externally ARE NOT BACKUPS. Ever heard of buildings going down? Burning out?
You NEED backups that are at least far enough to survive a larger fire.
Both scenarios are feasible. Tapes have larger capacity if you grow.
Also remember that soonish we'll get... writeable 100GB Blu-ray discs ;)

Restart daily or 100% uptime for enterprise applications?

I have a general question that is rather open-ended (i.e. "depends on platform, application type, etc.") but I am looking for general guidelines as an answer.
When is it preferable to design an application for continuous operation (100% uptime) vs. scheduled daily shutdown/restart?
Obviously, web apps need to be up all the time, so assume for this question that we are discussing an internal enterprise application, such as an accounting system, or a B2B system that is only used actively during weekday business hours.
Arguments I've heard for each are as follows:
Pro 100% Uptime: "once you get an application running, it's better to keep it up, because there's a chance it won't restart when you shut it down."
Pro daily restarts: "an application that is up continuously for 3 years might one day go down, and nobody will know how to bring it back online."
Other considerations are memory growth, performance, need for maintenance, etc. This is a programming issue because the choice you make can affect your technical design. For example, you don't need to code certain batch jobs and clear state daily if you know the application will be shutdown/restarted daily.
Thoughts?
The arguments you state both for and against 100% uptime are foolish arguments, in my opinion. If you're worried about the application not restarting when it is shut down, then you have larger issues than uptime concerns. Likewise, if you feel that nobody will know how to bring it back online after a prolonged period of uptime, you have training and documentation issues.
The reality is that you should always design an application to be efficient when it comes to memory consumption and performance. Generally, by doing this you end up with an application that can successfully survive as a long-running process or one that restarts frequently. Keep in mind that your typical computer system is rebooted periodically anyway due to OS updates, etc.
Unless you have requirements and service level agreements that guarantee 100% uptime, this isn't usually something you have to be overly concerned about as long as you design an application efficiently.
Sorry, but I'm not getting the point or this question is totally pointless.
An application, any application, should be designed, IMO, to stay up for as long as it's needed. If an application/platform needs to be restarted daily, then it has memory leaks, or bugs, or it's, in general, poorly written.
The point "don't make it stay up too long, otherwise you'd risk nobody will ever remember how to turn it up again" is quite laughable. I do Application Management (Operations) as my daily job, and I've never seen an application staying up for more than one month. After that period, you have to cope with OS maintainance, db patching, software upgrades, etc.
So, to summarize: write applications that can stay up as long as it's needed.
When is it preferable to design an application for continuous operation (100% uptime) vs. scheduled daily shutdown/restart?
I think this is really an orthogonal question to application design. Many web servers and application containers can support hot restarts. In other words, this is not a question so much of "application design" but rather a choice of technology. For example, you can avoid the question entirely by simply having N copies of your application (N > 1), then systematically bringing a particular instance down for maintenance and restarting as needed.
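As a sketch of that N-copies approach: restart one instance at a time and wait for it to come back healthy before moving on, so N-1 copies stay up throughout. The host names, health endpoint, and "myapp" systemd unit below are all made-up placeholders:

```python
# Sketch of a staggered restart across a small cluster.
# Host names, health port, and the "myapp" systemd unit are hypothetical.
import subprocess
import time
import urllib.request

INSTANCES = ["app1.internal", "app2.internal", "app3.internal"]
HEALTH_PORT = 8080


def is_healthy(host: str) -> bool:
    """True if the instance answers its (hypothetical) health endpoint."""
    try:
        with urllib.request.urlopen(f"http://{host}:{HEALTH_PORT}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


def restart(host: str) -> None:
    """Restart the app on one host; 'myapp' is a placeholder unit name."""
    subprocess.run(["ssh", host, "sudo", "systemctl", "restart", "myapp"], check=True)


for host in INSTANCES:
    restart(host)
    # Wait for this instance to come back before touching the next one,
    # so N-1 copies stay up the whole time.
    while not is_healthy(host):
        time.sleep(5)
```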
Furthermore, business needs and requirements should be determining the appropriate downtime, not your choice of technology.
Pro daily restarts: "an application that is up continuously for 3 years might one day go down, and nobody will know how to bring it back online."
Hogwash. That is a social/organizational argument, not a technical one. This is solved by having an obvious build process which includes starting the server as one of its possible tasks. That reduces the task of "restarting" to a single command.
If you're not extremely confident in your team, it might be better to go down from time to time, to clear everything. Once a day could do it, but there is a range from this to "never" ...
But this is generally dictated by business constraints. If you don't have those constraints yet...
Well, why don't you also postpone your decision, then?
As others said, if you can't trust your app to start up again you have much larger issues.
From experience, my general, personal recommendation for web apps is to cycle them once a day (in the early hours of the morning, i.e. at the lowest traffic point), staggered over the whole server cluster. No matter how memory efficient your app is, web apps in particular can always develop cache bloat issues over extended periods of uptime, and once you accept the inevitability of a restart, you absolutely want it to happen on your schedule and not at the whim of w3wp.exe.
Of course this all depends on the number of servers you have, the traffic manager you have (if any) and what your traffic profile looks like.
Apart from "Your app is not good enough if you need to restart it" ideas (which I see them perfect and I like them), I would prefer something in the middle as a preventive measure.
If you application is not too big, and one person can restart it without much trouble, it would be fine to restart it maybe once per month or 3/4 times per year. This way you will ensure that the sysadmin knows well how to do it (people sometimes comes and go form the companies) and also his knowledge keeps fresh.
If you have a problem and your sysadmin has not restarted the application since two years ago, he will have several manuals available and courses done, but probably he has forget some steps, or he is not so quick to solve the problem.
Other topic to consider is: "Is a fully implemented application or are you still working on it?" If it's an application made for yourselves, you still code on it and make frequent upgrades for new features, it can be interesting to restart it more frequently. If a problem appears, it has more probabilities to be hidden on the new code. It will help your programmers to fix it and your sysadmin to keep updated about what's happening with the app.
Of course, making a perfect application is always a top-prio element, but... ok, we all know that not always is possible

How do you handle off-site backups of terabytes of data?

I have terabytes of files and database dumps that I need to backup off-site.
What's the best way to accomplish this?
I'm currently weighing rsync to Amazon EBS or getting an appliance (e.g. Barracuda).
I called a buddy of mine, and he said he uses Bacula to get all the files on a single disk, then backs that disk up to tape, then sends the tapes off to Iron Mountain.
Still waiting to hear back from other sysadmins I've contacted. Will post results here.
One common solution to offsite backups that is worth considering is performing the backup onsite and then physically transporting the backup elsewhere, either via secure snail mail or with a service designed for that purpose. If bandwidth is an issue, this may be more practical.
Instead of tapes, I use hard drives that I physically swap out every week. It is less expensive than tape equipment, and easier to plug into another system when necessary.
Back in the late 80s I worked at a place where every Monday we received a box of tapes of various sorts - we would do one set of weekly backups on the tapes in that box and send them off-site. Evidently they had two of these boxes: one that was in our office, and the other they kept locked up somewhere. Then we got an Exabyte drive with a single-tape capacity greater than that whole box of TK-50s, QIC-40s and mag tapes, and it was just simpler to send a single tape home with one of the managers every week.
I'm sure there are still off-site backup systems like that, but I find it easier to keep cycling a couple of 500GB drives from my home system to my desk at work.
Why not encrypt it and actually upload it to a third-party vendor?
I am thinking of doing this with my data at home but have not found a vendor that will just let me do a dump... They all want to install client-side apps...
Admittedly, I have not looked that hard...
We use a couple of solutions. We have an offsite backup with another company that we do. We also use several portable hard drives and swap them out each day. Neither solution really handles multiple terabytes of data. More like gigabytes.
In the future, however, we will probably be looking at going the tape route, or something else that is similarly permanent and storable. Terabytes of data is too much to transfer over the wire. When Blu-ray discs become reasonably priced and commercially viable, it may be a good idea to look into the 400GB discs that were touted not long ago. Those would be extremely storage friendly (both in the physical sense and the file size sense), and depending on the longevity stats, may keep for a while, similar to tapes.
I would recommend using a local san from a company like EMC that provides compressed snapshot based replication to remote facilities. It's an expensive solution, but it works.
http://www.emc.com/products/family/emc-centera-family.htm
Over the weekend, I've heard back from a couple of my sysadmin buddies.
It seems the best practice is to backup all machines to a central large disk, then back that disk up to tape, then send the tapes off site (all have used Iron Mountain).
Tapes hold 400-800G and cost $30-$80 per tape.
A tape changer seems to go for $10k on up.
Not sure how much the off-site shipping costs.
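Plugging those figures into a quick back-of-the-envelope (the data size, rotation scheme, and mid-range tape numbers are assumptions):

```python
# Rough media cost for the tape approach, using mid-range figures from above.
# The data size, rotation scheme, and per-tape numbers are assumptions.
import math

data_tb = 2.0            # size of one full backup
tape_capacity_tb = 0.6   # ~600 GB per tape (middle of the 400-800 GB range)
tape_price = 55          # ~$55 per tape (middle of the $30-$80 range)
weekly_sets_kept = 4     # keep four weekly full sets in rotation
changer_cost = 10_000    # one-time hardware cost

tapes_per_set = math.ceil(data_tb / tape_capacity_tb)
media_cost = tapes_per_set * weekly_sets_kept * tape_price

print(f"{tapes_per_set} tapes per full set, ${media_cost} in media, "
      f"plus ${changer_cost} up front for the changer")
# => 4 tapes per set, $880 in media, plus $10,000 up front for the changer.
```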
I'm scared of tape. I think it gives a false sense of data security. In my own experience from backing up dozens of terabytes across hundreds of tapes, we discovered that the data recovery rate after a few years fell to about 70%.
To be fair, that was with a now discontinued technology (AIT), but it pretty much put me off tape for life unless it sits on a 1" spool and is reassuringly expensive.
These days, multiple hard drives, multiple locations, and yes, a fall back into Amazon S3 or other cloud provider does no harm (apart from being a tad expensive).

How do I calculate the "cost" of a crash? [closed]

Background:
Some time ago, I built a system for recording and categorizing application crashes for one of our internal programs. At the time, I used a combination of frequency and aggregated lost time (the time between the program launch and the crash) for prioritizing types of crashes. It worked reasonably well.
Now, The Powers That Be want solid numbers on the cost of each type of crash being worked on. Or at least, numbers that look solid. I suppose I could use the aggregate lost time, multiplied by some plausible figure, but it seems dodgy.
Question:
Are there any established methods of calculating the real-world cost of application crashes? Or failing that, published studies speculating on such costs?
Consensus
Accuracy is impossible, but an estimate based on uptime should suffice if it is applied consistently and its limitations clearly documented. Thanks, Matt, Orion, for taking time to answer this.
The Powers That Be want solid numbers on the cost of each type of crash being worked on
I want to fly in my hot air balloon to Mars, but it doesn't mean that such a thing is possible.
Seriously, I think you have a duty to tell them that there is no way to accurately measure this. Tell them you can rank the crashes, or whatever it is that you can actually do with your data, but that's all you've got.
Something like "We can't actually work out how much it costs. We DO have this data about how long things are running for, and so on, but the only way to attach costs is to pretend that X minutes equals X dollars even though this has no basis in reality"
If you just make some bullcrap costing algorithm and DON'T push back at all, you only have yourself to blame when management turns around and uses this arbitrary made up number to do something stupid like fire staff, or decide not to fix any crashes and instead focus on leveraging their synergy with sharepoint portal internet web sharing love server 2013
Update: To clarify, I'm not saying you should only rely on stats with 100% accuracy, and just give up on everything else.
What I think is important is that you know what it is you're measuring. You're not actually measuring cost, you're measuring uptime. As such, you should be upfront about it. If you want to estimate the cost, that's fine, but I believe you need to make this clear.
If I were to produce such a report, I'd call it the 'crash uptime report' and maybe have a secondary field called "Estimated cost based on $5/minute." The managers get their cost estimate, but it's clear that the actual report is based on the uptime, and cost is only an estimate, and how the estimate works.
I've not seen any studies, but a reasonable heuristic would be something like:
( Time since last application save when crash occurred + Time to restart application ) * Average hourly rate of application operator.
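A worked example of that heuristic, with made-up inputs:

```python
# Worked example of the heuristic above; all inputs are made-up figures.
minutes_since_last_save = 20   # work lost when the crash hit
minutes_to_restart = 5         # time to get the application running again
hourly_rate = 45.0             # average loaded hourly cost of the operator

cost_per_crash = (minutes_since_last_save + minutes_to_restart) / 60 * hourly_rate
print(f"Estimated cost of this crash: ${cost_per_crash:.2f}")
# => Estimated cost of this crash: $18.75
```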
The estimation gets more complex if the crashes have some impact on external customers, or might delay other things (i.e. create a bottleneck such that another person winds up sitting around waiting because someone else's application crashed).
That said, your 'powers that be' may well be happy with a very rough estimate so long as it's applied consistently and they can see how it is changing over time.
There is a missing factor here... most applications have a 'buckling' factor where crashes suddenly start "costing" a lot more because people lose confidence in the service your app is providing. Once that happens, it can be very costly to get users back to trusting and using the system.
It depends...
In terms of cost, the only thing that matters is the business impact of the crash, so it rather depends on the type of application.
For many applications, it may not be possible to determine business impact. For others, there may be meaningful measures.
Demand-based measures may be meaningful - if sales are steady, then downtime for a sales app may be a useful measure. If sales fluctuate unpredictably, then such measures are less useful.
Cost of repair may also be useful.