Does anyone have information or ideas about offloading SQL data processing to other hardware, possibly in a cloud environment (internal or external)? We have nightly processes that we don't really have the processing power to complete overnight, and we are looking for alternatives. We are considering new hardware, but that won't happen for a while.
More details about our situation:
We are using SQL Server 2008, licensed with Software Assurance (SA), currently on 2 CPUs. The server is backed by an EVA 4000 with maxed-out spindles. Our database is almost 2 TB in size. We have lots of nightly processes that crunch data for summary tables and that schedule our customer email sends. Currently, we are limited by what the EVA can physically do; most of the time, the reads and writes are what take the longest. We are considering moving off the EVA to something else, but this will not happen until 2013 or 2014.
I would recommend you checkout Amazon Web Services: http://aws.amazon.com/running_databases/
Program A is good at collecting data while Program B, in another language, is good at creating REST APIs. Is it possible to connect these two with a single database that A and B will read and write to? Performance for database operations is not really an issue in my case.
Sure, this is possible. Databases can typically handle multiple connections from different programs/clients, and a database does not really care which language the connecting tool is written in.
Short edit:
Also, most databases support transactions, which ensure that different connected clients do not break the consistency of your application data while reading and writing in parallel.
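A minimal sketch of the idea, using SQLite from the standard library so it is self-contained. The file name and table are hypothetical; with a server database (PostgreSQL, MySQL, ...) program A and program B would each simply use their own client driver and connection string.

```python
import sqlite3

# Hypothetical shared database file standing in for any shared database.
DB = "shared.db"

# "Program A" (the data collector) writes inside a transaction.
writer = sqlite3.connect(DB)
with writer:  # opens a transaction; commits on success, rolls back on error
    writer.execute(
        "CREATE TABLE IF NOT EXISTS readings (id INTEGER PRIMARY KEY, value REAL)"
    )
    writer.execute("INSERT INTO readings (value) VALUES (?)", (42.0,))

# "Program B" (the REST API) reads through its own, separate connection.
reader = sqlite3.connect(DB)
rows = reader.execute("SELECT value FROM readings").fetchall()
print(rows)  # the committed row is visible to the second connection
```

The `with writer:` block is what gives you the transactional guarantee: either both statements commit together or neither does, so a concurrent reader never sees a half-written state.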
Can you give me some hints on the pros and cons of choosing RavenHQ versus hosting Raven on our own server?
Facts
Internet Web application (OLTP)
30 000 documents or records per month will be generated
Approx. 300 simultaneous users (data entry); maybe less, but it needs to scale up to 300 if necessary
4 Admins for reporting and issues
Will have to maintain end of day backup
Will have to replicate to SQL or other RDBMS for reporting purpose
(like Datawarehouse)
Will enable Versioning Bundle for audit trail
Absolutely critical in terms of losing money if it doesn't work
Working time from morning till afternoon
Please advise me on the most reliable and fast choice; I'm not considering cost in this decision.
RavenHQ or host raven in our own dedicated Server?
I would recommend RavenHQ with a Replicated Plan, given your requirement that it is absolutely critical that it works. With a dedicated server you have a single point of failure: if it goes down, nothing is going to work. RavenHQ supports your:
backup requirement (https://ravenhq.zendesk.com/entries/24241973-Periodic-Backups-to-Amazon-S3-Glacier)
Versioning Bundle requirement (https://ravenhq.zendesk.com/entries/21336716-What-RavenDB-bundles-are-supported-)
300 simultaneous users, easily
30k documents a month would be about 450 MB of space a month, which would be covered by the Gold and Platinum level plans.
I'm not sure what you mean by "4 Admins", so I cannot comment on that.
You would have to write your own data warehousing service, as SQL Replication is not a supported plug-in, but that would be very easy to do.
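For reference, the "about 450 MB a month" figure above implies an average document size of roughly 15 KB. That average is an assumption chosen to match the answer's estimate; here is the back-of-the-envelope arithmetic:

```python
# Storage estimate behind the plan sizing above.
docs_per_month = 30_000
avg_doc_kb = 15  # assumed average RavenDB document size, in KB

monthly_mb = docs_per_month * avg_doc_kb / 1024
print(f"{monthly_mb:.0f} MB per month")  # roughly 439 MB

yearly_gb = monthly_mb * 12 / 1024
print(f"{yearly_gb:.1f} GB per year")
```

If your documents turn out to be larger or smaller on average, scale the estimate accordingly before picking a plan tier.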
I have a production db that I'd like to copy to dev. Unfortunately it takes about an hour to do this via mysqldump | mysql, and I am curious whether there is a faster way to do it via direct SQL commands within MySQL, since the copy stays within the same DBMS rather than moving to another DBMS elsewhere.
Any thoughts / ideas on a streamlined process to perform this inside of the dbms so as to eliminate the long wait time?
NOTE: The primary goal here is to avoid hour-long copies, as we need some data from production in the dev db very quickly. This is not a question about locking or replication; I wanted to clarify based on some comments, since I initially included more info and ancillary remarks than I should have.
You could set up a slave to replicate the production db, then take dumps from the slave. This would allow your production database to continue operating normally.
After the slave is done performing a backup, it will catch back up with the master.
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions-backups-mysqldump.html
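A small sketch of what dumping from the slave might look like. The host, user, and database names are hypothetical; the `mysqldump` flags (`--single-transaction`, `--quick`) are standard options for a consistent, memory-friendly dump of InnoDB tables.

```python
import subprocess  # only needed for the optional run shown at the bottom

def build_slave_dump_command(host="replica.example.com", user="backup",
                             database="production"):
    """Assemble a mysqldump invocation aimed at the replication slave."""
    return [
        "mysqldump",
        "--host", host,
        "--user", user,
        "--single-transaction",  # consistent snapshot without table locks (InnoDB)
        "--quick",               # stream rows instead of buffering them in memory
        database,
    ]

cmd = build_slave_dump_command()
print(" ".join(cmd))

# To actually run it (requires mysqldump installed and valid credentials):
# with open("production.sql", "wb") as out:
#     subprocess.run(cmd, stdout=out, check=True)
```

Because the dump runs against the slave, the hour-long read load never touches the production master; the slave simply catches up on replication afterwards.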
I am in the process of evaluating whether BigQuery could be a good choice for some data processing we want to run periodically.
I do appreciate that BigQuery is still under heavy development and that improvements and fixes are likely to happen pretty often.
I would like to know what the process would be should a release break a previously working process. I have read the SLAs, but they seem to be oriented more toward downtime than toward regression issues/bugs. Is there an option for paid support with SLAs?
Like most commercial services, Google applies rigorous build and test processes to ensure quality. If you have specific support requirements, we encourage you to contact Google's Cloud Platform sales team to discuss our premier offering.
We have a need to have a datastore of some form that has the following properties.
Relocatable, local or remote systems.
Capable of multiple readers/writers; new queries should reflect the latest updates.
De-centralized, no server would be required.
Capable of holding at least 16 MB of data.
SQL CE seems capable, but I'm not sure I understand what technologies would go into integrating such a solution, as I don't really have an SQL background.
Is there anyone who has tackled a problem like this? What solutions have worked for you?
For point #1, do you want to be able to access the SQL CE database remotely on a share? If so, I do not believe you want to do this, as CE is not targeted at that scenario. See this link for some details. I think it would be fine for the other three items, if I am understanding you properly.