Pros & cons of BigQuery vs. Amazon Redshift [closed] - google-bigquery

Comparing Google BigQuery with Amazon Redshift shows that both can meet the same set of requirements and differ mostly in their pricing models. It seems that Redshift is more complex to configure (defining keys and doing optimization work), whereas Google BigQuery perhaps has an issue with joining tables.
Is there a pros & cons list of Google BigQuery vs. Amazon Redshift?

I posted this comparison on reddit. Soon enough, a long-term Redshift practitioner commented on my statements. Please see https://www.reddit.com/r/bigdata/comments/3jnam1/whats_your_preference_for_running_jobs_in_the_aws/cur518e for the full conversation.
Sizing your cluster:
Redshift will ask you to choose a number of CPUs, RAM, HD, etc. and to turn them on.
BigQuery doesn't care. Use it whenever you want, no provisioning needed.
Hourly costs when doing nothing:
Redshift will ask you to pay per hour of each of these servers running, even when you are doing nothing.
When idle BigQuery only charges you $0.02 per month per GB stored. 2 cents per month per GB, that's it.
Speed of queries:
Redshift performance is limited by the number of CPUs you are paying for.
BigQuery transparently brings in as many resources as needed to run your query in seconds.
Indexing:
Redshift will ask you to index (correction: distribute) your data under certain criteria, and you'll only be able to run fast queries based on this index.
BigQuery has no indexes. Every operation is fast.
Vacuuming:
Redshift requires periodic maintenance and 'vacuum' operations that last hours. You are paying for each of these server hours.
BigQuery does not. Forget about 'vacuuming'.
Data partitioning and distributing:
Redshift requires you to think about how to distribute data within your servers to keep performance up - optimization that works only for certain queries.
BigQuery does not. Just run whatever query you want.
Streaming live data:
Impossible(?) with Redshift.
BigQuery easily handles ingesting up to 100,000 rows per second per table (a streaming-insert sketch follows after this list).
Growing your cluster:
If you have more data or more concurrent users, scaling up will be painful with Redshift.
BigQuery will just work.
Multi zone:
You want a multi-zone Redshift for availability and data integrity? Painful.
BigQuery is multi-zoned by default.
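To illustrate the streaming path mentioned above, here is a minimal Python sketch using the BigQuery client library's streaming insert call; the project, dataset, table and row fields are placeholders, not a prescribed schema.

from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses Application Default Credentials

# Fully qualified table id; project, dataset and table are placeholders.
table_id = "my-project.my_dataset.events"

rows = [
    {"event_time": "2015-09-03T12:00:00Z", "user_id": 42, "action": "click"},
    {"event_time": "2015-09-03T12:00:01Z", "user_id": 43, "action": "view"},
]

# insert_rows_json performs a streaming insert and returns a list of
# per-row errors, which is empty on success.
errors = client.insert_rows_json(table_id, rows)
if errors:
    print("Streaming insert reported errors:", errors)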
To try BigQuery you don't need a credit card or any setup time. Just try it (quick instructions to try BigQuery).
When you are ready to put your own data into BigQuery, just copy your newline-delimited JSON logs to Google Cloud Storage and import them.
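For example, such a load from Cloud Storage might look like this with the Python client library; the bucket, project, dataset and table names are placeholders, and schema auto-detection is just one option.

from google.cloud import bigquery

client = bigquery.Client()

# Hypothetical bucket and table names.
gcs_uri = "gs://my-bucket/logs/*.json"
table_id = "my-project.my_dataset.raw_logs"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
    autodetect=True,  # let BigQuery infer the schema from the JSON
)

load_job = client.load_table_from_uri(gcs_uri, table_id, job_config=job_config)
load_job.result()  # wait for the load job to finish
print("Loaded", client.get_table(table_id).num_rows, "rows")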
See this in-depth guide to data warehouse pricing on the cloud:
Understanding Cloud Pricing Part 3.2 - More Data Warehouses

Amazon Redshift is a standard SQL database (based on Postgres) with MPP features that allow it to scale. These features also require you to conform your data model somewhat to get the best performance. It supports a large portion of the SQL standard, and most tools that can speak to Postgres can use it unchanged.
BigQuery is not a database in the traditional sense: it doesn't use standard SQL and doesn't provide JDBC/ODBC connectivity. It's a unique service with its own API and interfaces. It provides limited support for SQL queries, but most users interact with it via custom code (Java, Python, etc.). Some third-party tools have added support for BigQuery, but existing tools will not work without modification.
tl;dr - Redshift is better for interacting with existing tools and using complex SQL. BigQuery is better for custom coded interactions and teams who dislike SQL.
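As a rough, present-day illustration of that difference, here is a hedged sketch; the cluster endpoint, credentials, dataset and table names are all placeholders, and the BigQuery part uses today's Python client library rather than the raw API.

# Redshift speaks the Postgres wire protocol, so a standard Postgres driver works:
import psycopg2  # pip install psycopg2-binary

conn = psycopg2.connect(
    host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="analytics",
    user="admin",
    password="placeholder",
)
with conn, conn.cursor() as cur:
    cur.execute("SELECT COUNT(*) FROM events WHERE event_date = CURRENT_DATE")
    print(cur.fetchone()[0])

# BigQuery is reached through its own client library / API instead:
from google.cloud import bigquery  # pip install google-cloud-bigquery

bq = bigquery.Client()  # uses Application Default Credentials
for row in bq.query("SELECT COUNT(*) AS n FROM my_dataset.events").result():
    print(row.n)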
UPDATE 2017-04-17 - Here's a much more up to date summary of the cost and speed differences (wrapped in a sales pitch so YMMV). TL;DR - Redshift is usually faster and will be cheaper if you query the data somewhat regularly. http://blog.panoply.io/a-full-comparison-of-redshift-and-bigquery
UPDATE - Since I keep getting down votes on this (🤷‍♂️) here's an up-to-date response to the items in the other answer:
Sizing your cluster:
Redshift allows you to tailor your costs to your usage. If you want the fastest possible queries choose SSD nodes and if you want the lowest possible cost per GB choose HDD nodes. Start small and add nodes whenever you want.
Hourly costs when doing nothing:
Redshift keeps your cluster ready for queries, can respond in milliseconds (result cache) and it provides a simple, predictable monthly bill.
For example, even if some script accidentally runs 10,000 giant queries over the weekend your Redshift bill will not increase at all.
Speed of queries:
Redshift performance is absolutely best in class and gets faster all the time. 3-5x faster in the last 6 months.
Indexing:
Redshift has no indexes. It allows you to define sort keys to optimize performance from fast to insanely fast.
Vacuuming:
Redshift now automatically runs routine maintenance such as ANALYZE and VACUUM DELETE when your cluster has free resources.
Data partitioning and distributing:
Redshift never requires distribution. It allows you to define distribution keys which can make even huge joins very fast.
{Ask competitors about join performance…}
Streaming live data:
Redshift has two options:
Stream real-time data into Redshift using Amazon Kinesis Firehose (a minimal Firehose sketch follows after this list).
Skip ingestion altogether by querying your real-time data on S3 the instant it lands (and at high speed) using Redshift Spectrum external tables.
Growing your cluster:
Redshift can elastically resize most clusters in a few minutes.
Multi zone:
Redshift seamlessly replaces any failed hardware and continuously backs up your data, including across regions if desired.
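Following up on the streaming item above, here is a minimal hedged sketch of pushing records into a Kinesis Firehose delivery stream that targets Redshift; the stream name, region and record fields are placeholders, and the stream-to-Redshift mapping is configured separately in Firehose.

import json
import boto3  # pip install boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Firehose buffers incoming records and COPYs them into the Redshift table
# that the delivery stream was configured with (stream name is a placeholder).
record = {"camera_id": 7, "event_time": "2017-04-17T12:00:00Z", "status": "ok"}
firehose.put_record(
    DeliveryStreamName="redshift-events-stream",
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)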

Related

is redshift really required when quicksight can query directly from s3 using athena?

We have data dumped into S3 buckets, and we are using this data to pull some reports in QuickSight, some accessing S3 directly as the data source and, for other reports, using Athena to query S3.
At which point does one need to use Redshift? Is there any advantage to using Redshift over S3 + Athena?
No, you might be perfectly fine with just QuickSight, Athena and S3 - it will also be relatively cheaper if you keep Redshift out of the equation. Athena is based on Presto and is pretty comprehensive in terms of functionality for most SQL reporting needs.
You would need Redshift if you approach or hit QuickSight's SPICE limits and would still like your reports to be snappy and load quickly. From a data engineering side, if you need to update existing records it is easier to micro-batch and update records in Redshift. With Athena/S3 you also need to take care of optimising the storage format (use ORC/Parquet file formats, use partitions, avoid small files, etc.) - it is not rocket science, but some people prefer paying for Redshift and not having to worry about that at all.
In the end, Redshift will probably scale better when your data grows very large (into the petabyte scale). However, my suggestion would be to keep using Athena and follow its best practices, and only use Redshift if you anticipate huge growth and want to be sure that you can scale the underlying engine on demand (and, of course, are willing to pay extra for it).
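As a rough sketch of the Athena-on-S3 path, running a report query from Python with boto3 could look like this; the database, query, output bucket and region are placeholders.

import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Database, table and output bucket below are placeholders.
response = athena.start_query_execution(
    QueryString="SELECT report_date, COUNT(*) AS n FROM events GROUP BY report_date",
    QueryExecutionContext={"Database": "reporting"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query finishes, then page through the first batch of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # rows[0] is the header row
        print([col.get("VarCharValue") for col in row["Data"]])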

Data processing - BigQuery vs Data Proc+BigQuery

We have large volumes (10 to 400 billion records) of raw data in BigQuery tables. We need to process this data into star-schema tables (probably a different dataset in BigQuery), which can then be accessed by AtScale.
Need pros and cons between two options below:
1. Write complex SQL within BigQuery that reads data from the source dataset and then loads it into the target dataset (used by AtScale).
2. Use PySpark or MapReduce with the BigQuery connectors on Dataproc and then load the data into the BigQuery target dataset.
The complexity of our transformations involves joining multiple tables at different granularities, using analytic functions to get the required information, etc.
Presently this logic is implemented in Vertica using multiple temp tables for faster processing, and we want to rewrite it on GCP (BigQuery or Dataproc).
I went successfully with option 1: BigQuery is very capable of running very complex transformations with SQL, and on top of that you can run them incrementally with time-range decorators. Note that it takes a lot of time and resources to move data back and forth out of BigQuery. When you run BigQuery SQL, the data never leaves BigQuery in the first place, and you already have all your raw logs there. So as long as your problem can be solved by a series of SQL statements, I believe this is the best way to go.
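A minimal sketch of option 1 with the BigQuery Python client, writing the result of a transformation query into a target table; the project, dataset and table names and the SQL itself are illustrative assumptions, not the asker's actual schema.

from google.cloud import bigquery

client = bigquery.Client()

# Write the result of a transformation query straight into a target table.
job_config = bigquery.QueryJobConfig(
    destination="my-project.star_schema.fact_orders",
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

sql = """
SELECT o.order_id, o.customer_id, c.region, SUM(o.amount) AS total_amount
FROM raw_dataset.orders AS o
JOIN raw_dataset.customers AS c USING (customer_id)
GROUP BY o.order_id, o.customer_id, c.region
"""

client.query(sql, job_config=job_config).result()  # blocks until the job is done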
We moved off our Vertica reporting cluster last year, successfully rewriting the ETL with option 1.
Around a year ago, I wrote a POC comparing Dataflow with a series of BigQuery SQL jobs orchestrated by the potens.io workflow, which allows SQL parallelization at scale.
It took a good month to write the Dataflow job in Java, with 200+ data points and complex transformations, and with terrible debugging capability at the time.
It took a week to do the same using a series of SQL statements with potens.io, utilizing Cloud Functions for windowed tables and parallelization with clustered transient tables.
I know there have been a bunch of improvements in Cloud Dataflow since then, but at the time Dataflow did fine only at the millions scale and never completed on billions of input records (the main reason being a shuffle cardinality of just under a billion records, with each record having 200+ columns). The SQL approach produced all the required aggregations in under 2 hours for a dozen billion rows. The debugging and ease of troubleshooting with potens.io helped a lot too.
Both BigQuery and DataProc can handle huge amounts of complex data.
I think that you should consider two points:
Which transformations would you like to perform on your data?
Both tools can perform complex transformations, but you have to consider that PySpark gives you the processing capability of a full programming language, while BigQuery gives you SQL transformations and some scripting structures. If SQL and simple scripting structures can handle your problem, BigQuery is an option. If you need complex scripts to transform your data, or if you think you'll need to build extra features involving transformations in the future, PySpark may be a better option. You can find the BigQuery scripting reference here
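For comparison, here is a minimal PySpark sketch of option 2, reading a BigQuery table through the spark-bigquery connector on Dataproc and writing an aggregate back; table and bucket names are placeholders, and the connector must be available on the cluster.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq-transform").getOrCreate()

# Read a BigQuery table through the spark-bigquery connector
# (table and bucket names are placeholders).
orders = (
    spark.read.format("bigquery")
    .option("table", "my-project.raw_dataset.orders")
    .load()
)

totals = orders.groupBy("customer_id").sum("amount")

# Write the aggregate back to BigQuery; the connector stages data in GCS.
(
    totals.write.format("bigquery")
    .option("table", "my-project.star_schema.customer_totals")
    .option("temporaryGcsBucket", "my-staging-bucket")
    .mode("overwrite")
    .save()
)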
Pricing
BigQuery and Dataproc have different pricing systems. While in BigQuery you need to be concerned about how much data your queries process, in Dataproc you have to be concerned about your cluster's size and VM configuration, how long your cluster will be running, and some other settings. You can find the pricing reference for BigQuery here and for Dataproc here. Also, you can simulate the pricing in the Google Cloud Platform Pricing Calculator
I suggest that you create a simple POC for your project in both tools to see which one has the best cost benefit for you.
I hope this information helps you.

Optimized storage for large integer series

I'm currently designing the back-end of a start-up from scratch. We scrape time series from the Internet: a large number of integers every minute, stored as timestamped rows in CSV files.
We haven't started to exploit the data properly, as we are still in the design phase. I was wondering: what would be the optimal storage for several years of integer series? We started to look toward loading it into Postgres, but is SQL suited to exploiting time series?
I was hoping to find a miracle piece of software that would be optimal for handling this specific kind of dataset, and would be glad to hear any suggestion that would enable:
Persistent large storage
Averaging/grouping calculation, possibly other R-like features
Gains in performance, power or ease of use compared to raw SQL database storage
Every minute, 8,000 values translates into 11.5 million values per day or 4 billion rows per year. This is a heavy load. Just the insert load (using whatever ACID-compliant method) is noticeable -- over 100 inserts per second. This is definitely manageable in modern database systems, but it is not trivial.
It is quite likely that Postgres can handle this load, with appropriate indexes and partitioning schemes. The exact nature of this solution depends on the queries that you need to run, but Postgres does have the underlying tools to support it.
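To make the Postgres suggestion concrete, here is a minimal sketch of a time-partitioned table with batched inserts; it assumes PostgreSQL 10+ declarative partitioning, and the connection string, table and column names are placeholders rather than a definitive design.

import psycopg2  # pip install psycopg2-binary
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=metrics user=postgres")  # placeholder DSN

with conn, conn.cursor() as cur:
    # Monthly range partitions keep indexes small and let old data be
    # detached or dropped cheaply (PostgreSQL 10+ syntax).
    cur.execute("""
        CREATE TABLE IF NOT EXISTS samples (
            ts     timestamptz NOT NULL,
            series integer     NOT NULL,
            value  bigint      NOT NULL
        ) PARTITION BY RANGE (ts);
        CREATE TABLE IF NOT EXISTS samples_2015_09
            PARTITION OF samples
            FOR VALUES FROM ('2015-09-01') TO ('2015-10-01');
        CREATE INDEX IF NOT EXISTS samples_2015_09_series_ts_idx
            ON samples_2015_09 (series, ts);
    """)

    # Batch the per-minute scrape into one round trip instead of 8,000 single inserts.
    rows = [("2015-09-03 12:00:00+00", i, 12345) for i in range(8000)]
    execute_values(cur, "INSERT INTO samples (ts, series, value) VALUES %s", rows)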
However, your requirements are (in my opinion) bigger than Stack Overflow can provide. If you are designing such a system, you should enlist the help of a professional Postgres DBA. I might add that you could consider looking at cloud-based solutions such as Amazon Redshift or Microsoft Azure because these allow you to easily scale the system "just" by paying more money.

Pros and Cons of using MongoDB instead of MS SQL Server [closed]

I am new to the NoSQL world and thinking of replacing my MS SQL Server database with MongoDB. My application (written in .NET C#) interacts with IP cameras and records metadata for each image coming from a camera into the MS SQL database. On average, I am inserting about 86,400 records per day per camera, and in the current database schema I have created a separate table for each camera's images, e.g. Camera_1_Images, Camera_2_Images ... Camera_N_Images. A single image record consists of simple metadata such as AutoId, FilePath and CreationDate. To add more detail, my application starts a separate process (.exe) for each camera, and each process inserts one record per second into that camera's table.
I need suggestions from (MongoDB) experts on the following concerns:
To tell if MongoDB is good for holding such data, which will eventually be queried against time ranges (e.g. retrieve all images of a particular camera within a specified hour)? Any suggestions about document-based schema design for my case?
What should be the specs of the server (CPU, RAM, disk)? Any suggestions?
Should I consider sharding/replication for this scenario (while considering the performance of writes syncing to replica sets)?
Are there any benefits of using multiple databases on the same machine, so that one database will hold the current day's images for all cameras, and the second one will be used to archive the previous days' images? I am thinking of this with respect to splitting reads and writes across separate databases, since all read requests could be served by the second database and writes would go to the first one. Will this help or not? If yes, any ideas on how to ensure that both databases stay in sync?
Any other suggestions are welcomed please.
I am myself a beginner with NoSQL databases, so I am answering this at the risk of downvotes, but it will be a great learning experience for me.
Before trying my best to answer your questions, I should say that if MS SQL Server is working well for you then stick with it. You have not mentioned any valid reason WHY you want to use MongoDB, except the fact that you learnt about it as a document-oriented DB. Moreover, I see that you capture almost the same set of metadata for each camera, i.e. your schema is not really dynamic.
To tell if MongoDB is good for holding such data, which will eventually be queried against time ranges (e.g. retrieve all images of a particular camera within a specified hour)? Any suggestions about document-based schema design for my case?
MongoDB, being a document-oriented DB, is good at querying within an aggregate (what you would call a document). Since you are already storing each camera's data in its own table, in MongoDB you would have a separate collection for each camera. Here is how you perform date-range queries.
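For example, a hedged pymongo sketch of such a per-camera collection and an hour-range query; the connection string, collection and field names are assumptions, not a prescribed schema.

from datetime import datetime, timedelta
from pymongo import MongoClient, ASCENDING  # pip install pymongo

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
db = client.surveillance

# One collection per camera, mirroring the per-camera tables in SQL Server.
camera_1 = db.camera_1_images
camera_1.create_index([("creation_date", ASCENDING)])

start = datetime(2013, 5, 1, 14, 0)
end = start + timedelta(hours=1)

# All images captured by camera 1 within the given hour.
for image in camera_1.find({"creation_date": {"$gte": start, "$lt": end}}):
    print(image["file_path"])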
What should be the specs of the server (CPU, RAM, disk)? Any suggestions?
All NoSQL databases are built to scale out on commodity hardware. But from the way you have asked the question, you might be thinking of improving performance by scaling up. You can start with a reasonable machine and, as the load increases, keep adding more servers (scaling out). You don't need to plan for and buy a high-end server.
Should I consider sharding/replication for this scenario (while considering the performance of writes syncing to replica sets)?
MongoDB locks the entire DB for a single write (but yields to other operations) and is meant for systems that have more reads than writes, so this depends on how your system behaves. There are multiple ways of sharding, and the choice should be domain-specific; a generic answer is not possible. Some examples: sharding by geography, by branch, etc.
Also read A plain english introduction to CAP Theorem
Updated with answer to the comment on sharding
According to their documentation, you should consider deploying a sharded cluster if:
your data set approaches or exceeds the storage capacity of a single node in your system.
the size of your system’s active working set will soon exceed the capacity of the maximum amount of RAM for your system.
your system has a large amount of write activity, a single MongoDB instance cannot write data fast enough to meet demand, and all other approaches have not reduced contention.
So based on the last point, yes. The auto-sharding feature is built to scale writes; in that case, you have a write lock per shard, not per database. But mine is a theoretical answer - I suggest you get a consultation from the 10gen.com group.
To tell if MongoDB is good for holding such data, which will eventually be queried against time ranges (e.g. retrieve all images of a particular camera within a specified hour)?
This question is too subjective for me to answer. From personal experience with numerous SQL solutions (ironically not MS SQL) I would say they are both equally good, if done right.
Also:
What should be the specs of the server (CPU, RAM, disk)? Any suggestions?
It depends on too many variables that only you know; however, a small cluster of commodity hardware works quite well. I cannot really give a factual response to this question, and it will come down to your testing.
As for a schema I would go for a document of the structure:
{
    _id: {},
    camera_name: "my awesome camera",
    images: [
        {
            url: "http://I_like_S3_here.amazons3.com/my_image.png",
            // All your other fields per image
        }
    ]
}
This should be quite easy to maintain and update, so long as you are not embedding much deeper; then it could become a bit of a pain. However, that depends on your queries.
Not only that, but this should be good for sharding, since you have all the data you need in one document; if you were to shard on _id you could probably get the perfect setup here.
Should I consider sharding/replication for this scenario (while considering the performance of writes syncing to replica sets)?
Possibly. Many people assume they need to shard when in reality they just need to be more intelligent about how they design the database. MongoDB is very free-form, so there are a lot of ways to do it wrong, but that being said, there are also a lot of ways of doing it right. I personally would keep sharding in mind. Replication can be very useful too.
Are there any benefits of using multiple databases on the same machine, so that one database will hold the current day's images for all cameras, and the second one will be used to archive the previous days' images?
Even though MongoDB's write lock is at the DB level (currently), I would say: no. The right document structure and the right sharding/replication (if needed) should be able to handle this in document-based collection(s) under a single DB. Not only that, but you can direct writes and reads within a cluster to certain servers so as to split the concurrency between certain machines in your cluster. I would promote the correct use of MongoDB's concurrency features over DB separation.
Edit
After reading the question again, I realised I had missed that you are inserting 80k+ images per camera per day. As such, instead of the embedded option I would actually create a document per image in a collection called images, plus a camera collection, and query the two much like you would in SQL.
Sharding the images collection should be just as easy on camera_id.
Also make sure you take your working set into consideration when sizing your server.
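A minimal pymongo sketch of that per-image layout, including the compound index that serves the time-range query; the database, collection and field names (camera_id, created_at) are placeholders, not a prescribed schema.

from datetime import datetime
from pymongo import MongoClient, ASCENDING

db = MongoClient()["surveillance"]  # placeholder database name

# One document per image, referencing its camera, instead of embedding
# 86,400 images per day inside a single camera document.
db.images.insert_one({
    "camera_id": 1,
    "file_path": "/archive/cam1/2013-05-01/120001.jpg",
    "created_at": datetime(2013, 5, 1, 12, 0, 1),
})

# A compound index serves the "one camera within a time range" query
# and doubles as a sensible shard key.
db.images.create_index([("camera_id", ASCENDING), ("created_at", ASCENDING)])

for doc in db.images.find({
    "camera_id": 1,
    "created_at": {"$gte": datetime(2013, 5, 1, 12), "$lt": datetime(2013, 5, 1, 13)},
}):
    print(doc["file_path"])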
To tell if MongoDB is good for holding such data, which will eventually be queried against time ranges (e.g. retrieve all images of a particular camera within a specified hour)? Any suggestions about document-based schema design for my case?
MongoDB can do this. For better performance, you can set an index on your time field.
What should be the specs of the server (CPU, RAM, disk)? Any suggestions?
I think RAM and disk would be important.
If you don't want to do sharding to scale out, you should consider a larger disk so you can store all your data on it.
Your hot data should fit into your RAM. If not, then you should consider more RAM, because the performance of MongoDB mainly depends on RAM.
Should I consider sharding/replication for this scenario (while considering the performance of writes syncing to replica sets)?
I don't know how many cameras you have, but even 1,000 inserts/second across 1,000 cameras should still be easy for MongoDB. If you are concerned about insert performance, I don't think you need sharding (unless the data size is so big that you have to spread it across several machines).
Another consideration is the read frequency of your application. If it is very high, then you can consider sharding or replication here.
And you can use (timestamp + camera_id) as your shard key if you only query one camera over a time range.
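If you do end up sharding, a hedged sketch of declaring that compound shard key through a mongos router with pymongo; the database, collection and field names are placeholders.

from pymongo import MongoClient

admin = MongoClient("mongodb://mongos-host:27017").admin  # connect via a mongos router

# Enable sharding on the database, then shard the images collection on a
# compound key; names below are placeholders.
admin.command("enableSharding", "surveillance")
admin.command(
    "shardCollection",
    "surveillance.images",
    key={"timestamp": 1, "camera_id": 1},
)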
Are there any benefits of using multiple databases on the same machine, so that one database will hold the current day's images for all cameras, and the second one will be used to archive the previous days' images?
You can separate the table into two collections (archive and current), and set an index only on archive if you only run date queries against the archive. Without the overhead of index maintenance, the current collection should benefit on inserts.
And you can write a daily job to move the current data into the archive.
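A hedged sketch of such a daily archival job with pymongo; the collection and field names are placeholders, and a production job would move the data in batches rather than in one pass.

from datetime import datetime
from pymongo import MongoClient

db = MongoClient()["surveillance"]  # placeholder database name

# Midnight UTC today: everything older belongs in the archive.
cutoff = datetime.combine(datetime.utcnow().date(), datetime.min.time())

batch = list(db.current_images.find({"created_at": {"$lt": cutoff}}))
if batch:
    db.archive_images.insert_many(batch)
    db.current_images.delete_many({"created_at": {"$lt": cutoff}})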

Optimization techniques for large databases [closed]

What optimization techniques do you use on extremely large databases? If our estimations are correct, our application will have billions of records stored in the db (MS SQL Server 2005), mostly logs that will be used for statistics. The data contains numbers (mostly integer) and text (error message texts, URLs) alike.
I am interested in ANY kind of tips, hacks, solutions.
The question is a little bit vague, but here are a few tips:
Use appropriate hardware for your databases. I'd opt for 64-bit OS as well.
Have dedicated machines for the DBs. Use fast disks configured for optimal performance. The more disks you can span over, the better the performance.
Optimize the DB for the type of queries that will be performed. Which happens more: SELECTs or INSERTs?
Does the load happen throughout the entire day, or just for a few hours? Can you postpone some of the work to run at night?
Have incremental backups.
If you'll consider Oracle instead of SQL Server, you could use features such as Grid and Table Partitioning, which might boost performance considerably.
Consider having some load-balancing solution between the DB servers.
Pre-design the schemas and tables so queries will perform as fast as possible. Consider the appropriate indexes as well.
You're gonna have to be more specific about the way you're going to store those logs. Are they LOBs in the DB? Simple text records?
I don't use it myself, but I have read that one can use Hadoop in combination with HBase for distributed storage and distributed analysis of data like logs.
Duncan's link has a good set of tips. Here are a few more:
If you do not need to query against totally up-to-date data (i.e. if data up to the last hour or close of business yesterday is acceptable), consider building a separate data mart for the analytics. This allows you to optimise this for fast analytic queries.
The SQL Server query optimiser has a star transformation operator. If the query optimiser recognises this type of query, it can select the slice of data you want by filtering on the dimension tables before it touches the fact table. This reduces the amount of I/O needed for the query.
For VLDB applications involving large table scans, consider direct-attached storage with as many controllers as possible rather than a SAN. You can get more bandwidth more cheaply. However, if your data set is less than (say) 1 TB or so it probably won't make a great deal of difference.
A 64-bit server with lots of RAM is good for caching if you have locality of reference in your query accesses. However, a table scan has no locality of reference so once it gets significantly bigger than the RAM on your server extra memory doesn't help so much.
If you partition your fact tables, consider putting each partition on a separate disk array - or at least a separate SAS or SCSI channel if you have SAS arrays with port replication. Note that this will only make a difference if you routinely run queries across multiple partitions.