I am looking for a solution that will host a nearly-static 200GB, structured, clean dataset, and provide a JSON API onto the data, for querying in a web app.
Each row of my data looks like this, and I have about 700 million rows:
parent_org,org,spend,count,product_code,product_name,date
A31,A81001,1003223.2,14,QX0081,Rosiflora,2014-01-01
The data is almost completely static - it updates once a month. I would like to support straightforward aggregate queries like:
get total spending on product codes starting QX, by organisation, by month
get total spending by parent org A31, by month
And I would like these queries to be available over a RESTful JSON API, so that I can use the data in a web application.
I don't need to do joins, I only have one table.
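For concreteness, a minimal sketch of the first query above, written against the schema shown earlier and assuming a Postgres table named spend_rows (connection details are placeholders):

```python
import psycopg2  # querying the current Postgres setup; connection details are placeholders

conn = psycopg2.connect("dbname=spending")   # assumed database name
cur = conn.cursor()

# "total spending on product codes starting QX, by organisation, by month"
cur.execute("""
    SELECT org,
           date_trunc('month', date) AS month,
           SUM(spend)                AS total_spend
    FROM spend_rows                  -- assumed table name
    WHERE product_code LIKE 'QX%'
    GROUP BY org, month
    ORDER BY org, month
""")
rows = cur.fetchall()
```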
Solutions I have investigated:
To date I have been using Postgres (with a web app to provide the API), but am starting to reach the limits of what I can do with indexing and materialized views without dedicated hardware and more skills than I have.
Google Cloud Datastore: suitable for structured data of about this size, and has a baked-in JSON API, but doesn't do aggregates (so I couldn't support my "total spending" queries above).
Google Bigtable: can definitely handle data of this size and can do aggregates; I could build my own API using App Engine. I might need to convert the data to HBase format to import it.
Google BigQuery: fast at aggregating, easy to import data into, but I would need to roll my own API as with Bigtable.
I'm wondering if there's a generic solution for my needs above. If not, I'd also be grateful for any advice on the best setup for hosting this data and providing a JSON API.
Update: It seems that BigQuery and Cloud SQL support SQL-like queries, but Cloud SQL may not be big enough (see comments) and BigQuery gets expensive very quickly, because you pay for each query (billed on the data it scans), so it isn't ideal for a public web app. Datastore is good value, but doesn't do aggregates, so I'd have to pre-aggregate and have multiple tables.
Cloud SQL is likely sufficient for your needs. It certainly is capable of handling 200GB, especially if you use Cloud SQL Second Generation.
The only reason a conventional database like MySQL (the database Cloud SQL uses) might not be sufficient is if your queries are very complex and not indexed. I recommend you try Cloud SQL, and if the performance isn't good enough, make sure you have the right indexes (hint: use the EXPLAIN statement to see how the queries are being executed).
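For example, a quick way to check a query plan from Python, assuming the MySQL flavour of Cloud SQL and a table named spend_rows (all names and credentials below are placeholders):

```python
import mysql.connector  # assumes the mysql-connector-python package and a Cloud SQL MySQL instance

conn = mysql.connector.connect(
    host="127.0.0.1", user="app", password="secret", database="spending"  # placeholders
)
cur = conn.cursor()

# EXPLAIN shows whether the monthly aggregate can use an index
# (e.g. a composite index on (parent_org, date)) or has to scan the table.
cur.execute("""
    EXPLAIN
    SELECT DATE_FORMAT(date, '%Y-%m') AS month, SUM(spend) AS total_spend
    FROM spend_rows
    WHERE parent_org = 'A31'
    GROUP BY month
""")
for row in cur.fetchall():
    print(row)
```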
If your queries cannot be indexed in a useful way, or your queries are so CPU-intensive that they are slow regardless of indexing, you might want to graduate up to BigQuery. BigQuery is parallelised so that it can handle pretty much as much data as you throw at it; however, it isn't optimized for real-time use and isn't as convenient as Cloud SQL's "MySQL in a box".
Take a look at Elasticsearch. It's JSON, REST, cloud, distributed, quick at aggregate queries and so on. It may or may not be what you're looking for.
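For example, the first aggregate from the question maps naturally onto Elasticsearch aggregations. A rough sketch with the 8.x Python client; the index name and field mappings are assumptions:

```python
from elasticsearch import Elasticsearch  # assumes the 8.x elasticsearch-py client

es = Elasticsearch("http://localhost:9200")  # placeholder cluster address

# Total spend on product codes starting "QX", by organisation, by month.
# Assumes "org" and "product_code" are keyword fields and "date" is a date field.
resp = es.search(
    index="spending",
    size=0,
    query={"prefix": {"product_code": "QX"}},
    aggs={
        "by_org": {
            "terms": {"field": "org", "size": 1000},
            "aggs": {
                "by_month": {
                    "date_histogram": {"field": "date", "calendar_interval": "month"},
                    "aggs": {"total_spend": {"sum": {"field": "spend"}}},
                }
            },
        }
    },
)
print(resp["aggregations"]["by_org"]["buckets"])
```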
I am new to BigQuery and GCP. I am working with a (big) public dataset available in BigQuery, on which I am running a SQL query that selects a bunch of data from one of the tables in the dataset based on a simple WHERE clause.
I then perform additional operations on the obtained data. I only need to run this query once a month; the other operations need to run more often (hourly).
My problem is that every time I do this, it causes BigQuery to process 4+ million rows of data, and the cost of running this query is quickly adding up for me.
Is there a way I can run the SQL query and export the data to another table/database in GCP, and then run my operations on that exported data?
Am I correct in assuming (and I could be wrong here) that once I export data to standard SQL DB in GCP, the cost per query will be less in that exported database than it is in BigQuery?
Thanks!
Is there a way I can run the SQL query and export the data to another table/database in GCP, and then run my operations on that exported data?
You can run your SQL queries and export the data into another table or database in GCP by using the Client Libraries for BigQuery. You can also refer to this documentation about how to export table data using BigQuery.
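For example, with the BigQuery Python client library you can run the monthly query once and write its result straight into another BigQuery table; the project, dataset, table and query below are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Run the monthly query once and persist the result as its own table,
# so the hourly operations read the small result table instead of rescanning the source.
job_config = bigquery.QueryJobConfig(destination="my-project.my_dataset.monthly_extract")
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE

client.query(
    "SELECT * FROM `bigquery-public-data.some_dataset.some_table` WHERE category = 'x'",  # placeholder query
    job_config=job_config,
).result()
```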
As for the most efficient way to do it, I would use both the BigQuery and the Cloud SQL (for the other table/database) APIs.
The BigQuery documentation has an API example for extracting a BigQuery table to your Cloud Storage Bucket.
Once the data is in Cloud Storage, you can use the Cloud SQL Admin API to import the data into your desired database/table. I attached documentation regarding the best practices on how to import/export data within Cloud SQL.
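A rough sketch of those two steps with the Python clients; the bucket, instance, database and table names are all placeholders, and the Cloud SQL call assumes the discovery-based google-api-python-client:

```python
from google.cloud import bigquery
from googleapiclient import discovery

# Step 1: export the (already filtered) BigQuery table to Cloud Storage as CSV.
bq = bigquery.Client()
bq.extract_table(
    "my-project.my_dataset.monthly_extract",
    "gs://my-bucket/exports/monthly_extract.csv",
).result()

# Step 2: ask the Cloud SQL Admin API to import that CSV into a table.
sqladmin = discovery.build("sqladmin", "v1beta4")
sqladmin.instances().import_(          # "import" is a reserved word, hence the trailing underscore
    project="my-project",
    instance="my-cloudsql-instance",
    body={
        "importContext": {
            "fileType": "CSV",
            "uri": "gs://my-bucket/exports/monthly_extract.csv",
            "database": "analytics",
            "csvImportOptions": {"table": "monthly_extract"},
        }
    },
).execute()
```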
Once the import has finished, you can delete the residual files from your Cloud Storage bucket, either using the console or via the Cloud Storage API.
Am I correct in assuming (and I could be wrong here) that once I export data to standard SQL DB in GCP, the cost per query will be less in that exported database than it is in BigQuery?
As for pricing, here you will find how to estimate storage and query costs within BigQuery. For other databases like Cloud SQL, here you will find more information about Cloud SQL pricing.
Nonetheless, as Maxim points out, you can refer to both the best practices within BigQuery in order to maximize efficiency and minimize cost, and also the best practices for using Cloud SQL.
Both can greatly help you minimize cost and be more efficient in your queries or imports.
I hope this helps.
I am trying to find out what makes the most sense for my type of database structure.
Here is a breakdown of what it is and what I intend to do:
A deals-based website requiring strong consistency, which will need to re-point existing foreign keys to new parents in scenarios where an alias such as 'Coke' is not yet linked up to its actual record, 'Coca-Cola'.
I will be building a price-over-time history for these products, and it should be able to handle large amounts of data with few performance issues over time.
I initially began with Google's Bigtable but quickly realised that, without a relational layer, it will fail on any cascading updates.
I don't want to spend a lot of time researching and learning all of these different database types only to realise later that it isn't what I wanted. The most important aspects for me are the cascading updates and ensuring it can handle a vertically large (tall) data structure for the price-over-time trends.
Additionally, because this is from scratch, I would be more interested in price and scalability than existing compatibility.
Cloud SQL is a fully-managed database service that makes it easy to set up, maintain, manage, and administer your relational PostgreSQL (beta) and MySQL databases in the cloud. Cloud SQL offers high performance, scalability, and convenience. Hosted on Google Cloud Platform, Cloud SQL provides a database infrastructure for applications running anywhere.
Check this out, it might help: https://cloud.google.com/sql/
Google's Cloud SQL is a fully managed relational database service. It supports PostgreSQL and MySQL.
Google also provides Cloud Spanner, which is likewise a fully managed relational database service. In addition, Cloud Spanner is distributed, and it is better suited for mission-critical systems.
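To make the cascading-update requirement from the question concrete, here is a minimal sketch against a Cloud SQL (MySQL) instance; every table, column and connection detail below is invented for illustration:

```python
import mysql.connector  # assumes a Cloud SQL (MySQL) instance; all names below are invented

conn = mysql.connector.connect(host="127.0.0.1", user="app", password="secret", database="deals")
cur = conn.cursor()

# Canonical products, plus a tall price-over-time table that references them.
cur.execute("""
    CREATE TABLE product (
        product_id VARCHAR(64) PRIMARY KEY,
        name       VARCHAR(255) NOT NULL
    )
""")
cur.execute("""
    CREATE TABLE price_history (
        product_id  VARCHAR(64)   NOT NULL,
        observed_at DATETIME      NOT NULL,
        price       DECIMAL(10,2) NOT NULL,
        PRIMARY KEY (product_id, observed_at),
        FOREIGN KEY (product_id) REFERENCES product (product_id)
            ON UPDATE CASCADE
    )
""")

# Re-keying 'coke' to 'coca-cola' cascades to every price_history row that references it,
# which is exactly the behaviour Bigtable cannot give you.
cur.execute("UPDATE product SET product_id = 'coca-cola' WHERE product_id = 'coke'")
conn.commit()
```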
As far as I know, OLAP is used in Power Pivot to speed up interacting with data.
But I know that big-data databases like Google BigQuery and Amazon Redshift have appeared in the last few years. Do SQL-targeted BI solutions like Looker and Chart.io use OLAP, or do they rely on the speed of the databases?
Looker relies on the speed of the database but does model the data to help with speed. Mode and Periscope are similar to this. Not sure about Chartio.
OLAP was used to organize data to help with query speeds. While used by many BI products like Power Pivot and Pentaho, several companies have built their own ways of organizing data to help with query speed. Sometimes this includes storing data in their own data structures to organize the data. Many cloud BI companies like Birst, Domo and Gooddata do this.
Looker created a modeling language called LookML to model data stored in a data store. As databases are now faster than they were when OLAP was created, Looker took the approach of connecting directly to the data store (Redshift, BigQuery, Snowflake, MySQL, etc) to query the data. The LookML model allows the user to interface with the data and then run the query to get results in a table or visualization.
That depends. I have some experience with BI solutions (for example, we worked with Tableau), and they can operate in two main modes: they can execute the query against your server, or they can collect the relevant data and store it on the user's machine (or on the server where the app is installed). When working with large volumes, we used to have Tableau query the SQL Server itself, because our SQL Server machine was very strong compared to the other machines we had.
Either way, even if you store the data locally and want to "refresh" it, updating the data means retrieving it from the database again, which can sometimes also be an expensive operation (depending on how your data is built and organized).
You should also note that you are comparing two different families of products: while Google BigQuery and Amazon Redshift are actually database engines that are used to store the data and also query it, most BI and reporting solutions are more concerned with querying the data and visualizing it, and therefore (generally speaking) are less focused on having smart internal databases (at least in my experience).
We are very pleased with the combination BigQuery <-> Tableau Server with live connection. However, we now want to work with a data extract (500MB) on Tableau Server (since this datasource is not too big and is used very frequently). This takes too much time to refresh (1.5h+). We noticed that only 0.1% is query time and the rest is data export. Since the Tableau Server is on the same platform and location, latency should not be a problem.
This is similar to the slow export of a BigQuery table to a single file, which can be solved by using the "daisy chain" option (wildcards). Unfortunately we can't use similar logic with a Google BigQuery data extract refresh in Tableau...
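For reference, that daisy-chain export looks roughly like this with the BigQuery Python client (project, dataset and bucket names are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client()

# The wildcard URI shards the export into many files written in parallel,
# which is far faster than exporting one large file.
job_config = bigquery.ExtractJobConfig()
job_config.compression = "GZIP"

client.extract_table(
    "my-project.analytics.big_table",
    "gs://my-bucket/exports/big_table-*.csv.gz",
    job_config=job_config,
).result()
```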
We have identified some approaches, but are not pleased with our current ideas:
Working with incremental refresh: our existing BigQuery table rows can change, and those changes can only be applied in Tableau with a full refresh
Exporting the BigQuery table to GCS using the daisy chain option and making a Tableau data extract using the Tableau SDK: this would result in quite some overhead...
Writing a Dataflow job using a custom sink for Tableau Server (data extracts).
Experimenting with a Tableau web connector that communicates directly with the BigQuery API: I don't think this will be faster? I didn't see anything about parallelizing calls with the Tableau web connector, but I haven't tried this approach yet.
We would prefer a non-technical option, to limit maintenance... Is there a way to modify the Tableau connector to make use of the "daisy chain" option for BigQuery?
You've uploaded the data into BigQuery. Can't you just use the input for that load job (a CSV, perhaps) as the input for Tableau?
When we use Tableau and BigQuery we also notice that extracts are slow, but we generally don't use them because you lose BigQuery's power. We start with a live data connection at first, and then (if needed) convert this into a custom query that aggregates the data into a much smaller dataset, which extracts in just a few seconds.
Another way to achieve higher performance with BigQuery and Tableau is aggregating or joining tables beforehand. JOINs on huge tables can be slow, so if you use a lot of them you might consider generating a denormalised dataset which does all of the JOINing first. You will get a dataset with a lot of duplicates and a lot of columns, but if you select only what you need in Tableau (hide unused fields!) those columns won't count towards your query cost.
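A sketch of that pre-aggregation step, run in BigQuery before refreshing the Tableau extract; the dataset, table and column names are made up:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Collapse the big, denormalised source into a small summary table that Tableau
# can extract in seconds instead of hours.
client.query("""
    CREATE OR REPLACE TABLE analytics.daily_summary AS
    SELECT
        customer_segment,
        region,
        DATE_TRUNC(order_date, DAY) AS day,
        SUM(revenue)                AS total_revenue,
        COUNT(*)                    AS orders
    FROM analytics.orders_denormalised
    GROUP BY customer_segment, region, day
""").result()
```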
One recommendation I have seen is similar to your point 2 where you export the BQ table to Google Cloud Storage and then use the Tableau Extract API to create a .tde from the flat files in GCS.
This was from an article on the Google Cloud site so I'd assume it would be best practice:
https://cloud.google.com/blog/products/gcp/the-switch-to-self-service-marketing-analytics-at-zulily-best-practices-for-using-tableau-with-bigquery
There is an article here which provides a step by step guide to achieving the above.
https://community.tableau.com/docs/DOC-23161
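If you go that route, the Cloud Storage half is straightforward; here is a sketch that pulls the exported shards back down and stitches them together with pandas (bucket and prefix are placeholders), after which the rows would be written out with the Tableau Extract API as described in the linked articles:

```python
import pandas as pd
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")          # placeholder bucket

# Download every shard produced by the wildcard export and concatenate them.
frames = []
for blob in bucket.list_blobs(prefix="exports/big_table-"):
    local_path = "/tmp/" + blob.name.split("/")[-1]
    blob.download_to_filename(local_path)
    frames.append(pd.read_csv(local_path, compression="gzip"))

df = pd.concat(frames, ignore_index=True)
# df's rows would then be written out as a .tde via the Tableau Extract API,
# following the steps in the articles linked above.
```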
It would be nice if Tableau optimised the BQ connector for extract refresh using the BigQuery Storage API. We too have our Tableau Server environment in the same GCP zone as our BQ datasets and experience slow refresh times.
I am relatively new to Azure storage and have been implementing a solution for some time now.
And I keep hitting obstacles, making me feel that I'm not applying the right storage type for the data I'm storing.
So this is more of an overall question:
When should I use Azure SQL?
When should I use Azure Table storage?
When should I use Azure Blobs?
So far I have been using table storage a lot, and I'm now paying for it.
As requirements for the solution grow I find myself unable to access the data as needed.
For instance, I need to fetch the 50 latest entries in a table, but I cannot use OrderBy in the query.
I need to fetch the total number of entries, but cannot use Count.
I keep getting the impression that any data I plan to access regularly without knowing the exact RowKey and PartitionKey should be indexed in Azure SQL as well as being stored in a table. Is this correct?
I also find myself recreating objects as Entity objects, but with the very severe limitations on data types I often end up just serializing the object into a byte array. And though a table row may hold up to 1MB, a byte array property on that row may only hold 64KB, at which point I end up using Blob storage instead.
So in the end I feel like I would have been better off just putting all my data in Azure SQL and indexing larger data but saving it as blobs.
Of course this does not feel quite right, since that would leave Table storage with no real purpose.
So I'm wondering if there are any guidelines for when to use which kind of storage.
In my case I have a very large amount of data in some areas; some of it consumes a fair amount of space (often above 64KB), but I also need to access the data very frequently and will need to be able to filter and sort it by certain values.
Do I really need to index all data I plan to access in SQL?
And would I be better off avoiding Table for any data that could potentially exceed 64KB?
I feel like there's something I'm not doing right. Something I didn't understand. What am I missing here?
The best recommendation I can make is basically, "Try really hard not to use Azure Table Storage". As other folks have pointed out, it's not just a "No-SQL" data-store, it's a particularly stunted, handicapped, and very-low-featured instance of a No-SQL store. About the only thing good about it is that you can put lots and lots of data into it very quickly, and with minimal storage fees. However, you basically can't hope to get that data back out again unless you're lucky enough to have a use-case that magically matches its Partition-Key/Row-Key storage model. If you don't - and I suspect very few people do - you're going to be doing a lot of partition scans, and processing the data yourself.
Beyond that, Azure Table Storage seems to be at a dead-end in terms of development. If you look at the "Support Secondary Indexes" request on the Azure feedback forums (https://feedback.azure.com/forums/217298-storage/suggestions/396314-support-secondary-indexes), you can see that support for Secondary Indexes was promised as far back as 2011, but no progress has been made. Nor has any progress been made on any of the other top requests for Table Storage.
Now, I know that Scott Guthrie is a quality guy, so my hope is that all this stagnation on the Table Storage front is a preface to Azure fixing it and coming up with something really cool. That's my hope (though I have zero evidence that's the case). But for right now, unless you don't have a choice, I'd strongly recommend against Azure Table Storage. Use Azure SQL; use your own instance of MongoDB or some other No-SQL DB; or use Amazon DynamoDB. But don't use Azure Table Storage.
EDIT: 2014-10-09 - Having been forced into a scenario where I needed to use it, I've modified my opinion on Azure Table Storage slightly. It does in fact have all the regrettable limitations I ascribe to it above, but it also has its (limited) uses. I go into them somewhat on a blog post here.
EDIT: 2017-02-09 - Nah, ATS is still awful. Steer clear of it. It hasn't improved significantly in 7+ years, and MS obviously wishes it would just go away. And it probably should - they're presumably only keeping it around for folks who made the mistake of betting on it originally.
Have a look at this: Windows Azure Table Storage and Windows Azure SQL Database - Compared and Contrasted.
It doesn't include blobs, but it's a good read anyway...
I keep getting the impression that any data I plan to access regularly without knowing the exact RowKey and PartitionKey should be indexed in Azure SQL as well as being stored in a table. Is this correct?
Table storage does not support secondary indexes, so any efficient queries should contain the RowKey and the PartitionKey. There are workarounds, such as saving the same data twice in the same table with different RowKeys, but this can quickly become a pain. If eventual consistency is OK then you could do this, but you need to take care of transactions and rollbacks yourself.
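A minimal sketch of that duplicate-write workaround using the azure-data-tables client; the table name, keys and connection string are placeholders:

```python
from azure.data.tables import TableClient  # assumes the azure-data-tables package

table = TableClient.from_connection_string(conn_str="<connection string>", table_name="Orders")

order = {"OrderId": "O1001", "CustomerId": "C042", "Total": 99.5, "CreatedUtc": "2014-05-01T10:00:00Z"}

# Write the same data twice under two key layouts so it can be looked up efficiently
# either by customer or by date. The two writes land in different partitions, so they
# are NOT atomic - this is where the transactions/rollbacks caveat above comes in.
table.upsert_entity({"PartitionKey": "C042", "RowKey": "O1001", **order})
table.upsert_entity({"PartitionKey": "byDate", "RowKey": "2014-05-01T10:00:00Z_O1001", **order})
```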
In my case I have a very large amount of data in some areas; some of it consumes a fair amount of space (often above 64KB), but I also need to access the data very frequently and will need to be able to filter and sort it by certain values.
Use Table storage for basic NoSQL functionality and the ability to scale quickly. However, if you want secondary indexes and other such features, you might have to take a look at something like DynamoDB on AWS, which as far as I know has better support for secondary indexes etc. If you have data with complex relationships, in other words data that requires an RDBMS, go with SQL Azure.
Now, as far as your options on Azure go, I think you would need to store everything in SQL Azure, with large objects stored as blobs or in Table storage.
Do I really need to index all data I plan to access in SQL?
Tough to say. If each partition is going to contain, say, just 100 rows, then you can query by partition key plus any of the columns; at that point the partition scan is going to be pretty fast. However, if a partition has a million rows then it could be a problem.
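For example, with the azure-data-tables client, a query that pins the partition and then filters on an ordinary property looks like this (names are placeholders):

```python
from azure.data.tables import TableClient

table = TableClient.from_connection_string(conn_str="<connection string>", table_name="Orders")

# Pinning PartitionKey keeps this to a single-partition scan; the Total filter is then
# applied to the (hopefully small) number of rows in that partition.
big_orders = table.query_entities("PartitionKey eq 'C042' and Total gt 50.0")
for entity in big_orders:
    print(entity["RowKey"], entity["Total"])
```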
I feel like there's something I'm not doing right. Something I didn't understand. What am I missing here?
A bunch of early Azure users started using Table Storage without understanding what NoSQL (and in this case a particularly stunted version of NoSQL) entails.