Where does a calculated field perform better in QuickSight: in the dataset or in visuals?

I can add a calculated field in Amazon QuickSight in three places:
Option 1: In an Athena query, saved as a view. Then reference that view as a QuickSight dataset and use it in visuals
Option 2: In a QuickSight dataset, and use it in visuals
Option 3: Directly in QuickSight visuals
I wonder which placement is the best option when it comes to dashboard performance. Can anyone share their views?
I tried placing it on the Athena side and on the dataset side. To me they seem about the same; I only noticed slightly slower page loading when I placed it on the Athena side. But I am not sure where it should ideally go.

I prefer to create the custom dimensions in Athena and then query them from QuickSight. Performance might be a bit worse this way (because you're transferring more data), but you avoid vendor lock-in: if you ever change BI tools, you don't have to rebuild the dataset. I calculate fields directly in the visuals only if they are aggregations (sum, count, etc.).
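To make that concrete, here is a minimal sketch of the Athena-side option, assuming a hypothetical `sales` table and made-up column names; the derived dimensions live in the view, which QuickSight then consumes as a dataset:

```sql
-- Hypothetical example: derived dimensions computed in an Athena view that
-- QuickSight consumes as a dataset. Table and column names are illustrative.
CREATE OR REPLACE VIEW sales_enriched AS
SELECT
    order_id,
    customer_id,
    amount,
    date_trunc('month', order_date) AS order_month,  -- calculated field pushed to Athena
    CASE WHEN amount >= 1000 THEN 'large' ELSE 'small' END AS order_size
FROM sales;
```

Aggregations such as sum(amount) would then remain in the visual itself, as suggested above.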

Related

Is Redshift really required when QuickSight can query directly from S3 using Athena?

We have data dumped into S3 buckets and we are using that data to pull some reports in QuickSight, some accessing S3 directly as a data source, and for other reports we use Athena to query S3.
At what point does one need to use Redshift? Is there any advantage of using Redshift over S3 + Athena?
No, you might be perfectly fine with just QuickSight, Athena and S3 - it will be relatively cheaper as well if you keep Redshift out of the equation. Athena is based on Presto and is pretty comprehensive in terms of functionality for most SQL reporting needs.
You would need Redshift if you approach or hit QuickSight's SPICE limits and would still like your reports to be snappy and load quickly. From a data engineering side, if you need to update existing records it is easier to micro-batch and update records in Redshift. With Athena/S3 you also need to take care of optimising the storage format (use ORC/Parquet file formats, use partitions, avoid lots of small files, etc.) - it is not rocket science, but some people prefer paying for Redshift and not having to worry about that at all.
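As an illustration of the storage-format point, an Athena CTAS statement along these lines (bucket, table, and column names are made up) rewrites raw data as partitioned, compressed Parquet so that queries scan less data:

```sql
-- Hypothetical example: rewrite a raw table as partitioned, Snappy-compressed
-- Parquet. All names and the S3 location are illustrative.
CREATE TABLE events_parquet
WITH (
    format              = 'PARQUET',
    parquet_compression = 'SNAPPY',
    partitioned_by      = ARRAY['event_date'],
    external_location   = 's3://my-bucket/events_parquet/'
) AS
SELECT
    user_id,
    event_type,
    payload,
    event_date    -- partition columns must come last in a CTAS SELECT
FROM events_raw;
```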
In the end, Redshift will probably scale better when your data grows very large (into the petabyte scale). However, my suggestion would be to keep using Athena, follow its best practices, and only use Redshift if you anticipate huge growth and want to be sure that you can scale the underlying engine on demand (and, of course, are willing to pay extra for it).

Data processing - BigQuery vs Dataproc + BigQuery

We have large volumes (10 to 400 billion rows) of raw data in BigQuery tables. We have a requirement to process this data to create star schema tables (probably in a different BigQuery dataset), which can then be accessed by AtScale.
I need the pros and cons of the two options below:
1. Write complex SQL within BigQuery that reads data from the source dataset and then loads it into the target dataset (used by AtScale).
2. Use PySpark or MapReduce with the BigQuery connectors from Dataproc and then load the data into the BigQuery target dataset.
The complexity of our transformations involves joining multiple tables at different granularities, using analytic functions to get the required information, etc.
Presently this logic is implemented in Vertica using multiple temp tables for faster processing, and we want to rewrite this processing logic in GCP (BigQuery or Dataproc).
I went successfully with option 1: BigQuery is very capable of running very complex transformations with SQL; on top of that, you can also run them incrementally with time range decorators. Note that it takes a lot of time and resources to move data back and forth out of BigQuery. When running BigQuery SQL, the data never leaves BigQuery in the first place, and you already have all the raw logs there. So as long as your problem can be solved by a series of SQL statements, I believe this is the best way to go.
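As a minimal sketch of the incremental pattern (using a date filter on the raw table rather than legacy table decorators; all dataset, table, and column names are illustrative):

```sql
-- Hypothetical example: append one day's worth of transformed rows from a raw
-- table into a star-schema fact table. Standard SQL; names are illustrative.
INSERT INTO analytics.fact_orders (order_date, customer_id, order_count, revenue)
SELECT
    DATE(event_timestamp) AS order_date,
    customer_id,
    COUNT(*)              AS order_count,
    SUM(amount)           AS revenue
FROM raw.events
WHERE DATE(event_timestamp) = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
GROUP BY order_date, customer_id;
```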
We moved off our Vertica reporting cluster last year, successfully rewriting the ETL with option 1.
Around a year ago, I wrote a POC comparing Dataflow against a series of BigQuery SQL jobs orchestrated by a potens.io workflow that allows SQL parallelization at scale.
It took me a good month to write the Dataflow pipeline in Java, with 200+ data points and complex transformations, and with terrible debugging capability at the time.
It took about a week to do the same using a series of SQL statements with potens.io, utilizing Cloud Functions for windowed tables and parallelization with clustered transient tables.
I know there have been a bunch of improvements in Cloud Dataflow since then, but at the time Dataflow did fine only at the million-record scale and never completed with billions of records as input (the main reason being a shuffle cardinality of just under a billion records, with each record having 200+ columns). The SQL approach produced all the required aggregations in under 2 hours for a dozen billion rows. The ease of debugging and troubleshooting with potens.io helped a lot too.
Both BigQuery and Dataproc can handle huge amounts of complex data.
I think that you should consider two points:
Which transformations would you like to perform on your data?
Both tools can handle complex transformations, but you have to consider that PySpark gives you the processing capability of a full programming language, while BigQuery gives you SQL transformations and some scripting structures. If SQL and simple scripting structures can handle your problem, BigQuery is an option. If you need complex scripts to transform your data, or if you think you'll need to build extra features involving transformations in the future, PySpark may be a better option. You can find the BigQuery scripting reference here.
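For a sense of what that scripting looks like, here is a small illustrative sketch (dataset, table, and column names are made up), declaring a variable and branching before running a transformation:

```sql
-- Hypothetical BigQuery scripting example: run the daily aggregation only when
-- there is new data. All names are illustrative.
DECLARE rows_to_process INT64;

SET rows_to_process = (
  SELECT COUNT(*)
  FROM raw.events
  WHERE DATE(event_timestamp) = CURRENT_DATE()
);

IF rows_to_process > 0 THEN
  INSERT INTO analytics.daily_summary (summary_date, customer_id, revenue)
  SELECT CURRENT_DATE(), customer_id, SUM(amount)
  FROM raw.events
  WHERE DATE(event_timestamp) = CURRENT_DATE()
  GROUP BY customer_id;
END IF;
```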
Pricing
BigQuery and Dataproc have different pricing models. While in BigQuery you need to be concerned about how much data your queries process, in Dataproc you have to think about your cluster's size and VM configuration, how long your cluster will be running, and some other settings. You can find the pricing reference for BigQuery here and for Dataproc here. You can also simulate the pricing in the Google Cloud Platform Pricing Calculator.
I suggest that you create a simple POC for your project in both tools to see which one has the best cost benefit for you.
I hope this information helps you.

Best approach for BigQuery data transformations

I already have terabytes of data stored in BigQuery and I'm planning to perform heavy data transformations on it.
Considering COSTS and PERFORMANCE, what is the best approach you would suggest to perform these transformations for future usage of this data in BigQuery?
I'm considering a few options:
1. Read the raw data with Dataflow and then load the transformed data back into BigQuery?
2. Do it directly from BigQuery?
Any ideas about how to proceed with this?
I wrote down some of the most important things about performance; you will also find considerations there regarding your question about using Dataflow.
Best practices considering performance:
Choosing file format:
BigQuery supports a wide variety of file formats for data ingestion. Some are naturally faster than others. When optimizing for load speed, prefer the Avro file format, which is a binary, row-based format that can be split and then read in parallel by multiple workers.
Loading data from compressed files, specifically CSV and JSON, is going to be slower than loading data in other formats. The reason is that Gzip compression is not splittable: the file has to be loaded onto a single slot within BQ and decompressed before the load can finally be parallelized.
**FASTER**
Avro (compressed)
Avro (uncompressed)
Parquet/ORC
CSV
JSON
CSV (compressed)
JSON (compressed)
**SLOWER**
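For example, loading Avro files (the fastest option above) from Cloud Storage with BigQuery's LOAD DATA statement might look like this; the bucket and table names are illustrative:

```sql
-- Hypothetical example: load Avro files from GCS into a BigQuery table.
-- The URI and table name are illustrative.
LOAD DATA INTO staging.events_raw
FROM FILES (
  format = 'AVRO',
  uris   = ['gs://my-bucket/exports/events-*.avro']
);
```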
ELT / ETL:
After loading data into BQ, you can think about transformations (ELT or ETL). In general, you want to prefer ELT over ETL where possible. BQ is very scalable and can handle large transformations on a ton of data. ELT is also quite a bit simpler, because you can just write some SQL queries, transform the data, move it around between tables, and not have to worry about managing a separate ETL application.
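As a minimal ELT sketch under made-up names, the transformation step is just SQL that writes a derived table inside BQ:

```sql
-- Hypothetical ELT step: transform already-loaded raw data entirely inside
-- BigQuery by writing a cleaned, derived table. Names are illustrative.
CREATE OR REPLACE TABLE analytics.orders_clean AS
SELECT
    order_id,
    LOWER(TRIM(customer_email))  AS customer_email,
    SAFE_CAST(amount AS NUMERIC) AS amount,
    DATE(created_at)             AS order_date
FROM staging.orders_raw
WHERE order_id IS NOT NULL;
```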
Raw and staging tables:
Once you have started loading data into BQ, within your warehouse you will generally want to use raw and staging tables before publishing to reporting tables. The raw table essentially contains the full daily extract, or a full load of the data you're loading. The staging table is then basically your change-data-capture table, so you can use queries or DML to merge that data into your staging table and keep a full history of all the data that has been inserted. Finally, your reporting tables are the ones you publish out to your users.
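The merge from the raw extract into the staging table mentioned above could look roughly like this with BigQuery DML (all names are illustrative):

```sql
-- Hypothetical change-data-capture merge: upsert the latest raw extract into
-- the staging table. Names are illustrative.
MERGE staging.customers AS s
USING raw.customers_daily_extract AS r
ON s.customer_id = r.customer_id
WHEN MATCHED THEN
  UPDATE SET email = r.email, updated_at = r.updated_at
WHEN NOT MATCHED THEN
  INSERT (customer_id, email, updated_at)
  VALUES (r.customer_id, r.email, r.updated_at);
```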
Speeding up pipelines using Dataflow:
When you're getting into streaming loads or really complex batch loads (that don't fit cleanly into SQL), you can leverage Dataflow or Data Fusion to speed up those pipelines and do more complex activities on that data. If you're starting with streaming, I recommend using the Dataflow templates that Google provides for loading data from many different places and moving data around. You can find those templates in the Dataflow UI, under the Create Job from Template button.
And if you find that a template mostly fits your use case but you want to make one slight modification, all those templates are also open source (so you can go to the repo and modify the code to fit your needs).
Partitioning:
Partitioning in BQ physically splits your data on disk, based on ingestion time or on a column within your data, so you can efficiently query only the parts of the table you need. This provides huge cost and performance benefits, especially on large fact tables. Whenever you have a fact table or a temporal table, use a partition column on your date dimension.
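A hypothetical partitioned fact table, and a query that only scans the requested partitions (names are illustrative):

```sql
-- Hypothetical example: a fact table partitioned by its date column, plus a
-- query that benefits from partition pruning. Names are illustrative.
CREATE TABLE analytics.fact_sales (
  sale_date DATE,
  store_id  INT64,
  revenue   NUMERIC
)
PARTITION BY sale_date;

SELECT store_id, SUM(revenue) AS revenue
FROM analytics.fact_sales
WHERE sale_date BETWEEN DATE '2023-01-01' AND DATE '2023-01-31'  -- prunes to 31 partitions
GROUP BY store_id;
```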
Cluster Frequently Accessed Fields:
Clustering allows you to physically order data within a partition. You can cluster by one or multiple keys. This provides massive performance benefits when used properly.
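Clustering is declared alongside partitioning in the table DDL; an illustrative variant of the table above (names are made up):

```sql
-- Hypothetical example: the same kind of partitioned table, additionally
-- clustered by frequently filtered fields so matching blocks can be skipped.
CREATE TABLE analytics.fact_sales_clustered (
  sale_date  DATE,
  store_id   INT64,
  product_id INT64,
  revenue    NUMERIC
)
PARTITION BY sale_date
CLUSTER BY store_id, product_id;
```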
BQ reservations:
It allows you to create reservations of slots and assign projects to those reservations, so you can allocate more or fewer resources to certain types of queries.
Best practices for saving costs can be found in the official documentation.
I hope it helps you.
According to this Google Cloud documentation, the following questions should be considered when choosing between Dataflow and BigQuery for ETL.
Although the data is small and can quickly be uploaded by using the BigQuery UI, for the purpose of this tutorial you can also use Dataflow for ETL. Use Dataflow for ETL into BigQuery instead of the BigQuery UI when you are performing massive joins, that is, from around 500-5000 columns of more than 10 TB of data, with the following goals:
You want to clean or transform your data as it's loaded into BigQuery, instead of storing it and joining afterwards. As a result, this approach also has lower storage requirements because data is only stored in BigQuery in its joined and transformed state.
You plan to do custom data cleansing (which cannot be simply achieved with SQL).
You plan to combine the data with data outside of the OLTP, such as logs or remotely accessed data, during the loading process.
You plan to automate testing and deployment of data-loading logic using continuous integration or continuous deployment (CI/CD).
You anticipate gradual iteration, enhancement, and improvement of the ETL process over time.
You plan to add data incrementally, as opposed to performing a one-time ETL.

Is it Power BI best practice to group data in the SQL query before importing, even though this may affect how you write some DAX expressions later?

I'm importing data via SQL import into a Power BI data model. If I group the data I can save the import 100,000 rows, but it affects the way I write my DAX queries to get the correct answer (and makes them slightly more complex). I'm after the general best practice for where to group data: pre-import, or post-import, letting the DAX aggregation functions work on the whole table.
I've tried both options and can save about 6 seconds on the load if I group in SQL, but I needed to rewrite some DAX.
You're sailing into 'too broad' territory, as the correct answer will differ with different data sets, etc.
You should always try to feed your dashboards with as little data as possible to answer the question being asked. This will save processing time in the dashboard itself. If you can aggregate in SQL (SQL is good at aggregating) and save yourself some load time, then great. However, if it makes your DAX unmaintainable (and maintainability is important to you), then it might not be the best choice.
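A hypothetical example of the "aggregate in SQL" option (table and column names are made up), collapsing transaction rows to one row per day and product before Power BI imports them:

```sql
-- Hypothetical pre-import aggregation: group in the source database so far
-- fewer rows are loaded into the Power BI model. Names are illustrative.
SELECT
    CAST(OrderDate AS date) AS OrderDate,
    ProductId,
    SUM(Quantity)           AS TotalQuantity,
    SUM(LineAmount)         AS TotalAmount
FROM dbo.SalesOrderLines
GROUP BY CAST(OrderDate AS date), ProductId;
```

Your DAX measures then work over pre-aggregated totals rather than raw transactions, which is exactly why some of them need rewriting.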
Feeding your dashboards with as little data as possible and making your datasets as simple as possible will ensure your dashboards remain snappy.
If you put a billion rows into a dashboard you might find the engine can handle it, but if you can reduce that data to 15 rows, you know which one is going to be more responsive.
You can find more information about best practices here.

Fast reporting with user parameters and temp result sets

I have come across a problem with reporting from SQL Server databases using SSRS that I hope you can help me with.
When you have a huge amount of data in a table, you want to select only the rows within certain criteria, you want to allow the users to specify that criteria (for example, a start date and end date), and you then want to take that filtered data and perform a ton of other transformations on it, including producing various temporary result sets along the way (using CTEs, table variables or temp tables) to finally produce the report, this basically takes ages in SQL. You can do it, but your users might have to wait an hour or two from the moment they hit View Report to their report being rendered.
I don't know much about MDX or DAX, cubes or tabular models, but I wonder if there is a quicker way to do what I want. Note the important aspect of the problem: the user specifies criteria that have to go all the way back to the original table, and then various transformations (including temp result sets) have to be applied to produce the final report.
What is the best way to do this? Am I doing it the only way possible? I know it's a broad question, but I'd like to know, theoretically, what the answer is. Where should I be looking? Should I be looking at Cubes? Tabular Models? Should I be using R in SQL Server?
There is always a balance when it comes to handling large datasets. Sometimes it makes sense to do some of the work ahead of time so that on-demand reports can run in a reasonable amount of time.
In order for a model to be a good option, here are some general guidelines:
Many reports would be able to use common attributes from the model
The data involves aggregates, not just lists of records
The data does not need to be live
You have plenty of development and testing time
Anyone who would be using it as a data source will have to be trained on the structure and be at least slightly familiar with MDX
Another option for you to consider is to have a stored procedure that "prepares" the data for you overnight in a separate table. This table can be well indexed because the write time is not as important. The report would then point to this table so it can quickly retrieve the data it needs to present. This shifts most of the preparation/aggregation work out of the report's runtime. You can still, of course, have parameters that limit how much of this data you pull back.
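A rough T-SQL sketch of that overnight preparation step (the procedure, table, and column names are all made up):

```sql
-- Hypothetical nightly job: pre-aggregate the huge source table into a small,
-- well-indexed reporting table that the SSRS report queries directly.
CREATE OR ALTER PROCEDURE dbo.PrepareSalesReportData
AS
BEGIN
    TRUNCATE TABLE dbo.SalesReportPrepared;

    INSERT INTO dbo.SalesReportPrepared (SaleDate, Region, OrderCount, Revenue)
    SELECT
        CAST(SaleDateTime AS date),
        Region,
        COUNT(*),
        SUM(Amount)
    FROM dbo.SalesTransactions
    GROUP BY CAST(SaleDateTime AS date), Region;
END;
```

The report's start and end date parameters would then simply filter this prepared table instead of re-running the heavy transformations on demand.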
Based on the little bit of information you've given us (300 million rows in a single non-normalized table), there is definitely a faster way. However, there will not be any quick solutions and you haven't provided enough information for me to give any recommendations.
I think you may need to seek some professional help to review your infrastructure and needs along with your usage and objectives so you can be pointed in the right direction.