How to visualize data from identical tables in a single chart in Looker Studio? - google-bigquery

We have multiple production environments and we want to see data from those environments in one chart. Each environment has identical tables with the same schema, each holding its own data. We need an intuitive way to merge the data from these environments so we can show it in a single chart.
Let's take an example here:
Table 1
| name  | age |
|-------|-----|
| Sam   | 42  |
| Mary  | 19  |
Table 2
| name  | age |
|-------|-----|
| Adam  | 22  |
| James | 45  |
The outcome should be:
| name  | age |
|-------|-----|
| Sam   | 42  |
| Mary  | 19  |
| Adam  | 22  |
| James | 45  |
We would like to use this outcome data for our chart visualization.
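One straightforward way to do this in BigQuery is to create a view that stacks the identical tables with UNION ALL, then connect Looker Studio to that view as a single data source. A minimal sketch, assuming hypothetical project/dataset/table names (my-project, env_a, env_b, people):

CREATE OR REPLACE VIEW `my-project.reporting.people_all_envs` AS
SELECT name, age, 'env_a' AS source_env FROM `my-project.env_a.people`
UNION ALL
SELECT name, age, 'env_b' AS source_env FROM `my-project.env_b.people`;

The extra source_env column is optional, but it lets the chart filter or break the merged data down by environment.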

Related

Designing a database for a workout tracker

I'm designing a database for a workout tracker app. Each user should be able to track multiple workouts (routines). A workout can have multiple exercises, and an exercise can be used in many workouts. Each exercise will have a specific track type (weight and reps, distance and time, reps only).
My tables so far:
| User | |
|------|-------|
| id | name |
| 1 | Ilka |
| 2 | James |
| Exercise | | |
|----------|---------------------|---------------|
| id | name | track_type_id |
| 1 | Barbell Bench Press | 1 |
| 2 | Squats | 1 |
| 3 | Deadlifts | 1 |
| 4 | Rowing Machine | 3 |
| Workout | | |
|---------|---------|-----------------|
| id | user_id | name |
| 1 | 1 | Chest & Triceps |
| 2 | 1 | Legs |
| Workout_Exercise (Junction table) | |
|-----------------|------------------|------------|
| id | exercise_id | workout_id |
| 1 | 1 | 1 |
| 2 | 2 | 1 |
| 3 | 4 | 1 |
| Workout_Sets | | | |
|--------------|---------------------|------|--------|
| id | workout_exercise_id | reps | weight |
| 1 | 1 | 12 | 120 |
| 2 | 1 | 10 | 120 |
| 3 | 1 | 8 | 120 |
| 4 | 2 | 10 | 220 |
| 5 | 3 | null | null |
| TrackType | |
|-----------|-----------------|
| id | name |
| 1 | Weight and Reps |
| 2 | Reps Only |
| 3 | Distance Time |
My issue is how to incorporate the TrackType table for each workout set. My first option was to create columns in the Workout_Sets table for each tracking type (weight and reps, distance and time, reps only), but that means many rows will have many nulls. Another option I considered was an EAV-style table, but I'm not sure. Also, do you think my design is efficient, or is it over-normalized?
I would say that the most efficient way is to have nulls in your table. The alternative would require you to split many of the categories into separate tables. Also, a recommendation: start factoring a User ID table into your database.
Your description states that “Each exercise will have a specific track type”, suggesting that each exercise has exactly one track type (a many-to-one relationship from Exercise to TrackType) and that the relationship is unchanging. As such, the Exercise table should have a TrackType column.
I suspect, however, that your problem description may be lacking specificity, making it difficult to give you sound advice. For instance, if the TrackType can vary for any given exercise, your TrackType column may belong on the Workout_Sets table. If the relationship between TrackType and Exercise/Workout_Sets is many-to-many, then you will need another junction table.
Your question regarding “over-normalization” depends upon many factors that are specific to your solution. In general, I would say no - the degree of normalization appears to be appropriate.
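To make that concrete, here is a minimal DDL sketch of the direction the answers above suggest, assuming the other tables (TrackType, Workout_Exercise) already exist; the types and extra metric columns are illustrative, not taken from the question:

CREATE TABLE Exercise (
    id            INT PRIMARY KEY,
    name          VARCHAR(100) NOT NULL,
    track_type_id INT NOT NULL REFERENCES TrackType(id)   -- track type is fixed per exercise
);

CREATE TABLE Workout_Sets (
    id                  INT PRIMARY KEY,
    workout_exercise_id INT NOT NULL REFERENCES Workout_Exercise(id),
    reps                INT NULL,           -- weight-and-reps / reps-only sets
    weight              DECIMAL(6,2) NULL,  -- weight-and-reps sets
    distance            DECIMAL(8,2) NULL,  -- distance-and-time sets
    duration_seconds    INT NULL            -- distance-and-time sets
);

Columns that do not apply to an exercise's track type simply stay NULL, which is the trade-off the first answer recommends accepting.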

Teradata SQL Assistant - How can I pivot or transpose large tables with many columns and many rows?

I am using Teradata SQL Assistant Version TD 16.10.06.01 ...
I have seen a lot of people transpose data for smallish tables, but I am working with thousands of clients and need to break the columns up into line item values to compare orders and highlight differences between them. The problem is that the data is all linked horizontally, and I need to transpose it into Id, Transaction id, Version and Line Item Value 1, Line Item Value 2..., then add another column comparing values to see if they changed.
example:
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
| Id | First Name | Last Name | DOB | transaction id | Make | Location | Postcode | Year | Price |
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
| 1 | John | Smith | 15/11/2001 | 1654654 | Audi | NSW | 2222 | 2019 | $ 10,000.00 |
| 2 | Mark | White | 11/02/2002 | 1661200 | BMW | WA | 8888 | 2016 | $ 8,999.00 |
| 3 | Bob | Grey | 10/05/2002 | 1667746 | Ford | QLD | 9999 | 2013 | $ 3,000.00 |
| 4 | Phil | Faux | 6/08/2002 | 1674292 | Holden | SA | 1111 | 2000 | $ 5,800.00 |
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
hoping to change the data to :
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
| id | trans_id | Vers_ord | Item Val | Ln_Itm_Dscrptn | Org_Val | Updt_Val | Amndd_Ord_chck | Lbl_Rnk | ... |
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
| 1 | 1654654 | 2 | 11169 | Make | Audi BLK | Audi WHT | Yes | 1 | |
| 1 | 1654654 | 2 | 11189 | Location | NSW | WA | Yes | 2 | |
| 1 | 1654654 | 2 | 23689 | Postcode | 2222 | 6000 | Yes | 3 | |
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
Recently, with smaller data, I created a table, added in the values, then used a CASE statement (when value 1 then xyz) with a product join, and the data warehouse admins didn't mention anything out of order. But I only had a 16-row by 200-column table to transpose (Sum, Avg, Count, Median functions x 4 subsets of clients), which was significantly smaller than the tables I now need to compare.
I am worried my prior method will slow the data warehouse down, and it would also take me a significant amount of time to type the SQL.
Is there a better way to transpose large tables?
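One option that avoids a giant CASE/product-join construct is a plain UNION ALL unpivot, where each branch turns one wide column into a description/value row. A minimal sketch, assuming a hypothetical source table orders with columns named as above (spaces replaced by underscores):

SELECT Id, transaction_id, 'Make' AS Ln_Itm_Dscrptn, Make AS Org_Val FROM orders
UNION ALL
SELECT Id, transaction_id, 'Location' AS Ln_Itm_Dscrptn, Location AS Org_Val FROM orders
UNION ALL
SELECT Id, transaction_id, 'Postcode' AS Ln_Itm_Dscrptn, CAST(Postcode AS VARCHAR(20)) AS Org_Val FROM orders;

Comparing two versions of an order then becomes a self-join of this unpivoted shape on Id, transaction id and Ln_Itm_Dscrptn. Teradata's TD_UNPIVOT table operator may also be worth a look, since it can produce the same shape in a single pass.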

Transform table from sequential identifier to real with attributes

I changed the context a bit, but it's basically the same issue.
Imagine we are in a never-ending tunnel, shaped like a circle. We split the circle into sections, numbered 1 to 10, and we'll call each section a slot (sl). There are 2 groups (gr) of living things walking in the tunnel. Each group has 2 bands, each with a name and global hitpoints (hp). Every group walks forward (although the bands might change order). If a group is at slot #10 and moves forward, it will be at slot #1. We snapshot their information every day. All the data gathered is stored in a table with this structure:
+--------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+
| day_id | gr_1_sl_1_id | gr_1_sl_1_name | gr_1_sl_1_hp | gr_1_sl_2_id | gr_1_sl_2_name | gr_1_sl_2_hp | gr_2_sl_1_id | gr_2_sl_1_name | gr_2_sl_1_hp | gr_2_sl_2_id | gr_2_sl_2_name | gr_2_sl_2_hp |
+--------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+
| 1      | 3            | orc            | 100          | 4            | goblin         | 10           | 10           | human          | 50           | 1            | dwarf          | 25           |
| 2      | 6            | goblin         | 7            | 7            | orc            | 76           | 2            | human          | 60           | 3            | dwarf          | 28           |
+--------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+--------------+----------------+--------------+
As you can see, the columns are structured by sequential position, while the data holds the actual values. What I want is to have the information shaped this way instead:
+---------+-------+-------+-----------+---------+
| id_game | gr_id | sl_id | band_name | band_hp |
+---------+-------+-------+-----------+---------+
| 1 | 1 | 3 | orc | 100 |
| 1 | 1 | 4 | goblin | 10 |
| 1 | 2 | 10 | human | 50 |
| 1 | 2 | 1 | dwarf | 25 |
| 2 | 1 | 6 | goblin | 7 |
| 2 | 1 | 7 | orc | 76 |
| 2 | 2 | 2 | human | 60 |
| 2 | 2 | 3 | dwarf | 28 |
+---------+-------+-------+-----------+---------+
I have this information in Power BI, although I can create views in SQL Server if need be. I have tried many things; the closest I got was unpivoting and parsing the original columns to get day_id, gr_id, sl_id, attributes and values. In attributes and values, it's basically name and hp with their corresponding value (I changed hp into a string), but then I'm stuck and not sure what to do next.
Does anyone have any ideas? Keep in mind that I oversimplified the problem; there are more groups, more slots, more bands and more statistics (e.g. attack and defense rating).
You seem to want to unpivot the table. In SQL Server, I recommend using apply:
select t.day_id as id_game, v.*
from t cross apply
     (values (1, gr_1_sl_1_id, gr_1_sl_1_name, gr_1_sl_1_hp),
             (1, gr_1_sl_2_id, gr_1_sl_2_name, gr_1_sl_2_hp),
             (2, gr_2_sl_1_id, gr_2_sl_1_name, gr_2_sl_1_hp),
             (2, gr_2_sl_2_id, gr_2_sl_2_name, gr_2_sl_2_hp)
     ) v(gr_id, sl_id, band_name, band_hp);
In other databases, you can do something similar with union all.
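For reference, the same unpivot written with UNION ALL (using the same table t and the column names from the question) would look roughly like this:

select day_id as id_game, 1 as gr_id, gr_1_sl_1_id as sl_id, gr_1_sl_1_name as band_name, gr_1_sl_1_hp as band_hp from t
union all
select day_id, 1, gr_1_sl_2_id, gr_1_sl_2_name, gr_1_sl_2_hp from t
union all
select day_id, 2, gr_2_sl_1_id, gr_2_sl_1_name, gr_2_sl_1_hp from t
union all
select day_id, 2, gr_2_sl_2_id, gr_2_sl_2_name, gr_2_sl_2_hp from t;

With more groups, slots and statistics you just add more branches and columns (or generate the SQL from metadata).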

SQL join or concatenate multiple record text fields with same foreign key

This is probably a simple question that for some reason I just cannot see the answer. Here is sample data:
+----+----------+---------------------------+
| ID | F_Key_ID | Desc_Text |
+----+----------+---------------------------+
| 1 | 15 | This is an example |
| 2 | 15 | that I wished worked |
| 3 | 15 | correctly |
| 4 | 21 | Unique entry |
| 5 | 18 | The Eagles are |
| 6 | 18 | the best football team. |
+----+----------+---------------------------+
Please excuse the noob table. How awful is that?!
What I'd like is some SQL that takes the text common to each F_Key_ID and concatenates it together like this:
+----------+---------------------------------------------------+
| F_Key_ID | Concat_Text |
+----------+---------------------------------------------------+
| 15 | This is an example that I wished worked correctly |
| 21 | Unique entry |
| 18 | The Eagles are the best football team. |
+----------+---------------------------------------------------+
Thanks in advance!
So after taking Gordon's advice and visiting another page, I found this answer for Oracle:
SELECT F_Key_ID,
replace(wm_concat(Desc_Text), ',' , ' ') AS Concat_Text
FROM databaseName
GROUP BY F_Key_ID;
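One caveat: wm_concat is an undocumented function and was removed in Oracle 12c. On Oracle 11gR2 and later, the supported way to do this is LISTAGG; a sketch using the same names as above:

SELECT F_Key_ID,
       LISTAGG(Desc_Text, ' ') WITHIN GROUP (ORDER BY ID) AS Concat_Text
FROM databaseName
GROUP BY F_Key_ID;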

SSRS Hierarchy Recursive total by Row/Column

I've created a very simple employee hierarchy in a matrix; it looks something like:
Buildings
-------------------------------------------------
Employee | One | Two | R2D2 | Three |
----------------------------------------------------------------------
Miss Piggy | 15 | | | |
Walter Muppet | | 50 | | |
80's robot | | | 2 | |
Andy Pig | | | | 2 |
Randy Pig | | | | 1 |
Animal Muppet | 15 | 50 | 3 | |
...
...
The SQL is a recursive CTE that returns: EmployeeName, ReportsToID, GroupLevel, Building, EmployeeID. In the matrix I have the employee row group configured to create the recursive hierarchy grouping. To get the count of employees by building in the matrix I have this expression:
=IIF(Fields!GroupLevel.Value="0",
IIF(InScope("Building"),
IIF(InScope("EmployeeName"),sum(Fields!Count.Value, "Building", Recursive),""),""),
IIF(Fields!GroupLevel.Value,Sum(Fields!Count.Value, "EmployeeName", recursive),""))
You can keep expanding the hierarchy until there are no more managers. The count is the sum of all the direct and indirect reports that sit in the respective building.
What I want is to have the total of all employees in all buildings for each manager.
I can add row/column totals to get the sum for Miss Piggy; however, I also want totals across all buildings for the other managers. Miss Piggy has people that sit in buildings Two, R2D2 and Three, and Animal Muppet has people that sit in buildings One, Two and Three.
Ideally I would like the report to look like this.
Buildings
-------------------------------------------------
Employee | One | Two | R2D2 | Three |
----------------------------------------------------------------------
Miss Piggy | 15 | 100 | 5 | 6 |
Walter Muppet | | 50 | | |
80's robot | | | 2 | |
Andy Pig | | | | 2 |
Randy Pig | | | | 1 |
Animal Muppet | 15 | 50 | | 3 |
... | | | | 2 |
... | | | 1 | |
Any ideas how to accomplish this?