I've created a very simple employee hierarchy in a matrix; it looks something like this:
Buildings
-----------------------------------------------
Employee       | One | Two | R2D2 | Three |
-----------------------------------------------
Miss Piggy     | 15  |     |      |       |
Walter Muppet  |     | 50  |      |       |
80's robot     |     |     | 2    |       |
Andy Pig       |     |     |      | 2     |
Randy Pig      |     |     |      | 1     |
Animal Muppet  | 15  | 50  | 3    |       |
...
...
The SQL is a recursive CTE that returns: EmployeeName, ReportsToID, GroupLevel, Building, EmployeeID. In the matrix, the employee column uses these settings to create the hierarchy grouping. To get the count of employees by building in the matrix I have this expression:
=IIF(Fields!GroupLevel.Value = "0",
     IIF(InScope("Building"),
         IIF(InScope("EmployeeName"), Sum(Fields!Count.Value, "Building", Recursive), ""),
         ""),
     IIF(Fields!GroupLevel.Value, Sum(Fields!Count.Value, "EmployeeName", Recursive), ""))
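For context, a minimal sketch of the kind of recursive CTE that could produce that dataset (the Employees table and its columns are assumed names, not my actual query):

-- Sketch only: "Employees" and its column names are assumptions.
WITH EmployeeHierarchy AS (
    SELECT EmployeeID, EmployeeName, ReportsToID, Building, 0 AS GroupLevel
    FROM Employees
    WHERE ReportsToID IS NULL            -- top-level managers
    UNION ALL
    SELECT e.EmployeeID, e.EmployeeName, e.ReportsToID, e.Building, h.GroupLevel + 1
    FROM Employees e
    INNER JOIN EmployeeHierarchy h ON e.ReportsToID = h.EmployeeID
)
SELECT EmployeeName, ReportsToID, GroupLevel, Building, 1 AS [Count]
FROM EmployeeHierarchy;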
And so on; you can expand the hierarchy until there are no more managers. The count is the sum of all the direct and indirect reports that sit in the respective building.
What I want is to have the total of all employees in all buildings for each manager.
I can add row/column totals to get the sum for Miss Piggy; however, I also want totals for all buildings for the other managers. That is, Miss Piggy has people that sit in Buildings Two, R2D2 and Three, and Animal Muppet has people that sit in Buildings One, Two and Three.
Ideally I would like the report to look like this.
Buildings
-----------------------------------------------
Employee       | One | Two | R2D2 | Three |
-----------------------------------------------
Miss Piggy     | 15  | 100 | 5    | 6     |
Walter Muppet  |     | 50  |      |       |
80's robot     |     |     | 2    |       |
Andy Pig       |     |     |      | 2     |
Randy Pig      |     |     |      | 1     |
Animal Muppet  | 15  | 50  |      | 3     |
...            |     |     |      | 2     |
...            |     |     | 1    |       |
Any ideas how to accomplish this?
I'm designing a database for a workout tracker app. Each user should be able to track multiple workouts (routines). A workout can have multiple exercises, and an exercise can be used in many workouts. Each exercise will have a specific track type (weight and reps, distance and time, only reps).
My tables so far:
User

| id | name  |
|----|-------|
| 1  | Ilka  |
| 2  | James |

Exercise

| id | name                | track_type_id |
|----|---------------------|---------------|
| 1  | Barbell Bench Press | 1             |
| 2  | Squats              | 1             |
| 3  | Deadlifts           | 1             |
| 4  | Rowing Machine      | 3             |

Workout

| id | user_id | name            |
|----|---------|-----------------|
| 1  | 1       | Chest & Triceps |
| 2  | 1       | Legs            |

Workout_Exercise (junction table)

| id | exercise_id | workout_id |
|----|-------------|------------|
| 1  | 1           | 1          |
| 2  | 2           | 1          |
| 3  | 4           | 1          |

Workout_Sets

| id | workout_exercise_id | reps | weight |
|----|---------------------|------|--------|
| 1  | 1                   | 12   | 120    |
| 2  | 1                   | 10   | 120    |
| 3  | 1                   | 8    | 120    |
| 4  | 2                   | 10   | 220    |
| 5  | 3                   | null | null   |

TrackType

| id | name            |
|----|-----------------|
| 1  | Weight and Reps |
| 2  | Reps Only       |
| 3  | Distance Time   |
My issue is how to incorporate the TrackType table for each workout set. My first option was to create columns in the Workout_Sets table for each tracking type (weight and reps, distance and time, only reps), but that means many rows will have many nulls. Another option I thought of was to use an EAV-type table, but I'm not sure. Also, do you think my design is efficient (over-normalized)?
I would say that the most efficient way is to have nulls in your table. The alternative would require you to split many of the categories into separate tables. Also, a recommendation is that you start factoring a User ID table into your database.
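As a rough sketch of that nullable-columns approach (the distance and duration columns are assumptions, since the question doesn't name them):

-- Sketch only: distance/duration column names are assumptions.
CREATE TABLE Workout_Sets (
    id                  INT PRIMARY KEY,
    workout_exercise_id INT NOT NULL REFERENCES Workout_Exercise(id),
    reps                INT NULL,      -- used for "Weight and Reps" / "Reps Only"
    weight              DECIMAL NULL,  -- used for "Weight and Reps"
    distance            DECIMAL NULL,  -- used for "Distance Time"
    duration            INT NULL       -- used for "Distance Time"
);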
Your description states that “Each exercise will have a specific track type”, suggesting that each Exercise relates to exactly one TrackType and that the relationship is unchanging. As such, the Exercise table should have a TrackType column (which your track_type_id already provides).
I suspect, however, that your problem description may be lacking specificity, making it difficult to give you sound advice. For instance, if the TrackType can vary for any given exercise, your TrackType column may belong on the Workout_Sets table. If the relationship between TrackType and Exercise/Workout_Sets is many-to-many, then you will need another junction table.
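For illustration only, sketches of those two alternatives (the DDL below is an assumption, not a recommendation to change your schema):

-- Sketch: if the track type can vary per set, the reference moves down a level.
ALTER TABLE Workout_Sets ADD track_type_id INT REFERENCES TrackType(id);

-- Sketch: if Exercise <-> TrackType were many-to-many, a junction table is needed.
CREATE TABLE Exercise_TrackType (
    exercise_id   INT NOT NULL REFERENCES Exercise(id),
    track_type_id INT NOT NULL REFERENCES TrackType(id),
    PRIMARY KEY (exercise_id, track_type_id)
);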
Your question regarding “over-normalization” depends upon many factors that are specific to your solution. In general, I would say no - the degree of normalization appears to be appropriate.
I am using Teradata SQL Assistant Version TD 16.10.06.01 ...
I have seen a lot of people transpose data for smallish tables, but I am working with thousands of clients and need to break the columns up into line item values to compare orders and highlight differences between orders. The problem is that it is all horizontally linked, and I need to transpose it to Id, Transaction id, Version, Line Item Value 1, Line Item Value 2... and then another column comparing values to see if they changed.
example:
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
| Id | First Name | Last Name | DOB | transaction id | Make | Location | Postcode | Year | Price |
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
| 1 | John | Smith | 15/11/2001 | 1654654 | Audi | NSW | 2222 | 2019 | $ 10,000.00 |
| 2 | Mark | White | 11/02/2002 | 1661200 | BMW | WA | 8888 | 2016 | $ 8,999.00 |
| 3 | Bob | Grey | 10/05/2002 | 1667746 | Ford | QLD | 9999 | 2013 | $ 3,000.00 |
| 4 | Phil | Faux | 6/08/2002 | 1674292 | Holden | SA | 1111 | 2000 | $ 5,800.00 |
+----+------------+-----------+------------+----------------+--------+----------+----------+------+-------------+
I am hoping to change the data to:
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
| id | trans_id | Vers_ord | Item Val | Ln_Itm_Dscrptn | Org_Val | Updt_Val | Amndd_Ord_chck | Lbl_Rnk | ... |
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
| 1 | 1654654 | 2 | 11169 | Make | Audi BLK | Audi WHT | Yes | 1 | |
| 1 | 1654654 | 2 | 11189 | Location | NSW | WA | Yes | 2 | |
| 1 | 1654654 | 2 | 23689 | Postcode | 2222 | 6000 | Yes | 3 | |
+----+----------+----------+----------+----------------+----------+----------+----------------+---------+-----+
Recently, with smaller data, I created a table, added in the values, and then used a CASE statement (WHEN value 1 THEN xyz) with a product join ... and the data warehouse admins didn't mention anything out of order. But I only had a 16 row by 200 column table to transpose (Sum, Avg, Count, Median functions x 4 subsets of clients), which was significantly smaller than my current tables to make comparisons with.
I am worried my prior method will slow the data warehouse down, plus take me a significant amount of time to type the SQL.
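For reference, the kind of CASE-plus-product-join unpivot I mean looks roughly like this (table names, column names and code values are made up for illustration):

-- Sketch of a manual unpivot: cross join to a small ranking set, then pick one column per rank.
SELECT o.Id,
       o.transaction_id,
       n.Lbl_Rnk,
       CASE n.Lbl_Rnk WHEN 1 THEN 'Make' WHEN 2 THEN 'Location' WHEN 3 THEN 'Postcode' END AS Ln_Itm_Dscrptn,
       CASE n.Lbl_Rnk WHEN 1 THEN o.Make
                      WHEN 2 THEN o.Location
                      WHEN 3 THEN CAST(o.Postcode AS VARCHAR(20)) END AS Org_Val
FROM orders o
CROSS JOIN (SELECT 1 AS Lbl_Rnk UNION ALL SELECT 2 UNION ALL SELECT 3) n;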
Is there a better way to transpose large tables?
Let's say we have a client table for sports brands like Nike and Adidas.
+--------------+------------+
| Client Table | |
+--------------+------------+
| Id | ClientName |
| 1 | Nike |
| 2 | Adidas |
+--------------+------------+
We also record customer information and their preferred sport and fitness level. Sports and fitness level are used in dropdown lists.
+--------------+------------+
| Sports Table | |
+--------------+------------+
| Id | Name |
| 1 | Basketball |
| 2 | Volleyball |
+--------------+------------+
+------------------+---------------+
| Fitnesslvl Table | |
+------------------+---------------+
| Id | Fitness Level |
| 1 | Beginner |
| 2 | Intermediate |
| 3 | Advance |
+------------------+---------------+
+----------------+--------------+----------+----------------+
| Customer Table | | | |
+----------------+--------------+----------+----------------+
| Id | CustomerName | SportsId | FitnessLevelId |
| 1 | John | 1 | 1 |
| 2 | Doe | 2 | 3 |
+----------------+--------------+----------+----------------+
The sports brands then want to filter our customers by sport and fitness level. In this example, Nike wants all sports, while Adidas only wants customers interested in basketball. Likewise, Nike wants customers at all fitness levels, while Adidas only wants the advanced fitness level.
+---------------+----------+----------+
| Sports Filter | | |
+---------------+----------+----------+
| Id | ClientId | SportsId |
| 1 | 1 | 1 |
| 2 | 1 | 2 |
| 3 | 2 | 1 |
+---------------+----------+----------+
+-------------------+----------+--------------+
| Fitnesslvl Filter | | |
+-------------------+----------+--------------+
| Id | ClientId | FitnessLvlId |
| 1 | 1 | 1 |
| 2 | 1 | 2 |
| 3 | 1 | 3 |
| 4 | 2 | 3 |
+-------------------+----------+--------------+
How can we handle logging in this case when we want to record failed filters for the sports and fitness level? I'm thinking of two options.
Option 1: Create a different table for each failed filter:
- Sports Failed Filter
- FitnessLevel Failed Filter
+----------------------+-------------+----------------+
| Sports Failed Filter | | |
+----------------------+-------------+----------------+
| Id | CustomerId | SportsFilterId |
| 1 | 1 | 2 |
| 2 | 1 | 3 |
+----------------------+-------------+----------------+
However, if we have 10 filters, this means we will also have 10 failed-filter tables. I think this is very difficult to maintain.
Option 2: Instead of a different table for each dropdown value (like sports and fitness level), we can create a single lookup table and a single FailedFilter table.
I think the tradeoff is that it's not as simple and there is no strict referential integrity.
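A minimal sketch of what that second option might look like (all names here are assumptions):

-- Sketch only: one generic lookup of filterable values, one generic failed-filter log.
CREATE TABLE FilterValue (
    Id         INT PRIMARY KEY,
    FilterType VARCHAR(50) NOT NULL,    -- e.g. 'Sport', 'FitnessLevel'
    Value      VARCHAR(100) NOT NULL    -- e.g. 'Basketball', 'Advanced'
);

CREATE TABLE FailedFilter (
    Id            INT PRIMARY KEY,
    ClientId      INT NOT NULL REFERENCES Client(Id),
    CustomerId    INT NOT NULL REFERENCES Customer(Id),
    FilterValueId INT NOT NULL REFERENCES FilterValue(Id)
);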
Please let me know if you have a different solution for this.
EDIT:
These filters are used in a backend application and the filtering logic lives there. I don't plan to include this logic in the database, as the query would be very complex and hard to maintain.
I'm quite new to SQL - hope you can help:
I have several tables that all have 3 columns in common: ObjNo, Date(year-month), Product.
Each table has 1 other column, that represents an economic value (sales, count, netsales, plan ..)
I need to join all the tables on the 3 common columns. The outcome must have one row for each existing combination of the 3 common columns; not every combination exists in every table.
If I do full outer joins, I get ObjNo, Date, etc. for each table, but I only need them once.
How can I achieve this?
tblCount
+-------+--------+---------+-------+
| ObjNo | Date   | Product | count |
+-------+--------+---------+-------+
| 1     | 201601 | Snacks  | 22    |
| 2     | 201602 | Coffee  | 23    |
| 4     | 201605 | Tea     | 30    |
+-------+--------+---------+-------+

tblSalesPlan
+-------+--------+---------+-----------+
| ObjNo | Date   | Product | salesplan |
+-------+--------+---------+-----------+
| 1     | 201601 | Beer    | 2000      |
| 2     | 201602 | Sancks  | 2000      |
| 5     | 201605 | Tea     | 2000      |
+-------+--------+---------+-----------+

tblSales
+-------+--------+---------+-------+
| ObjNo | Date   | Product | Sales |
+-------+--------+---------+-------+
| 1     | 201601 | Beer    | 1000  |
| 2     | 201602 | Coffee  | 2000  |
| 3     | 201603 | Tea     | 3000  |
+-------+--------+---------+-------+
It sounds like you're using SELECT * FROM... which is giving you every field from every table. You probably only want to get the values from one table, so you should be explicit about which fields you want to include in the results.
If you're not sure which table is going to have a record for each case (i.e. there is not guaranteed to be a record in any particular table) you can use the COALESCE function to get the first non-null value in each case.
SELECT COALESCE(tbl1.ObjNo, tbl2.ObjNo, tbl3.ObjNo) AS ObjNo,
       COALESCE(tbl1.Date, tbl2.Date, tbl3.Date) AS Date,
       COALESCE(tbl1.Product, tbl2.Product, tbl3.Product) AS Product,
       tbl1.Sales, tbl2.Count, tbl3.Netsales
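Putting that together with the sample tables from the question, a sketch of the full query might look like this (the aliases are assumptions):

-- Sketch: full outer join the three tables on the common key, keeping each key column once.
SELECT COALESCE(c.ObjNo, p.ObjNo, s.ObjNo)       AS ObjNo,
       COALESCE(c.Date, p.Date, s.Date)          AS Date,
       COALESCE(c.Product, p.Product, s.Product) AS Product,
       c.count,
       p.salesplan,
       s.Sales
FROM tblCount c
FULL OUTER JOIN tblSalesPlan p
       ON  p.ObjNo = c.ObjNo AND p.Date = c.Date AND p.Product = c.Product
FULL OUTER JOIN tblSales s
       ON  s.ObjNo   = COALESCE(c.ObjNo, p.ObjNo)
       AND s.Date    = COALESCE(c.Date, p.Date)
       AND s.Product = COALESCE(c.Product, p.Product);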
I'm not sure if Pivot is the way to go with this, but I am looking to take part of a row and create a new column with it.
This is my example:
+--------+------------+--------+
| Person | PetName | PetAge |
+--------+------------+--------+
| 1 | Apple | 2 |
| 1 | Banana | 6 |
| 1 | Grapefruit | 3 |
| 2 | Red | 53 |
| 2 | Blue | 8 |
+--------+------------+--------+
This is my result/goal:
+--------+---------+--------+---------+--------+------------+--------+
| Person | PetName | PetAge | PetName | PetAge | PetName | PetAge |
+--------+---------+--------+---------+--------+------------+--------+
| 1 | Apple | 2 | Banana | 6 | Grapefruit | 3 |
| 2 | Red | 53 | Blue | 8 | | |
+--------+---------+--------+---------+--------+------------+--------+
How can I get the result from my example?
UPDATE: I just noticed that your table just had the Person in the first row.
I've done something similar. What I did was add a RowNumber per pet, partitioned by person (ROW_NUMBER() OVER (PARTITION BY Person)), to the data. This breaks the data up and gives an ordinal number to each pet within its person.
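A minimal sketch of that dataset query (the Pets table name and the ordering column are assumptions):

-- Sketch: number each person's pets so the matrix can group its columns on ROW_NUM.
SELECT Person,
       PetName,
       PetAge,
       ROW_NUMBER() OVER (PARTITION BY Person ORDER BY PetName) AS ROW_NUM
FROM Pets;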
Make your normal table with just the PetName and PetAge.
Add a Tablix with just one column and row and put the previous table in it.
For the column grouping, use ROW_NUM; for the row grouping, use Person.