DAX CALCULATE function to SUMIF many values across data sources - PowerPivot

I've been able to get the CALCULATE(SUM(...), FILTER(..., EARLIER(...))) pattern to work on items in a single data table, but I need to extend these values into a separate table. Here is the situation:
Two data sources, PoList and MinList. PoList is a list of purchase orders and open units by item barcode. MinList is a list of every store/barcode combination. I need to see total open units per barcode in MinList. The problem is that there are many instances of each barcode on PoList, as each barcode is on many orders. Here was my approach:
1) Create a new column on PoList to calculate the total units per barcode, which I did successfully using
= CALCULATE(SUM('PoList'[Open Units]), FILTER('PoList', 'PoList'[Barcode] = EARLIER ('PoList'[Barcode])))
2) Then, pull the first instance of that [Open Units] total into MinList for each Barcode value (like a VLOOKUP, but I can't create a relationship since there are multiple barcode instances on PoList).
Notes:
- I'm trying to avoid creating a PivotTable of PoList to eliminate duplicates. Since I can't subsequently make that PivotTable a data source, I would be creating something that would need to be updated manually.
Any thoughts? Again, I can get it to work in a single data source; I just need that aggregate value to be calculated in a separate data source from the one being analyzed to calculate the sum.
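To illustrate what I'm after: a calculated column on MinList that aggregates across tables, along these lines (just a sketch using the table and column names above, which I haven't verified):
Total Open Units =
SUMX (
    FILTER ( 'PoList', 'PoList'[Barcode] = 'MinList'[Barcode] ),
    'PoList'[Open Units]
)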
Thanks in advance!

Related

How do I create a SQL script to create a new column in the dataset, generating a subcategory on each item if it matches another table?

My business task is to match our sales data with subcategories based on UPCs in our master spreadsheet. That way, we can make better data viz comparing product sales within each subcategory. What is the best way to approach this problem?
I have four years' worth of sales data that carries product name, quantity sold, and main categories per line. It does not have subcategories. I oversee the data through Connected Sheets, generate a pivot table with desired outputs in the rows and columns, and then I built a FILTER/VLOOKUP formula in Google Sheets to search for a product's subcategory based on its UPC. Since the data set is huge, I am running into problems where it won't produce subcategory results if it's a discontinued item or if there are duplicates in my master list of subcategories per product.
I think it would be more efficient if I created a table of our items from our master spreadsheet and loaded that table into BigQuery, so that I would have a subcategory matched to each UPC code. From there, I'd write a query to generate a new column in the sales data that populates a subcategory IF the UPC code of the product on the line matches a UPC in the table I've added.
Does that make sense? How would I go about doing that? What would the SQL look like to perform such a task?
I haven't started the task yet, still exploring options on BigQuery and/or Connected Sheets. Any help would be appreciated!
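For what it's worth, the lookup you describe might look roughly like this in BigQuery. This is only a sketch; the project, dataset, table, and column names are hypothetical placeholders:
SELECT
  s.*,
  m.subcategory
FROM `my_project.my_dataset.sales` AS s
LEFT JOIN (
  -- Collapse duplicates so each UPC maps to exactly one subcategory,
  -- sidestepping the duplicate-UPC problem hit in the Sheets lookup.
  SELECT upc, ANY_VALUE(subcategory) AS subcategory
  FROM `my_project.my_dataset.item_master`
  GROUP BY upc
) AS m
  ON s.upc = m.upc;
A LEFT JOIN keeps discontinued items in the result with a NULL subcategory instead of dropping them.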

How to populate all possible combinations of values in columns, using Spark/normal SQL

I have a scenario where my original dataset looks like the below.
Data:
Country,Commodity,Year,Type,Amount
US,Vegetable,2010,Harvested,2.44
US,Vegetable,2010,Yield,15.8
US,Vegetable,2010,Production,6.48
US,Vegetable,2011,Harvested,6
US,Vegetable,2011,Yield,18
US,Vegetable,2011,Production,3
Argentina,Vegetable,2010,Harvested,15.2
Argentina,Vegetable,2010,Yield,40.5
Argentina,Vegetable,2010,Production,2.66
Argentina,Vegetable,2011,Harvested,15.2
Argentina,Vegetable,2011,Yield,40.5
Argentina,Vegetable,2011,Production,2.66
Bhutan,Vegetable,2010,Harvested,7
Bhutan,Vegetable,2010,Yield,35
Bhutan,Vegetable,2010,Production,5
Bhutan,Vegetable,2011,Harvested,2
Bhutan,Vegetable,2011,Yield,6
Bhutan,Vegetable,2011,Production,3
Now there is a very small country lookup table which lists all possible countries the source data can come with.
I want the output data's number of columns to always be fixed. This is to ensure the reporting/visualization tool doesn't receive a dynamic number of columns with every day's new source-data ingestion, which would otherwise vary with the number of distinct countries present.
So I have to somehow join the source data with the country_lookup csv and populate all those columns, with F as the default value. Every country column would be binary, with T or F being the possible values.
The original dataset from the above has to be converted into below:
Data (for rows where Type is Derived Yield, I've left the Amount field as an unevaluated expression rather than calculating it, for better understanding and so you can match it against the formulae):
Country,Commodity,Year,Type,Amount,US,Argentina,Bhutan,India,Nepal,Bangladesh
US,Vegetable,2010,Harvested,2.44,T,F,F,F,F,F
US,Vegetable,2010,Yield,15.8,T,F,F,F,F,F
US,Vegetable,2010,Production,6.48,T,F,F,F,F,F
US,Vegetable,2010,Derived Yield,(2.44+15.2)/(6.48+2.66),T,T,F,F,F,F
US,Vegetable,2010,Derived Yield,(2.44+7)/(6.48+5),T,F,T,F,F,F
US,Vegetable,2010,Derived Yield,(2.44+15.2+7)/(6.48+2.66+5),T,T,T,F,F,F
US,Vegetable,2011,Harvested,6,T,F,F,F,F,F
US,Vegetable,2011,Yield,18,T,F,F,F,F,F
US,Vegetable,2011,Production,3,T,F,F,F,F,F
US,Vegetable,2011,Derived Yield,(6+10)/(3+9),T,T,F,F,F,F
US,Vegetable,2011,Derived Yield,(6+2)/(3+3),T,F,T,F,F,F
US,Vegetable,2011,Derived Yield,(6+10+2)/(3+9+3),T,T,T,F,F,F
Argentina,Vegetable,2010,Harvested,15.2,F,T,F,F,F,F
Argentina,Vegetable,2010,Yield,40.5,F,T,F,F,F,F
Argentina,Vegetable,2010,Production,2.66,F,T,F,F,F,F
Argentina,Vegetable,2010,Derived Yield,(2.44+15.2)/(6.48+2.66),T,T,F,F,F,F
Argentina,Vegetable,2010,Derived Yield,(15.2+7)/(2.66+5),F,T,T,F,F,F
Argentina,Vegetable,2010,Derived Yield,(2.44+15.2+7)/(6.48+2.66+5),T,T,T,F,F,F
Argentina,Vegetable,2011,Harvested,10,F,T,F,F,F,F
Argentina,Vegetable,2011,Yield,90,F,T,F,F,F,F
Argentina,Vegetable,2011,Production,9,F,T,F,F,F,F
Argentina,Vegetable,2011,Derived Yield,(6+10)/(3+9),T,T,F,F,F,F
Argentina,Vegetable,2011,Derived Yield,(10+2)/(9+3),F,T,T,F,F,F
Argentina,Vegetable,2011,Derived Yield,(6+10+2)/(3+9+3),T,T,T,F,F,F
Bhutan,Vegetable,2010,Harvested,7,F,F,T,F,F,F
Bhutan,Vegetable,2010,Yield,35,F,F,T,F,F,F
Bhutan,Vegetable,2010,Production,5,F,F,T,F,F,F
Bhutan,Vegetable,2010,Derived Yield,(2.44+7)/(6.48+5),T,F,T,F,F,F
Bhutan,Vegetable,2010,Derived Yield,(15.2+7)/(2.66+5),F,T,T,F,F,F
Bhutan,Vegetable,2010,Derived Yield,(2.44+15.2+7)/(6.48+2.66+5),T,T,T,F,F,F
Bhutan,Vegetable,2011,Harvested,2,F,F,T,F,F,F
Bhutan,Vegetable,2011,Yield,6,F,F,T,F,F,F
Bhutan,Vegetable,2011,Production,3,F,F,T,F,F,F
Bhutan,Vegetable,2011,Derived Yield,(2.44+7)/(6.48+5),T,F,T,F,F,F
Bhutan,Vegetable,2011,Derived Yield,(10+2)/(9+3),F,T,T,F,F,F
Bhutan,Vegetable,2011,Derived Yield,(6+10+2)/(3+9+3),T,T,T,F,F,F
Formulae for populating Amount Field for Derived Type:
Derived Amount = (sum of Harvested for all countries marked T) / (sum of Production for all countries marked T), grouped by the Year and Commodity columns.
So the target is to generate every combination of the countries from the source and calculate the sums of the respective Harvested and Production values, which then have to be divided. There can be more than one commodity for any given country in the actual scenario, but that should not matter, as the amounts are summed grouped by commodity and year.
Note: users in the frontend can select any combination of countries. The sole reason for doing this in the backend, rather than dynamically in the frontend, is that AWS QuickSight (our visualisation tool), although it can sum over selected column filters, doesn't yet support calculations on those derived summed fields. Hence the calculation for every combination of countries has to be pre-populated (a very naive approach) so that it is available in the report for any user selection of countries.
Also, if you have a better approach than the naive one mentioned in the note, you are most welcome to guide me. I've also posted a question on the same problem, without writing my expected approach, for experts to show how this kind of problem can be solved better than this naive approach. If you want to help solve it with some other technique, you're most welcome; here is the link to that question.
Any help will be greatly appreciated.
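For reference, here is how the pairwise slice of this could be sketched in Spark SQL. It assumes the source CSV is registered as a temp view named data with columns (Country, Commodity, Year, Type, Amount); the names are placeholders, triples need one more self-join in the same pattern, and the remaining lookup countries each get their own CASE expression:
-- Step 1: pivot Harvested/Production amounts into columns per country, commodity and year.
WITH agg AS (
  SELECT Country, Commodity, Year,
         SUM(CASE WHEN Type = 'Harvested'  THEN Amount END) AS harvested,
         SUM(CASE WHEN Type = 'Production' THEN Amount END) AS production
  FROM data
  GROUP BY Country, Commodity, Year
)
-- Step 2: one row per unordered pair of countries, with the Derived Yield formula
-- and the fixed T/F flag columns (two shown; add one CASE per lookup country).
SELECT a.Commodity,
       a.Year,
       'Derived Yield' AS Type,
       (a.harvested + b.harvested) / (a.production + b.production) AS Amount,
       CASE WHEN 'US' IN (a.Country, b.Country) THEN 'T' ELSE 'F' END AS US,
       CASE WHEN 'Argentina' IN (a.Country, b.Country) THEN 'T' ELSE 'F' END AS Argentina
FROM agg a
JOIN agg b
  ON a.Commodity = b.Commodity
 AND a.Year = b.Year
 AND a.Country < b.Country  -- '<' keeps exactly one row per unordered pair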

Merge two CSVs and collate data

I have two CSV files, the first like so:
Book1:
ID,TITLE,SUBJECT
0001,BLAH,OIL
0002,BLAH,HAMSTER
0003,BLAH,HAMSTER
0004,BLAH,PLANETS
0005,BLAH,JELLO
0006,BLAH,OIL
0007,BLAH,HAMSTER
0008,BLAH,JELLO
0009,BLAH,JELLO
0010,BLAH,HAMSTER
0011,BLAH,OIL
0012,BLAH,OIL
0013,BLAH,OIL
0014,BLAH,JELLO
0015,BLAH,JELLO
0016,BLAH,HAMSTER
0017,BLAH,PLANETS
0018,BLAH,PLANETS
0019,BLAH,HAMSTER
0020,BLAH,HAMSTER
And then a second CSV with items associated with the first list, with ID being the common attribute between the two.
Book2:
ID,ITEM
0001,PURSE
0001,STEAM
0001,SEASHELL
0002,TRUMPET
0002,TRAMPOLINE
0003,PURSE
0003,DOLPHIN
0003,ENVELOPE
0004,SEASHELL
0004,SERPENT
0004,TRUMPET
0005,CAR
0005,NOODLE
0006,CANNONBALL
0006,NOODLE
0006,ORANGE
0006,SEASHELL
0007,CREAM
0007,CANNONBALL
0007,GUM
0008,SERPENT
0008,NOODLE
0008,CAR
0009,CANNONBALL
0009,SERPENT
0009,GRAPE
0010,SERPENT
0010,CAR
0010,TAPE
0011,CANNONBALL
0011,GRAPE
0012,ORANGE
0012,GUM
0012,SEASHELL
0013,NOODLE
0013,CAR
0014,STICK
0014,ORANGE
0015,GUN
0015,GRAPE
0015,STICK
0016,BASEBALL
0016,SEASHELL
0017,CANNONBALL
0017,ORANGE
0017,TRUMPET
0018,GUM
0018,STICK
0018,GRAPE
0018,CAR
0019,CANNONBALL
0019,TRUMPET
0019,ORANGE
0020,TRUMPET
0020,CHERRY
0020,ORANGE
0020,GUM
The real datasets are millions of records, so I'm sorry in advance for my simple example.
The problem I need to solve is getting the data merged and collated in a way that lets me see which item groupings most commonly appear together on the same ID (e.g. GRAPE, GUM, SEASHELL appear together 340 times; ORANGE and STICK 89 times; etc.).
Then I need to see if there is any change/deviation to the general results in common appearance when grouped by SUBJECT.
Tools I'm familiar with are Excel and SQL, but I also have PowerBI and Alteryx at my disposal.
Full disclosure: Not homework, or work, but a volunteer project, thus my unfamiliarity with this kind of data manipulation.
Thanks in advance.
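Since SQL is on your list of tools, the pair-counting part can be sketched like this (table and column names follow the example above; extending to triples means one more self-join in the same pattern):
-- Count how often each unordered pair of items appears on the same ID.
-- b1.ITEM < b2.ITEM keeps one row per pair: no self-pairs, no mirrored duplicates.
SELECT b1.ITEM AS item_1,
       b2.ITEM AS item_2,
       COUNT(*) AS pair_count
FROM Book2 AS b1
JOIN Book2 AS b2
  ON b1.ID = b2.ID
 AND b1.ITEM < b2.ITEM
GROUP BY b1.ITEM, b2.ITEM
ORDER BY pair_count DESC;
For the per-SUBJECT comparison, join Book1 on ID and add SUBJECT to the GROUP BY.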
An Alteryx solution:
Drag the two .csv files onto your canvas (seen as book1.csv and book2.csv in my picture); Alteryx will create "Input" tools for you.
Drag a "Join" tool on and connect the two .csv files to its inputs; select "ID" as the join field; unselect the "Right_ID" as output since it's merely a duplicate of "ID"
Drag a "Summary" tool on and connect the Join tool's output to the Summary tool's input; select all three of the outputs and add as a "group by"... then add the ID column with a "count"
Drag a browse tool on and connect the summary's output to the browse tool's input.
Run the workflow.
After all that, click on the Browse tool and you should see the result (my screenshot showed just the first ten rows of output).
+1 for taking on a volunteer project - I think anyone who knows data can have a big impact in support of their favourite group or cause.
I would just pull the 2 files into Power BI as 2 separate tables (Get Data / From File). Create a relationship between the 2 tables based on ID (it might get auto-generated). It should be one to many.
Then I would add a Calculated Column to the Book1 table to concatenate the related ITEM values, e.g.:
Items =
CALCULATE (
    CONCATENATEX (
        DISTINCT ( 'Book2'[ITEM] ),
        'Book2'[ITEM],
        ", ",
        'Book2'[ITEM], ASC
    )
)
Now you can use that Items field in visuals (e.g. a Table), along with Count of ID to get the frequency.
Adding Subject to a copy of the table (e.g. to the Columns well of a Matrix) will produce your grouped scenario, or you could add a Subject Slicer.
As you will be comparing subsets of varying size, I would change Count of ID to Show value as - % of grand total.
A slightly different solution using Alteryx.
With this dataset, there are very few repeating 3- or 4-item groups. You can do the two-item affinity analysis and derive a probability for 3- or 4-item groups, or you can count the 3- and 4-item groups individually. I believe what you want is the latter, as your probability of getting grapes with oranges may be altered by whether or not you have bananas in the cart.
Anyway, I did not join in the subject until after finding all of my combinations. I found all the combinations by taking the Cartesian join of two, then three, then four copies of the original set. I then removed all duplicates by ensuring items were always in alphabetical order in each row, and counted the occurrences of each combination. More joins can be added in the same pattern to count groups of 5, 6, 7...
Once you have the counts of occurrences, I would join back with the subjects, perform this analysis on each group, and compare to the overall results.
I'm supposed to disclose that I work for Alteryx.
First of all, if you are using Windows, just navigate to the directory that contains the CSV files and run the following command:
copy pattern newfileName.csv
#example
copy *.csv merged.csv
Now you have created one CSV file. If the file is too large to process at once, use whatever approach suits your programming language: in Python you can use generators to process it line by line, or with pandas you can read it chunk by chunk.
I hope this helps you.

What is the difference between Pentaho DI "variables" and "fields"?

Could not find much information about this. I can see that fields can carry a different value on every row of a transformation. But what are variables? Are they unique across all the rows a transformation produces? By that name, though, variables sound like they are meant to vary.
What exactly is the difference between fields and variables?
Can someone enlighten me, please?
Thank you
PDI transformations work with a stream of rows that pass through all the steps. The rows consist of a number of fields that the steps can act on, converting them, filtering them, sorting, etc.
Variables are more like a configuration aid and have a single value for the whole transformation. It's very important to remember that they can NOT be set/changed and used within the same transformation, because all the steps execute in parallel!
Example
In your transformation you have a variable called "last_staging_run" and its value is "2017/01/19 05:00:00". This one has been passed to the transformation from the parent job.
You then use it in a Table Input step (with variable substitution enabled):
SELECT id, product_id, price, number
FROM sales
WHERE purchase_date > '${last_staging_run}'
This will give you the new rows since the last staging run, with the fields id, product_id, price and number. You might then look up the product names or filter out zero-price products with other steps, then store the result in a table again.

SQL Server Query: Daily Data Snapshot Comparison (Counting Delta Occurrences)

I am working towards counting customer subscription ("package") changes. To do this, I select all data from my package table once every day. I am calling the daily query results "snapshots" (approx. 500k rows). I then load the snapshot data into a new table. After 10 days I have a total of 5 million rows in the snapshots table (500k rows * 10 days). The majority of customers (65%) do not change packages. I need to report which customers, of the remaining 35%, are switching packages; when they are switching; what package changes they are making (from "package X" to "package Y"); and which customers are changing packages most frequently.
The query I have written uses a self-join. I am identifying the changes but my results contain duplicate rows.
This is my query:
select *
from UserPackageDump UPD1, UserPackageDump UPD2
where UPD1.user_id = UPD2.user_id
and UPD1.package_id <> UPD2.package_id
How can I change this query to yield only distinct results?
SELECT DISTINCT *
FROM UserPackageDump UPD1
JOIN UserPackageDump UPD2
  ON UPD1.user_id = UPD2.user_id
WHERE UPD1.package_id <> UPD2.package_id
You have many options for doing this, and I'm not sure your approach is the right one to take. Firstly, to answer your specific question, you could perform a DISTINCT as per #sqlab's answer. Or you could include the date in the join, ensuring that UPD1 only matches a record in UPD2 that is one day different.
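A sketch of that date-aware join, assuming the snapshots table carries a snapshot_date column recording which day's copy each row came from (the column name is hypothetical):
-- Pair each snapshot row with the same user's row from the previous day,
-- keeping only the rows where the package actually changed.
SELECT
    UPD1.user_id,
    UPD1.package_id AS old_package_id,
    UPD2.package_id AS new_package_id,
    UPD2.snapshot_date AS change_date
FROM UserPackageDump AS UPD1
JOIN UserPackageDump AS UPD2
    ON UPD2.user_id = UPD1.user_id
   AND UPD2.snapshot_date = DATEADD(day, 1, UPD1.snapshot_date)
WHERE UPD1.package_id <> UPD2.package_id;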
However, to come back to the approach, there should be no need to take a full copy of all the data. You have lots of other options for more efficient data storage, some of which are:
Put a "LastUpdated" datetime2 field in the database, to be populated each time the row is changed. Copy only those rows that have a LastUpdated more recent than the last time the copy was made. Assuming the only change possible to the table is to change the package_id then you will now only have rows in the table for users that have changed.
Create a UserPackageHistory table into which rows are written each time a user subscribes to a package, at the same time that UserPackage is updated. This then leaves you with much the same result as the first bullet, but in advance of running the copy job.
Then, with any one of these sets of data, to satisfy the reporting requirements you could populate a cube. Your source would be a set of rows containing user_id, old_package_id, new_package_id and date. You would create a measure group containing these measures:
Distinct count of user_id
Count of switches (basically just the row count of the source data)
This measure group could then be related to the following dimensions:
Date, so you can see when the switches are taking place
User, so you can drill down on who is switching
Switch Type, which is a dimension built by selecting the old_package_id and new_package_id from your source data. This gives you the ability to see the popularity of particular shifts.
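If you want to prototype those measures before building a cube, a plain grouping query over that source set (here a hypothetical PackageSwitches table shaped as described above) produces the same numbers:
-- Popularity of each switch type, plus how many distinct users made it.
SELECT
    old_package_id,
    new_package_id,
    COUNT(*) AS switch_count,              -- the "count of switches" measure
    COUNT(DISTINCT user_id) AS user_count  -- the "distinct count of user_id" measure
FROM PackageSwitches
GROUP BY old_package_id, new_package_id
ORDER BY switch_count DESC;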