This might be simple, but I can't wrap my head around it... I need a Custom Column in Power Query to return data from a specific column in another row
Currently I have location data for all employee ID numbers, but for some, the location is blank. In this data, in any given employee's row, there is also their manager's ID#.
What I need is a custom column that returns the employee's manager's location IF the employee's location is blank. For now, I am not looking to fix managers that also do not have a location; if the manager's location is blank, I am OK with the employee pulling a blank in those cases only.
Any help would be greatly appreciated.
Merge the table with itself: match the manager ID column on top to the employee ID column on the bottom.
Expand [x] Location using the arrows atop the new column.
Add Column ... Custom Column ... with the formula
= if [location] = null then [putnameofnewcolumnyouexpandedhere] else [location]
Right-click to remove the extra columns.
let
    Source = Excel.CurrentWorkbook(){[Name="Table2"]}[Content],
    #"Merged Queries" = Table.NestedJoin(Source, {"ManagerID"}, Source, {"ID"}, "Source", JoinKind.LeftOuter),
    #"Expanded Source" = Table.ExpandTableColumn(#"Merged Queries", "Source", {"Location"}, {"Manager.Location"}),
    #"Added Custom" = Table.AddColumn(#"Expanded Source", "CombinedLocation", each if [Location] = null then [Manager.Location] else [Location]),
    #"Removed Columns" = Table.RemoveColumns(#"Added Custom", {"Location", "Manager.Location"})
in
    #"Removed Columns"
You can do something like this:
let
    Source = Excel.CurrentWorkbook(){[Name="Table1"]}[Content],
    AddLocation = Table.AddColumn(
        Source,
        "LocationCleaned",
        (a) =>
            if a[location] = null then
                // look up the row whose employee ID matches this row's manager ID and take its location;
                // {0}? returns null (instead of an error) if the manager ID is not found in the table
                Table.SelectRows(Source, each [emp ID] = a[manager])[location]{0}?
            else
                a[location]
    )
in
    AddLocation
Can you please help me to get the item with the highest count using DAX?
Measure = FIRSTNONBLANK('Table1'[ItemName],CALCULATE(COUNT('Table2'[Instance])))
This shows the first ItemName in the table but doesn't get the ItemName with the highest count.
Thanks
Well, it's more complicated than I would have wanted, but here's what I came up with.
There are a couple of things you are hoping to do that are not so straightforward in DAX. First, you want an aggregated aggregation ;) -- in this case, the max of a count. Second, you want to use a value from one column that you identify by what's in another column. That's row-based thinking, and DAX prefers column-based thinking.
So, to do the aggregate of aggregates, we just have to slog through it. SUMMARIZE gives us counts of items. MAX and ranking functions could help us find the biggest count, but wouldn't be so useful for getting the ItemName. TOPN gives us the whole row where our count is the biggest.
But now we need to get our ItemName out of that row, so SELECTCOLUMNS lets us pick the field to work with. Finally, we really want a value, not a 1-column, 1-row table, so FIRSTNONBLANK finishes the job.
Hope it helps.
Here's my DAX
MostFrequentItem =
VAR SummaryTable = SUMMARIZE ( 'Table', 'Table'[ItemName], "CountsByItem", COUNT ( 'Table'[ItemName] ) )
VAR TopSummaryItemRow = TOPN(1, SummaryTable, [CountsByItem], DESC)
VAR TopItem = SELECTCOLUMNS (TopSummaryItemRow, "TopItemName", [ItemName])
RETURN FIRSTNONBLANK (TopItem, [TopItemName])
Here's the DAX without using variables (not tested, sorry. Should be close):
MostFrequentItem_2 =
FIRSTNONBLANK (
SELECTCOLUMNS (
TOPN (
1,
SUMMARIZE ( 'Table', 'Table'[ItemName], "Count", COUNT ( 'Table'[ItemName] ) ),
[Count], DESC
),
"ItemName", [ItemName]
),
[ItemName]
)
Here's the mock data:
let
Source = Table.FromRows(Json.Document(Binary.Decompress(Binary.FromText("i45WcipNSspJTS/NVYrVIZ/nnFmUnJOKznRJzSlJxMlyzi9PSs3JAbODElMyizNQmLEA", BinaryEncoding.Base64), Compression.Deflate)), let _t = ((type text) meta [Serialized.Text = true]) in type table [Stuff = _t]),
#"Changed Type" = Table.TransformColumnTypes(Source,{{"Stuff", type text}}),
#"Renamed Columns" = Table.RenameColumns(#"Changed Type",{{"Stuff", "ItemName"}})
in
#"Renamed Columns"
I have a column "% sum of all" in Power Query. I need to create a custom column "Sum Consecutive" whose value in each row is the "% sum of all" of the current row plus the "Sum Consecutive" value of the previous row.
Current row situation
New Custom Column Expectation
The two images show the current situation and the result I need in Power Query.
Can you please help me find a code/command to create this new column?
Although there are similar solved questions for DAX, I still need to keep editing the file afterwards, so it should be done in the M language in Power Query.
Thank you!
Not sure how performant my approaches are. I would think both should be reasonably efficient as they only loop over each row in the table once (and "remember" the work done in the previous rows). However, maybe the conversion to records/list and then back to table is slow for large tables (I don't know).
Approach 1: Isolate the input column as a list, transform the list by cumulatively adding, put the transformed list back in the table as a new column.
let
someTable = Table.FromColumns({List.Repeat({0.0093}, 7) & List.Repeat({0.0086}, 7) & {0.0068, 0.0068}}, {"% of sum of all"}),
listToLoopOver = someTable[#"% of sum of all"],
cumulativeSum = List.Accumulate(List.Positions(listToLoopOver), {}, (listState, currentIndex) =>
let
numberToAdd = listToLoopOver{currentIndex},
sum = try listState{currentIndex - 1} + numberToAdd otherwise numberToAdd,
append = listState & {sum}
in
append
),
backToTable = Table.FromColumns(Table.ToColumns(someTable) & {cumulativeSum}, Table.ColumnNames(someTable) & {"Cumulative sum"})
in
backToTable
Approach 2: Convert the table to a list of records, loop over each record and add a new field (representing the new column) to each record, then convert the transformed list of records back into a table.
let
someTable = Table.FromColumns({List.Repeat({0.0093}, 7) & List.Repeat({0.0086}, 7) & {0.0068, 0.0068}}, {"% of sum of all"}),
listToLoopOver = Table.ToRecords(someTable),
cumulativeSum = List.Accumulate(List.Positions(listToLoopOver), {}, (listState, currentIndex) =>
let
numberToAdd = Record.Field(listToLoopOver{currentIndex}, "% of sum of all"),
sum = try listState{currentIndex - 1}[Cumulative sum] + numberToAdd otherwise numberToAdd, // 'try' should only be necessary for first item
recordToAdd = listToLoopOver{currentIndex} & [Cumulative sum = sum],
append = listState & {recordToAdd}
in
append
),
backToTable = Table.FromRecords(cumulativeSum)
in
backToTable
I couldn't find a function in the reference for M/Power Query that sums a list cumulatively.
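If you want to reuse the pattern, here is a minimal sketch of a cumulative-sum helper built on List.Accumulate (the name CumulativeSum is just for illustration; it is not a built-in library function):
let
    CumulativeSum = (values as list) as list =>
        // seed the running list with 0, append a running total per item, then drop the seed
        List.Skip(
            List.Accumulate(
                values,
                {0},
                (state, current) => state & {List.Last(state) + current}
            ),
            1
        )
in
    CumulativeSum({1, 2, 3, 4})  // returns {1, 3, 6, 10}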
I am using Visual Studio 2017 and I installed the latest Crystal Report Viewer (version 22).
What I want is to click a button and create a report from the customer that is selected in the DataGridView and the addresses that are shown in the second DataGridView.
I managed to do all that, but the problem is that a few fields contain numbers which need to be converted to text. The SQL query I would use to do this would be something like:
SELECT c.customer_nr, c.status, s.rename FROM CUSTOMERS c INNER JOIN SETUP s on s.id = c.status WHERE s.afk = 'STA'
In my SETUP table I have the columns ID, AFK and RENAME, so if the status is 1 it converts to the text "ACTIVE", and if the status is 2 it converts to "INACTIVE", for example.
I could do something with a formula field like this:
IF ({c.status} = 1) THEN "ACTIVE" ELSE
IF ({c.status} = 2) THEN "INACTIVE"
but that is not good, because I could add another status or change the name in the database, etc.
So then I tried with an SQL expression field and I put something like this:
(
SELECT "SETUP"."RENAME" FROM SETUP
WHERE "SETUP"."AFK" = 'STA' AND "SETUP"."ID" = "CUSTOMERS"."STATUS"
)
Something must be wrong: I get the correct conversion, but although there is only one address in the database, I get 7 pages, all with the same address. There should only be one address, as there is when I remove the SQL expression field. Where does it go wrong?
* EDIT *
I found the problem. When I create a new table that contains only unique IDs, it works. In my original table the IDs 1, 2, 3, 4, 5 appear multiple times, but with different abbreviations in the AFK column. Somehow the query looks up the ID value and, every time it finds that ID regardless of the AFK value, it generates another entry for the address value.
Maybe in the future I will find out how exactly this works; for now I have a workaround.
Create a new table for example CrRepSta and add the following entries:
ID,AFK,RENAME
1,STA,Active
2,STA,Inactive
etc
The new query:
(
SELECT "CrRepSta"."RENAME" FROM CrRepSta
WHERE "CrRepSta"."AFK" = 'STA' AND "CrRepSta"."ID" = "CUSTOMERS"."STATUS"
)
And by the way the statement "CrRepSta"."AFK" = 'STA' is not really needed.
Hi, I have a Power BI report which has one static column, Object1, and dynamic value columns. I want to add a calculated column which calculates the difference between the LAST 2 columns, to get the increase in sales for the last month. Any idea how this can be done in Power BI using DAX or Power Query? Thanks
This is a bit clunky, but I think it does what you want.
#"Unpivoted Columns" = Table.UnpivotOtherColumns(PreviousStepNameHere, {"Object1"}, "Attribute", "Value"),
#"Filtered Last 2" = Table.SelectRows(#"Unpivoted Columns", each List.Contains(List.LastN(#"Unpivoted Columns"[Attribute], 2), [Attribute])),
#"Added Custom" = Table.AddColumn(#"Filtered Last 2", "Custom", each if List.Contains(List.LastN(#"Unpivoted Columns"[Attribute], 1), [Attribute]) then [Value] else -[Value]),
#"Grouped Rows" = Table.Group(#"Added Custom", {"Object1"}, {{"Value", each List.Sum([Custom]), type number}}),
#"Added Custom1" = Table.AddColumn(#"Grouped Rows", "Attribute", each "Calculated_Column_Difference_Last2_Columns"),
#"Appended Query" = Table.Combine({#"Unpivoted Columns", #"Added Custom1"}),
#"Pivoted Column" = Table.Pivot(#"Appended Query", List.Distinct(#"Appended Query"[Attribute]), "Attribute", "Value")
Unpivot should preserve the column order. You filter the last two and switch the sign of the 2nd to last to get the difference when you group and sum. Add the desired column name as a custom column named Attribute. Append that back to your original unpivoted table and then re-pivot.
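If all you actually need is that difference as one extra column on the original (wide) table, here is a shorter, untested sketch of an alternative: grab the names of the last two columns and subtract them row by row. The step and column names below (including Difference_Last2) are just placeholders:
// names of the last two (most recent) columns, e.g. the last two months
#"Last Two Columns" = List.LastN(Table.ColumnNames(PreviousStepNameHere), 2),
// last column minus second-to-last column, computed per row
#"Added Difference" = Table.AddColumn(PreviousStepNameHere, "Difference_Last2", each Record.Field(_, #"Last Two Columns"{1}) - Record.Field(_, #"Last Two Columns"{0}), type number)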
I'm using Access forms and tables for the purposes of our project. We have an Access form template that sends data to a table (Defaults) when something is changed, added or deleted on the form. The form stores the value in the "DefaultValue" and updates the table with it. Basically, it sends data to the table, updates it on the spot (if needed), and afterwards retrieves it back into the "DefaultValue" so we can work with it in our forms.
That form data goes into a table [Defaults], which has 3 columns: FormName, ControlName and DefaultVal. Each value in a form is categorized in the table by its [FormName] and [ControlName]; together these two form the primary key. We use the same templated form, under different form names, multiple times. The form field names in the template are the same. [DefaultVal] has a Memo/LongText column format.
Example: A row in "Defaults" reads Form_Sample|Text224|Test. This means there is a form named "Form_Sample" with a field named "Text224" with the DefaultValue "Test".
So my question is: I want to pull the data from [Defaults].[DefaultVal] into the table "Analysis". The information needs to go into 7 columns (let's call them Column1, Column2, etc., since the real names are confidential). The [ControlName] column defines which of the 7 columns in Analysis the data from [DefaultVal] goes into. Since form field names in our different forms are the same, further categorization is needed so that they don't overwrite each other: to determine whether a [DefaultVal] is unique, [FormName] must be used first, and then [ControlName].
Example: If you have Text224|Apple from one form and Text224|Orange from another, they would overwrite due to the same [ControlName]. However, if we use two primary keys and have Form1|Text224|Apple and Form2|Text224|Orange, they would be unique. This is how our table defines if the value in [DefaultVal] should be overwritten or a new one should be created. The values in [DefaultVal] will change constantly and can be the same so I cannot pull the data based on that column.
I need specific values in [DefaultVal] to be on the same row based on the [FormName] and in different columns based on the [ControlName]. The table "Analysis" must be able to update information from "Defaults" and add new such.
This is how I would picture the Query:
General Functions: The query can identify whether there are new rows in [Defaults] and add them according to the logic below. The query can identify whether some of the values in [Defaults] have changed and update them accordingly.
Data: Text1, Text2, Text3, Text4, Text5, Text6 and Text7 exist in [Defaults] as [ControlName]. They all have some value in [DefaultVal]. They are all part of the form "Sample" [FormName]. So their [FormName]|[ControlName]|[DefaultVal] would look like this in the columns: Sample|Text1|Value; Sample|Text2|Value2; etc. There is also another set of Text1-7 [ControlName]s, but with a different [FormName]. The values in [DefaultVal] can be the same or different.
Rule: The query knows that the [DefaultVal] value of "Text1" as a [ControlName] needs to go into [Column1] from the table [Analysis]. Text2's value goes to [Column2], Text3 into [Column3] etc.
Logic: The query takes both instances of Text1's values and wants to put them in [Column1] of [Analysis]. It sees that they have different [FormName]s, so it puts them on different rows. Then it takes both instances of Text2's values, sees that they have different [FormName]s, and wants to put them on different rows. It sees that both [FormName]s already exist in Analysis and puts the values on the corresponding rows.
I'm really sorry if this is too long, I'm guessing it is, but I don't know how to explain it otherwise. I have some basic understanding of SQL queries, but this goes way beyond my understanding and I don't even know what commands to use and where.
Can someone please help? =/
INSERT INTO Analysis ([Control Name], [Control (Sub)-Section], [Control Objective], [Control Owner], [Branch Location], [Review Period], [Operating Effectiveness], [Conclusion and Recommendations])
SELECT Defaults.DefaultVal, Analysis.*
FROM Defaults as Def, Analysis as An
UNION ALL
On An.[Control Name] = Def.DefaultVal, An.[Control (Sub)-Section] = Def.DefaultVal, An.[Control Objective] = Def.DefaultVal, An.[Control Owner] = Def.DefaultVal, An.[Branch Location] = Def.DefaultVal, An.[Review Period] = Def.DefaultVal, An.[Operating Effectiveness] = Def.DefaultVal, An.[Conclusion and Recommendations] = Def.DefaultVal
PIVOT Def.FormName
WHERE NOT EXISTS
(
'SELECT *
FROM Defaults as Def
WHERE Def.ControlName = "Text684", Def.ControlName = "Text688", Def.ControlName = "Text580", Def.ControlName = "Text473", Def.ControlName = "Label548", Def.ControlName = "Label545", Def.ControlName = "Text925", Def.ControlName = "Text929"
PIVOT FormName'
)
ELSE
UPDATE Analysis
SET [Control Name], [Control (Sub)-Section], [Control Objective], [Control Owner], [Branch Location], [Review Period], [Operating Effectiveness], [Conclusion and Recommendations] =
(
'SELECT DefaultVal
FROM Defaults
WHERE Def.ControlName = "Text684", Def.ControlName = "Text688", Def.ControlName = "Text580", Def.ControlName = "Text473", Def.ControlName = "Label548", Def.ControlName = "Label545", Def.ControlName = "Text925", Def.ControlName = "Text929"
PIVOT FormName'
)
;
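As a side note, the one-row-per-[FormName], one-column-per-[ControlName] shape described under "Logic" is what an Access crosstab (TRANSFORM ... PIVOT) query produces. The following is only a minimal sketch using the generic Text1-Text7 control names from the example; mapping the resulting columns into the 7 [Analysis] columns, and deciding between inserting new rows and updating existing ones, would still need separate INSERT/UPDATE queries on top of it.
TRANSFORM First(d.DefaultVal)
SELECT d.FormName
FROM Defaults AS d
WHERE d.ControlName IN ("Text1","Text2","Text3","Text4","Text5","Text6","Text7")
GROUP BY d.FormName
PIVOT d.ControlName;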