So I have this QVD where records ceased to update incrementally; the last record is dated 25/04/2022. However, I am able to generate new dates to calculate the interest accrued with the expression: RANGESUM(BELOW(SUM(DailyInterest),0,NOOFROWS())) + SUM(Interest).
My challenge is that whenever I select any subsequent date, the amount defaults to 5,263.25, which is the amount as of 25/04/2022.
Table without any date selection
Table with 27/04/2022 selection
In the above scenario, the amount should read 5,409.45, not 5,263.25.
Help me out here, please!
Table with Daily Interest column
Disclaimer: there is some guesswork in this answer; it might not work.
Your RANGESUM + BELOW(x, x, NOOFROWS()) formula accesses the data table and sums all (NOOFROWS) rows with a date less than or equal to the dimension value.
There is no set analysis in your formula. That means that when you make a selection, the other dates' data are removed from the calculation. Presumably 5,263.25 (or maybe 5,263.25 - 73.10) is an initial value, perhaps SUM(Interest).
One solution would be to add set analysis so that DailyInterest ignores your selections:
RANGESUM(BELOW(SUM({1} DailyInterest), 0, NOOFROWS())) + SUM(Interest)
That might cause your chart to re-show dates you don't need. In this case, you have the generic problem of "I want to see fewer rows, but not actually select less data". You can solve this by enabling "Hide zeroes" and wrapping every measure in IFs to make sure they are zero/null for the dimension values you don't want.
You could, of course, calculate the accrued interest in your data model instead, which would be simpler.
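For example, a minimal load-script sketch (the source table and field names here are assumed) that accumulates DailyInterest once at reload time, so the front-end no longer needs inter-record functions at all:

Accrued:
LOAD
    Date,
    DailyInterest,
    // Running total: add this row's DailyInterest to the previous row's accrual
    If(RowNo() = 1,
       DailyInterest,
       RangeSum(Peek('AccruedInterest'), DailyInterest)) AS AccruedInterest
RESIDENT InterestData    // assumed name of the table loaded from your QVD
ORDER BY Date ASC;

A chart can then show SUM(AccruedInterest) + SUM(Interest) directly, and selections behave predictably.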
I am using SQL Assistant, and my data brings in snapshots from a huge database in the form of timestamps. Occasionally the snapshots come in multiples per hour. The data is correct; multiple snapshots do happen within an hour from time to time, not always, but it does happen.
I am bringing this into Spotfire and viewing it by hour, and when more than one snapshot happens in an hour, the data shows as doubled.
I only want to display one row per hour, preferably the last (max) timestamp for the hour. Example: for the 7 AM hour, the data has a snapshot at 7:10 AM and one at 7:55 AM.
Both are correct, but I only want to display the last (max) timestamp, 7:55 AM in this case. I can't figure the issue out in Spotfire, so I am leaning towards a fix in SQL. How can I display only one row for each hour?
You'd do this similarly to how you'd probably do it in SQL: using a ranking/row-number function.
The basic way Rank works in Spotfire is Rank(order column, order direction, partition columns, tie method).
You need to partition by the combination of Date and Hour, and then sort descending by your timestamp column.
So the code to identify the rows that you want to isolate should be something along the lines of:
Rank([TimestampColumn], "desc", Date([TimestampColumn]), Hour([TimestampColumn]), "ties.method=first")
What you do with it from here depends on how you plan to use the data. For example, you can Limit Data Using Expression and set the code above = 1, which will limit your table accordingly (helpful if you don't want your users to accidentally forget to filter), or you can create a calculated column which turns it into a flag of some form, like here:
If(Rank([TimestampColumn], "desc", Date([TimestampColumn]), Hour([TimestampColumn]), "ties.method=first") = 1, "Latest", "Duplicate")
This allows your users to filter by this property, so they still have the option to look at the extra rows.
Ultimately, though, if you only ever want to see these rows and have no use for the earlier records, I'd probably do it in SQL if you have that ability. This reduces the number of rows you have to load into your analysis.
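Since you mentioned leaning towards SQL, here is a hedged sketch (table and column names assumed; exact date/hour functions vary by database) that keeps only the latest snapshot per hour before the data ever reaches Spotfire:

SELECT *
FROM (
    SELECT s.*,
           ROW_NUMBER() OVER (
               PARTITION BY CAST(snapshot_ts AS DATE),     -- one bucket per day...
                            EXTRACT(HOUR FROM snapshot_ts) -- ...and per hour
               ORDER BY snapshot_ts DESC                   -- latest snapshot first
           ) AS rn
    FROM snapshots s
) t
WHERE rn = 1;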
I'm new to this forum, so if I make mistakes, tell me so I can learn ;).
So the question is: I want to sum a column of a table in Excel, but only where two conditions are met. Table1 has 3 columns: Col1 contains a date, Col2 a price, and Col3 a category in which the price is logged.
I want the sum of all prices for which the date falls within a certain month and the category matches a chosen category.
The code for both individual requirements works, and looks like this:
{=SUM(IF(MONTH(Table1[Date])=MONTH(A3);Table1[Price];0))}
{=SUM(IF(Table1[Category]="Category1";Table1[Price];0))}
However, the combined sum, =SUM(IF(AND(MONTH(Table1[Date])=MONTH(A3);Table1[Category]="Category1");Table1[Price];0)) does not work.
Do you know what I am doing wrong?
Thanks in advance
I think it's not really a case of "doing wrong", though perhaps a poor choice of approach (in my opinion; I happen not to be a fan of structured references). It is just that AND here does not return an array, only a single TRUE (in which case the result would be the sum of all prices) or, much more likely, FALSE (any one condition fails anywhere, in which case the result is 0).
Instead I would recommend a PivotTable.
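That said, if you want to stay with a formula, a common fix is to multiply the conditions instead of using AND, since multiplication is evaluated element-wise over the arrays (still entered as an array formula, keeping your semicolon separators):

{=SUM(IF((MONTH(Table1[Date])=MONTH(A3))*(Table1[Category]="Category1");Table1[Price];0))}

SUMPRODUCT achieves the same without array entry, and SUMIFS would too if the month test were rewritten as date bounds.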
I have a problem here. I would like to sum the work time of my employees based on the data (time2 - time1) daily, and here is my measure:
Effective Minute Work Time = 24 * 60 * (LASTNONBLANK(time2, 0) - FIRSTNONBLANK(time1, 0))
It works daily, but if I drill up to weekly/monthly data it shows the wrong sum, as shown below:
What I want is the sum of the daily differences (time2 - time1) in minutes.
Thanks for your help :)
You have two approaches you can take: the hard way or the easier way :). The harder (at least for me :)) is to use DAX to do this. You would:
1) Create a date table.
2) Use the DAX CALCULATE function to evaluate your last non-blank and first non-blank values (you might need CALCULATETABLE, but I'm not sure; DAX experts jump in), then subtract one from the other.
This will give you correct values for a given day for a given person. You can enforce the latter condition by putting a HASONEVALUE guard on the person name so that your measure informs the report author if they're not using it right.
Doing the same for dates is a little trickier. In the example you show, you are including the date in the row grouping. But if you change your mind and want instead 'total hours worked by person' or 'total hours worked by everyone', you're not done with modelling yet.
Your next step is to use CALCULATETABLE in combination with CALCULATE to create a measure that returns the total. You'll use CALCULATETABLE to evaluate each date and the hours worked on that date by each person; then you'll use CALCULATE to summarize that all down to a single number. If you're not careful with your DAX (or report authoring) you might mix up which person you're summarizing for, so that your first/last non-blank values are not at the person level. It gets intense quickly.
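Here is one hedged sketch using SUMX over SUMMARIZE instead of CALCULATETABLE (same idea: evaluate per person/day, then add up; a single 'Log' table with Employee, Date, time1, time2 columns is assumed):

Effective Minute Work Time =
SUMX (
    SUMMARIZE ( 'Log', 'Log'[Employee], 'Log'[Date] ),  -- one row per person per day
    24 * 60
        * ( CALCULATE ( LASTNONBLANK ( 'Log'[time2], 0 ) )   -- context transition to this person/day
          - CALCULATE ( FIRSTNONBLANK ( 'Log'[time1], 0 ) ) )
)

Because the difference is computed inside the iterator, drilling up to week or month sums the daily results instead of taking first/last across the whole period.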
Your easier solution, though it might be more limited in its application (it depends on your scenario), is to use the query to transform the data into a summary by day and person using the Group By command. This will give you a row per person per day with their start and end times, from which you can quickly calculate the hours worked that day. Then you can quite easily build visuals on top of the summary data. Of course, you give up some of the flexibility of having a proper data model; however, if you have a date table, a person table, and your summary table, and then set up your relationships correctly, you can answer the most common questions.
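A hedged Power Query sketch of that group-by (step and column names assumed):

= Table.Group(
      Source,                      // your previous step
      {"Employee", "Date"},        // one output row per person per day
      {
          {"Start", each List.Min([time1]), type datetime},
          {"End",   each List.Max([time2]), type datetime}
      }
  )

A custom column with Duration.TotalMinutes([End] - [Start]) then gives the minutes worked that day, which a plain SUM handles correctly at any drill level.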
While designing a table whose values take effect based on a date (e.g. currency rates), which one is better?
Effective date (find the max(effective date) and get the current value)
From date / to date condition (with to date greater than or equal to SYSDATE)
Regards.
It depends. You have a table representing changes to a particular entity, and you want to record when the entity changed, and when the change was replaced by a subsequent change.
1. If you record just the Effective Date for each event, INSERTs are simpler: they don't need to find the earlier record and update it. However, querying becomes more complex: you'll need to run a window over the dataset to find which record is applicable at any one time (sketched in SQL after this list), potentially resulting in poorer performance.
Another downside is that this model is open-ended; you can't record a currency as "permanently closed" (which you could do if you had an End date).
2. If you record the Start and End dates for each event, queries are simpler: you only need to look at each row individually to know whether it is "current" as of any point in time or not. INSERTs, however, are slightly more complex - when you insert a new event, you have to update the earlier event to mark its End date.
This model is closed-ended; you can put an End date on the final event and have no "open" record.
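To make the trade-off concrete, here are hedged SQL sketches of both lookups (a currency_rate table with invented column names is assumed):

-- 1. Effective-date model: a window (or correlated) lookup per currency.
SELECT currency, rate
FROM (
    SELECT currency, rate,
           ROW_NUMBER() OVER (PARTITION BY currency
                              ORDER BY effective_date DESC) AS rn
    FROM currency_rate
    WHERE effective_date <= DATE '2022-04-25'
) t
WHERE rn = 1;

-- 2. From/to model: each row answers "am I current?" on its own...
SELECT currency, rate
FROM currency_rate
WHERE DATE '2022-04-25' >= from_date
  AND (to_date IS NULL OR DATE '2022-04-25' < to_date);

-- ...but inserting a new rate must first close the open row.
UPDATE currency_rate
   SET to_date = DATE '2022-04-25'
 WHERE currency = 'EUR' AND to_date IS NULL;
INSERT INTO currency_rate (currency, from_date, to_date, rate)
VALUES ('EUR', DATE '2022-04-25', NULL, 1.08);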
Let's say I have a website that sells widgets. I would like to do something similar to a tag cloud tracking best sellers. However, because I am constantly acquiring and selling new widgets, I would like the sales to decay on a weekly time scale.
I'm having problems puzzling out how to store and manipulate this data so that it decays properly over time, so that something that was an ultra-hot item two months ago but has since tapered off doesn't rank above the current best sellers. What would be the logic and database design for this?
Part 1: You have to have tables storing the data that you want to report on. Date/time sold is obviously key. If you need to work in decay factors, that raises the question: for how long is the data good and/or relevant? At what point in time has the "value" of the data decayed so much that you no longer care about it? When this point is reached for any given entry in the database, what do you do: keep it there but ensure it gets factored out of all subsequent computations, or archive it, i.e. copy it to a "history" table and delete it from your main "sales" table? This is relevant, as it has to be factored into your decay formula (as well as your capacity planning, annual reporting requirements, and who knows what all else).
Part 2: How much thought has been given to the decay formula that you want to use? There's no end of detail you can work into this. Options and factors to wade through include but are not limited to:
Simple age-based. Everything after the cutoff date counts as 1; everything before counts as 0. Sum and you're done.
What's the cutoff date? Precisely 14 days ago, to the minute? Midnight as of two Saturdays ago from (now)?
Does the cutoff date depend on the item that was sold? If some items are hot but some are not, does that affect things? What if you want to emphasize some things (the expensive/hard to sell ones) over others (the fluff you'd sell anyway)?
Simple age-based decays are trivial, but can be insufficient. Time to go nuclear.
Perhaps you want some kind of half-life, Dr. Freeman?
Everything sold is "worth" X, where the value of X is either always the same or varies by the item sold. And the value of X can decay over time.
Perhaps the value of X decreases by one-half every week. Or every day. Or every month. Or (again) it may vary depending on the item.
If you do half-lives, the value of X may never reach zero, and you're stuck tracking it forever (which is why I wrote "Part 1" first). At some point, you probably need some kind of cutoff, some point after which you just don't care. X has decreased to one-tenth the initial value? Three months have passed? Either/or, but the "range" depends on the inherent value of the item?
My real point here is that how you calculate your decay rate is far more important than how you store it in the database. So long as the data the formula needs for its calculations is there, you should be good. And if you only need the last month's data to do this, you should perhaps move everything older to some kind of archive table.
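For concreteness, a hedged sketch of the one-week half-life in SQL (a sales(item_id, sold_at) table is assumed; date arithmetic varies by database):

SELECT item_id,
       -- each sale is worth 0.5 ^ (age in weeks): full value today, half after a week
       SUM(POWER(0.5, (CURRENT_DATE - sold_at) / 7.0)) AS decayed_score
FROM sales
WHERE sold_at >= CURRENT_DATE - 90   -- cutoff: ignore rows that have decayed to ~0
GROUP BY item_id
ORDER BY decayed_score DESC;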
You could just count the sales for the last month/week/whatever and sort your items according to that.
If you want, you can always add the total amount of items sold into your formula.
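A minimal sketch of that simpler approach (assuming the same kind of sales table as above):

SELECT item_id, COUNT(*) AS sales_last_30_days
FROM sales
WHERE sold_at >= CURRENT_DATE - 30
GROUP BY item_id
ORDER BY sales_last_30_days DESC;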
You might have a table which contains the definitions of the scoring criteria (most sales, most this, most that, etc.); then, for a given period, store in another table the points attributed for each of the criteria defined in the criteria table. A history table can then store the score of each item for a given period or promotion; call it whatever you want.
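A hedged sketch of those three tables (all names invented):

CREATE TABLE criterion (
    criterion_id INT PRIMARY KEY,
    name         VARCHAR(50)          -- e.g. 'most sales', 'most revenue'
);

CREATE TABLE criterion_points (      -- points attributed per item per period
    criterion_id INT REFERENCES criterion (criterion_id),
    item_id      INT,
    period_start DATE,
    points       INT,
    PRIMARY KEY (criterion_id, item_id, period_start)
);

CREATE TABLE score_history (         -- consolidated score per item per period
    item_id      INT,
    period_start DATE,
    total_score  INT,
    PRIMARY KEY (item_id, period_start)
);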
Does it help a little?