I am in the process of building OLAP cubes for data mining purposes.
The domain is instruments that run tests; each test has a status ID of 1, 2, or 3, meaning OK, warning, and error respectively. I have already deployed the cube and it is working perfectly.
My measure is the sum of my tests. I have a time table associated with the test table, recording when each test was run.
I have four dimensions:
Instrument: holds information about each instrument.
Test: contains all the tests, with information about when each one ran.
Status: contains the three statuses mentioned above.
Time: sorts the tests in time.
My question concerns another status called 'NotRun'. Unlike the other statuses, NotRun tests are not saved in the database; they are calculated with a query.
NotRun is calculated by selecting all instruments from the instrument table and then excluding those instruments that appear in the test table within a given time period.
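Roughly, the query I use today looks like this (a sketch; the table and column names are simplified placeholders):

SELECT i.InstrumentId
FROM Instrument AS i
WHERE i.InstrumentId NOT IN
      (SELECT t.InstrumentId
       FROM Test AS t
       WHERE t.RunDate BETWEEN @PeriodStart AND @PeriodEnd);  -- the hard-coded period I want to eliminate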
I want to use MDX to do the calculation mentioned above, but instead of supplying a fixed time period I want the cube to handle that for me dynamically.
That is, I don't want to pick a specific year like this:
where ([Date].[Calendar Year].&[2002])
Instead I would like to take care of that dynamically with my time dimension.
I am really stuck. Any idea how we can achieve that in Business Intelligence Development Studio 2008?
All the best,
Hassan.
To answer the question "how do I get MDX to pick a date member by itself", see my answer on the linked question.
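In short, the usual trick is to build the member's unique name from the current date and convert it with StrToMember. A minimal sketch, assuming your year keys look like &[2002] (the cube and measure names here are hypothetical):

SELECT
  [Measures].[Test Count] ON COLUMNS
FROM [MyCube]
WHERE StrToMember("[Date].[Calendar Year].&[" + Format(Now(), "yyyy") + "]")

Format and Now come from the VBA functions that SSAS exposes to MDX, so the WHERE clause resolves to the current calendar year each time the query runs.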
Or did you want Business Intelligence Development Studio to pop up a box and ask for the dates each time you run the report?
I hope I can explain the problem I'm having trouble with.
I have to write a stepwise methodology, using pseudocode or a SQL query, to auto-generate a list of products/items with low stock or near expiry from the inventory database. The list must be updated at 12 a.m. daily.
I tried this
CREATE EVENT IF NOT EXISTS update_table
ON SCHEDULE EVERY 1 DAY STARTS '2022-05-22 00:00:00'
ON COMPLETION PRESERVE ENABLE
DO
  SELECT inventory.products
  FROM inventory
  WHERE inventory.stocks < inventory.required_stocks;
Your stated requirement is to run some sort of report very soon after the beginning of each calendar day.
The next question you must answer is this: what will you do with that report? Will you simply drop it into a "low_stock" table someplace in your database? Will you format it into an email message and send it to your purchasing department? It will be difficult to write "pseudocode" for your requirement without first analyzing the overall business process you intend to enhance.
Various RDBMS systems have ways of doing scheduled things at particular times of day. You've shown the EVENT setup provided by MariaDB / MySQL. SQL Server has its "Jobs" system. PostgreSQL has the pg_cron extension.
The thing is, you can't just do SELECT operations from within these scheduled database actions: the result sets have nowhere to go from that context. You can do CREATE TABLE midnight_run AS SELECT whatever ... to place the results in a table. But then the results are just sitting in another table.
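For example, a minimal sketch of the "drop it into a low_stock table" approach (MariaDB / MySQL; the low_stock table and its columns are hypothetical, and the event scheduler must be enabled):

CREATE TABLE IF NOT EXISTS low_stock (
  run_at  DATETIME,
  product VARCHAR(100)
);

CREATE EVENT IF NOT EXISTS update_low_stock
ON SCHEDULE EVERY 1 DAY STARTS '2022-05-22 00:05:00'  -- a few minutes past midnight; see the tip below
ON COMPLETION PRESERVE ENABLE
DO
  INSERT INTO low_stock (run_at, product)
  SELECT NOW(), inventory.products
  FROM inventory
  WHERE inventory.stocks < inventory.required_stocks;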
If you want to get the results out of the DBMS, you'll need a UNIXish cron job or a Windowsish scheduled task running an appropriate application at midnight each day.
Pro tip: do your best to avoid scheduling stuff for precisely midnight. Many things run then. If you wait until a couple of minutes after the hour, your code is less likely to contend with other midnight code.
I am trying to calculate the hours for each task list and generate only one line with all of the information given. I have imported an Excel file into Access and generated the following required information.
Task List            Hours   Progress   Time Logged   Billable Hours
General Task            10          0             0                0
General Task             8          0             8               20
General Task             4        100            10                0
General Task             0        100            20                0
Project Initiation      22         25            24                0
Project Initiation      12         25            12                0
Project Initiation      16         25            16                0
Project Initiation       4         25             8                0
Requirements            16        100             0                0
Requirements            14         50            44               14
Requirements             5         75            32               12
Requirements             0          0             8                0
Design                 240          0             0                0
Design                 120          0             0                0
Design                 120          0             0                0
Prototype               24          0             0                0
Prototype               42          0             0                0
Prototype               32          0             0                0
Prototype               16          0             0                0
Prototype               12          0             0                0
Testing                 16          0             0                0
Testing                 24          0             0                0
Testing                  8          0             0                0
Testing                  0          0             0                0
Testing                  0          0             0                0
And I would like the final output to look like this:
Each task list is combined into a single row, with Hours, Time Logged, and Billable Hours summed. Progress is summed and then divided by the number of entries, i.e. averaged (e.g. the Requirements progress is (100+50+75+0)/4 = 56.25).
Task List            Hours   Progress   Time Logged   Billable Hours
General Task            22         50            38               20
Project Initiation      54         25            60                0
Requirements            35      56.25            84               26
Design                 480          0             0                0
Prototype              126          0             0                0
Testing                 48          0             0                0
I tried looking at Concatenating multiple rows into single line in MS Access
and working off some of the code there, but was unable to make it work... This is where I started, but I was getting the error "the SELECT statement includes a reserved word or argument...".
Concatenating multiple rows is not appropriate for this requirement. Use an aggregate query:
SELECT [Task List], Sum([Hours]) AS SumHrs, Avg([Progress]) AS AvgProg,
       Sum([Time Logged]) AS SumTime, Sum([Billable Hours]) AS SumBill
FROM [YourTable]
GROUP BY [Task List];
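Note that TABLE is itself a reserved word in Access SQL, which is one way to trigger the error you saw; replace the placeholder [YourTable] with your actual table name, and keep the square brackets around any field name that contains a space or might collide with a reserved word.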
You could instead build a report with the raw table as its source and use the report's Sorting & Grouping features with aggregate calculations. A report allows display of the detail records as well as the summary data.
I'm trying to add a new column to my SSAS cube. The column is a date field that links to my DimDate table (a date dimension). This date represents the project completion date.
However, not all of the projects have a project completion date, because old projects were never assigned this value. This is expected, and we don't want to put bogus dates into the field just to get SSAS to work.
When processing the cube, it crashes with:
Errors in the OLAP storage engine: The attribute key cannot be found when
processing: Table: 'dbo_FactMyTable', Column: 'MyDate_id', Value: '0'.
The attribute is 'Date Id'.
I can't disable "missing values" for the entire project because in most cases, this really is an error. How can I disable missing values for this dimension?
Or is there a better way to handle missing dates/values like this?
A small correction: based on your question, you need to change processing error handling for the specific measure group, not the dimension. You can do it for all dimensions linked to a measure group, but not for one specific dimension.
You can process the individual measure group for Table: 'dbo_FactMyTable' first, with the necessary missing-value settings, and then process the rest of your cube with default settings.
The main problem here is how to process the rest of the cube. You might have a sophisticated system that generates processing XMLA scripts dynamically based on knowledge of data updates (I do it with SSIS); in that case you would not be asking this question. Suppose your environment is simpler: you update the cube and would like to process it as a whole. In that scenario I would suggest the following workflow:
Process Default all dimensions (does the initial processing, or handles structure changes)
Process Update all dimensions
Process the cube with Unprocess, invalidating it
Process your special measure group
Process the cube with Process Default
This will first update the dimensions and then clear the processing status flag from all measure groups in the cube. After that you process your special measure group with the relaxed error settings, which sets the processing status for that MG. Then, during Process Default on the cube, only unprocessed MGs are covered, which excludes your special MG from the processing scope.
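As a rough illustration of the special measure group step, a minimal XMLA sketch (the database, cube, and measure group IDs are hypothetical; the batch-level ErrorConfiguration relaxes key-not-found handling for this run only):

<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Process>
    <Object>
      <DatabaseID>MyOlapDatabase</DatabaseID>
      <CubeID>MyCube</CubeID>
      <MeasureGroupID>FactMyTable</MeasureGroupID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
  <ErrorConfiguration>
    <KeyNotFound>IgnoreError</KeyNotFound>
  </ErrorConfiguration>
</Batch>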
The answer is a bit complicated, but this article did a great job of explaining it, including screen shots for the SSAS-challenged like me.
http://msbusinessintelligence.blogspot.com/2015/06/handling-null-dates-in-sql-server.html?m=1
I have a .RDL report which I designed in BIDS and have deployed to my report server. The report asks for three parameters before rendering: Year, Month, and Customer ID. The report works great and does exactly what it is supposed to.
While I used to run each report individually because there were 2-3 customers, there are now 30+ customers who receive the report, so I wanted to switch to a more automated fulfillment method. After doing some research, it appears that using Report Manager to create a "Data Driven Subscription" (DDS) with the "Windows File Share" option gives me the capabilities I need.
As part of creating the DDS, I created a table called [Subscription] which is a table containing one row for each customer receiving the report and has the following columns:
Year
Month
CustomerID
FileName
FileLocation
Overwrite
Format
...so, using the DDS Wizard in Report Manager, I was able to successfully set up a Data Driven Subscription (linked to various columns in the [Subscription] table) which creates a new report for each customer in the [Subscription] table, saves it (overwriting if necessary) as a PDF in a location of my choosing (specified in [Subscription].[FileLocation], the FileLocation column of my table for each row), and runs every minute (I plan on changing the frequency to once a week eventually).
This works flawlessly, giving me a new set of 30 reports in the directory of my choosing, with each report having a name I assigned in the FileName column of my table. Exactly what I was looking for.
HERE'S THE PROBLEM: when I update the FileLocation or FileName (or anything, really) in the [Subscription] table, it doesn't pick up the changes right away. Sometimes it doesn't pick them up at all. For example, I updated the [FileName] column for one customer from Report_711622 to SpecialReport_711622, so that the output file for that customer should be named SpecialReport_711622 while all of the other reports should still be called Report_XXXXX (no Special prefix). But the file name of the report for customer 711622 remains the same!
It's almost like the job only sees what it needs to do once a day, and does not go back and reference the [Subscription] table until I leave for the night; when I come back in the morning it has picked up the change.
Since I am about to scale this process out to a larger customer base using a different report, I need to be able to make edits to the [Subscription] table and have them picked up by the Data Driven Subscription immediately (or at least at a fixed interval that I can adjust, so that I know with certainty when a change will be picked up).
Does anyone know what's causing the lag? How do I change things so that updates to the [Subscription] table get picked up regularly? I'm also having issues creating new DDSs on other reports (following the exact process outlined above): I've created the subscriptions to run every minute, and it says they are running, the number of outputs matches the number of customers, and there are 0 errors, but there are no files in the drive I specified (or anywhere else I've looked, for that matter).
Any help would be greatly appreciated!
I think the answer lies in the mechanism SSRS uses. There are a few places where "lag" can occur.
The subscription is in fact a SQL Agent job which creates a record in the Event table. This table is a queue that SSRS checks in order to run scheduled tasks.
There is a small amount of time between the moment the subscription creates the Event record and the moment SSRS reads it and starts creating the dataset for your DDS. The creation of the DDS dataset takes some time, too. During this time the subscription will be in the Pending state. If you change anything in the data during this window, the subscription will still use the old data as report parameters, so you will not notice your change until the next scheduled run.
Which brings me to the following: if a subscription is still running when the next schedule kicks in (chances are it is, because yours runs every minute), the engine will not execute it again, but will wait for the next subscription schedule, and so on. That is another possible source of lag, and a cause of missing reports for a given scheduled minute. The subscription processes reports sequentially, one row from your DDS recordset at a time. Again, this takes time. You can see it in the subscription window when it says: # of # processed.
I suggest you look at the Event table in the ReportServer database during an execution. The ExecutionLog views (there are 3) may also be interesting. A scheduled run shows up with RequestType = 1 and generates one record for each report, so you can see the exact timing and parameters of each report run by the subscription. You may be able to extract the data you need to resolve your other issues.
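A minimal sketch of that kind of digging, assuming a default ReportServer catalog (dbo.Event and the ExecutionLog3 view are the standard names, but verify against your instance; note the view exposes RequestType as text, while the underlying storage uses the numeric codes mentioned above):

USE ReportServer;

-- Pending subscription events (the queue SSRS polls):
SELECT * FROM dbo.Event;

-- Recent subscription-driven executions, with timing and parameters:
SELECT TOP 50 ItemPath, TimeStart, TimeEnd, Parameters, Status
FROM dbo.ExecutionLog3
WHERE RequestType = 'Subscription'
ORDER BY TimeStart DESC;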
EDIT: Here is a more elaborate guide to DDS data and events
http://blogs.msdn.com/b/deanka/archive/2009/01/13/diagnosing-and-troubleshooting-subscriptions.aspx
http://blogs.msdn.com/b/deanka/archive/2010/02/16/troubleshooting-subscriptions-part-ii-using-the-report-services-trace-log-file.aspx
Could this "Double-Hop" problem be the source of my issues? I'm so stuck on this one!
The Double-Hop Problem - MSDN Knowledgecast
I am very new to BO Web Intelligence.
I am running a very simple query: retrieve Sales Amount for dates between 2012 and 2013. For just this simple query, when I run it, BO crashes or gets stuck on the "please wait" window. Why is this happening? If I select only 3 or 4 days, say between Jan 1st 2012 and Jan 4th 2012, it runs fine. Is there anything on my end that I am doing wrong? This is in production.
I also wanted to point out that I have tried limiting my data set to a specific region, etc.
There are too many unknowns here to assist successfully. I presume this is Webi 3.1.x or 4.1.x, that Sales Amount is a measure, and that your query includes a date which does not include a time component. If you only have these two objects in your results pane and have the region in your conditions, then pulling something like [Date] and Sum([Sales Amt]) should not take long to execute. As the previous poster suggests, try to execute the generated SQL in a tool such as SQL*Developer or Management Studio, whichever matches where the source database / OLAP is stored.
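For instance, the generated SQL usually has roughly this shape (all object names here are illustrative, not your universe's real ones):

SELECT d.calendar_date,
       SUM(f.sales_amount) AS sales_amt
FROM   fact_sales f
JOIN   dim_date   d ON d.date_key = f.date_key
WHERE  d.calendar_date BETWEEN '2012-01-01' AND '2013-12-31'
GROUP  BY d.calendar_date;

If that runs quickly in the native tool but the report still hangs, the problem is more likely on the BO server side than in the database.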
Even though you've limited the data to a "region", this may still be too much data. Try selecting a smaller result set of 100 rows, or changing the option for "retrieve duplicate rows" on the query panel.
If possible, post the query from your report using generic object names.