I have a very large Tableau workbook, which takes a very long time to load. I have determined that this is due to some very large calculated fields which I want to replace to improve the load time.
My plan is to have this calculated field replicated at the database level; however, I am unsure what the best approach is and was hoping someone could help me.
The calculated field is essentially a large mapping table, which rolls up an area (say area 2) to a higher-level area named area 1. It is several lines of code and unfortunately I am not able to share it due to work.
e.g.
IF [area_2] = "abc" THEN [area_2]
ELSEIF [area_2] = "def" THEN "xyz"
ELSEIF [area_2] = "ghi" THEN "mno"
....
....
END
My initial idea was to create a view on top of the database table these attributes come from, using an IF statement. However, I have come to understand that views cannot be created with PL/SQL IF statements.
I have tried to begin learning the PL/SQL elements (i.e. procedures, packages, functions), but I am finding it hard to determine which is the correct option to go for.
Any guidance will be greatly appreciated.
Thanks
I think you have three ways to solve your problem.
If a record always has the same value after the calculation:
Store the calculated value: add columns to your table to store the results of the calculation, and create a TRIGGER on your table that calculates the values on every insert.
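For illustration, a minimal sketch of that trigger, assuming a source table named AREAS to which an AREA_1 column has been added (all object names here are placeholders, and the CASE branches stand in for your real mapping):

CREATE OR REPLACE TRIGGER areas_map_area1_trg
    BEFORE INSERT OR UPDATE OF area_2 ON areas
    FOR EACH ROW
BEGIN
    -- Derive the rolled-up area once at write time,
    -- so Tableau can read it back as a plain column.
    :NEW.area_1 := CASE
                       WHEN :NEW.area_2 = 'abc' THEN :NEW.area_2
                       WHEN :NEW.area_2 = 'def' THEN 'xyz'
                       WHEN :NEW.area_2 = 'ghi' THEN 'mno'
                   END;
END;
/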
Create a materialized view: if your table doesn't have a high frequency of inserts, you can create a MATERIALIZED VIEW, which stores the view's result.
Materialized View Docs: https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_6002.htm
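A hedged example with the same placeholder names; REFRESH FAST ON COMMIT (which requires a materialized view log) would keep it current after every insert, while the ON DEMAND variant below is refreshed as part of your load:

CREATE MATERIALIZED VIEW mv_area_mapping
    REFRESH COMPLETE ON DEMAND
AS
SELECT a.*,
       CASE
           WHEN a.area_2 = 'abc' THEN a.area_2
           WHEN a.area_2 = 'def' THEN 'xyz'
           WHEN a.area_2 = 'ghi' THEN 'mno'
       END AS area_1
FROM areas a;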
If a record's value can change after the calculation:
Create a view using CASE WHEN or DECODE if the calculations are as simple as your example; if they're not, use a FUNCTION that returns the mapped value, passing in the value and whatever parameters are needed to calculate it.
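Sketches of both variants, using the same placeholder names as above; the function body is where the PL/SQL IF you were looking for can live:

-- Simple mappings: a plain view with CASE
CREATE OR REPLACE VIEW v_area_mapping AS
SELECT a.*,
       CASE
           WHEN a.area_2 = 'abc' THEN a.area_2
           WHEN a.area_2 = 'def' THEN 'xyz'
           WHEN a.area_2 = 'ghi' THEN 'mno'
       END AS area_1
FROM areas a;

-- Complex mappings: a deterministic function instead
CREATE OR REPLACE FUNCTION map_area_1 (p_area_2 IN VARCHAR2)
    RETURN VARCHAR2 DETERMINISTIC
IS
BEGIN
    IF p_area_2 = 'abc' THEN
        RETURN p_area_2;
    ELSIF p_area_2 = 'def' THEN
        RETURN 'xyz';
    ELSIF p_area_2 = 'ghi' THEN
        RETURN 'mno';
    ELSE
        RETURN NULL;
    END IF;
END map_area_1;
/

-- ...exposed through the view as an alternative to the CASE version:
CREATE OR REPLACE VIEW v_area_mapping AS
SELECT a.*, map_area_1(a.area_2) AS area_1
FROM areas a;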
I have an Azure SQL database, and the records inside table Spiderfood_RITMData in that database include 13 different fields. Lots of stuff. I have confirmed in SQL-SMS that the records have data in each field.
There are way more items in the database than PowerApps can see using LOOKUP (1600-9000 records or more). However, I know FOR A FACT that there is only ONE record that has any given value in the NUMBER column. It's not a primary key, but it is unique in the table.
In PowerApps, I am trying to pull that field so that I can eventually parse out the individual items.
So, the commands I'm trying are:
ClearCollect(MLE_test1, Filter('Spiderfood_RITMData', "RITM2170467" in Number));
ClearCollect(MLE_test2, Search('Spiderfood_RITMData',"RITM2170467", "Number"));
However, the Collection results for MLE_test1 and MLE_test2 both are empty EXCEPT for the value of NUMBER. Say what?!
I'm trying to use the examples posted on https://learn.microsoft.com/en-us/powerapps/maker/canvas-apps/functions/function-filter-lookup but I am honestly getting baffled by this.
How should I be formatting this call such that I can pull the whole record?
Big picture explanation: I need to do a lot of data LOOKUPS into my table Spiderfood_RITMData, but it has way more than 2000 rows, and PowerApps will not perform the LookUp correctly. So my presumably smart idea is to create a MUCH SMALLER "version" of Spiderfood_RITMData as a local collection, using a more delegable function (such as FILTER or IN). If I filter to all records containing the values of NUMBER, then I go from, say, a 10,000-record SQL table to a 10-record collection, and I can do LOOKUPS against that collection for the rest of the function (uh, I think -- I'm still trying to experiment accordingly). Please let me know if this is crazy or not.
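For what it's worth, a sketch of that two-step pattern (colRITMs and varRecord are made-up names, and whether a given predicate delegates depends on the SQL connector, so watch the delegation warnings in the editor):

// Pull a small, delegable slice of the SQL table into a local collection
ClearCollect(
    colRITMs,
    Filter('Spiderfood_RITMData', Number = "RITM2170467")
);
// Then run as many non-delegable LookUps as needed against the collection
Set(varRecord, LookUp(colRITMs, Number = "RITM2170467"));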
LookUp is just used to get one record; instead, try this:
ClearCollect(MLE_test1, Filter('Spiderfood_RITMData', "RITM2170467" = Number));
This gets a collection with all the items where Number is equal to "RITM2170467".
Note that collections are limited to 2000 records each.
I had the same issue. Go to App settings. Under Upcoming Features, make sure Explicit column selection is turned off. Hope this does it for you.
I need some help, and I know I am not the only one dealing with this issue, but I am wondering if you might have some ideas on how to handle the situation of comparing two rows of data and filling out start and end dates.
To give you some context, we have a huge hierarchy (approx 8,000 rows and about 12 columns wide) that is updated each year. Sometimes the values change and sometimes they don’t. When the values don’t change, then I don’t need to adjust the dates. When the values do change and a new row is added, I need to change the data.
I have attached some fake data to try and illustrate my data. I am building this in MS Access, so I think this is more of a DBA type question that is going to be manipulated via a recordset type method.
In my example I have two tables, an Old Table and a New Table. In each table there is a routing code field that serves as my join field and the primary key for the table.
The Old Table represents the existing data - tblMain. The New Table represents the data to be appended - tblTemp.
To append the data, I have an append query set up in Access, sketched below. I perform a left join between the Old and New tables, joining on every field, and append the rows that are null in the Old table. That's fine and that is not where my issue is.
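That append query presumably looks something like this sketch (the Level1..Level5 field names are placeholders for the real hierarchy columns):

INSERT INTO tblMain (RoutingCode, Level1, Level2, Level3, Level4, Level5)
SELECT t.RoutingCode, t.Level1, t.Level2, t.Level3, t.Level4, t.Level5
FROM tblTemp AS t
LEFT JOIN tblMain AS m
    ON t.RoutingCode = m.RoutingCode
   AND t.Level1 = m.Level1
   AND t.Level2 = m.Level2
   AND t.Level3 = m.Level3
   AND t.Level4 = m.Level4
   AND t.Level5 = m.Level5
WHERE m.RoutingCode IS NULL;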
What is causing me issues is how to fill out the start and end dates.
So as you can see from my tables, we are running a zoo. Let’s just say for the sake of the argument, our zoo started off pretty simple and has become more sophisticated. We now want our hierarchy to expand out and become a bit more detailed as we are now capturing the type of animal (Level 4) and the native location (Level 5).
As you can see when comparing one table to another the routing codes are the same, so the append query has to have a join on each field. When you do this, you return the Result Table which is essentially the Old and New tables stacked on top of each other. You might think about a Union query but this is going to give me duplicates and I don’t want that.
If you notice, in the Result Table there is a Start and an End Date. Let's just say I get the start and end dates via a message box that pops up upon the import of the data, and they are held in variables. I think there are dates in my real data, but I am still trying to verify this.
So how do I compare (pseudo code for the logic needed)?
• For each routing code:
    Compare Levels 1-5
    If the routing code is the same but Levels 1-5 are not the same:
        fill out the end date of the old record
        fill out the start date of the new record
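In Access SQL, that pseudocode might translate to something like the following sketch, run after the append. It assumes tblMain can hold both the old and new version of a routing code; pEndDate stands in for the date captured from the message box, and Nz() guards against the Nulls in the new Level 4/5 columns. The start date on the freshly appended rows could be set the same way, or supplied directly as a column in the append query.

UPDATE tblMain INNER JOIN tblTemp
    ON tblMain.RoutingCode = tblTemp.RoutingCode
SET tblMain.EndDate = [pEndDate]
WHERE tblMain.EndDate IS NULL
  AND (Nz(tblMain.Level1,'') <> Nz(tblTemp.Level1,'')
    OR Nz(tblMain.Level2,'') <> Nz(tblTemp.Level2,'')
    OR Nz(tblMain.Level3,'') <> Nz(tblTemp.Level3,'')
    OR Nz(tblMain.Level4,'') <> Nz(tblTemp.Level4,'')
    OR Nz(tblMain.Level5,'') <> Nz(tblTemp.Level5,''));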
This idea of comparing two records and filling out a date is quite prevalent in my organization, but I haven't found a way of creating logic that consistently works, so any help or suggestions would be appreciated.
(Old Table, New Table, and Result Table: the attached example data is not reproduced here.)
I have a Table input step and I need to add a calculation to it, i.e. add a new column. I have tried:
to do the calculation and then feed the result back. Obviously, it appended the new data onto the old data.
to do the calculation and then feed it back but truncate the table. As the process got stuck at some point, I assume I was truncating the table while the data was still being extracted from it.
to use a Stream lookup and then feed back. Of course, it also stacked the data on top of the existing data.
to use a Stream lookup where I pull the data from the Table input, do the calculation, and, at the same time, pull the data from the same table and do a lookup based on the unique combination of date and id, then use the Update step.
As it has been running for a while, I am positive this is not the right option, but I have exhausted my options.
It seems that you need to update the table your data came from with this new field. Use the Update step with fields A and B as keys.
Actually, once you connect the hop, the result of the first step is automatically carried forward to the next step. So let's say you have a Table input step and then you add a Calculator step where you create the third column: after writing the logic, right-click on the Calculator step and click Preview, and you will get the result with all three columns.
I'd say your issue is not ONLY in the Pentaho implementation; there are some things you can do before reaching data staging in Pentaho.
'Workin Hard' is correct when he says you shouldn't use the same table: instead, leave the input untouched and just upload / insert the new values into a new table. It doesn't have to be a new table EVERY TIME, but instead of truncating the original, you truncate the staging table (the output table).
How many 'new columns' will you need? Will every iteration of this run create a new column in the output? Or will you always have a 'C' column which is always A+B or some other calculation? I'm sorry, but this isn't clear. If it is the latter, you don't need Pentaho for these transformations: updating the 'C' column with a math expression or function over A and B can be done directly in most relational DBMSs with a simple UPDATE clause. Yes, it can be done in Pentaho, but you're adding a lot of overhead and processing time.
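A hedged illustration of that last point, assuming a staging table with numeric columns a and b (the exact ALTER syntax varies slightly between DBMSs):

-- Add the derived column once
ALTER TABLE staging_table ADD COLUMN c NUMERIC;
-- Populate it in place, with no ETL round trip
UPDATE staging_table SET c = a + b;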
tl;dr: I cannot update records from a query because of aggregate functions. What workarounds do you suggest?
I have a table containing decision criteria to which a user can assign a relative weight. I calculate the absolute weight in an SQL query using an aggregate function (as described here Divide the value of each row by the SUM of this column).
qryDecisionCriteria
name        relative_weight  absolute_weight (calculated)
price       2                50 %
quality     1                25 %
experience  1                25 %
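Roughly, the query looks like this sketch (tblDecisionCriteria stands in for the real base table name):

SELECT [name],
       relative_weight,
       relative_weight / (SELECT Sum(relative_weight)
                          FROM tblDecisionCriteria) AS absolute_weight
FROM tblDecisionCriteria;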
I would like to present the query result in a form, where the user can update the relative weights, and then sees the absolute_weights.
However, the query results are not updatable, because the query involves an aggregate function.
What alternative methods or workarounds could I use, so that a user can edit relative_weights and view absolute_weights as a kind of visual feedback?
I read about temporary tables here http://www.fmsinc.com/MicrosoftAccess/query/non-updateable/index.html but I'm not sure how to implement this.
Maybe I could also create an additional "edit form", based on a simple query, that is automatically invoked when the user selects a record in the qryDecisionCriteria data?
Or maybe just display data from two queries (one updatable, one with the calculated field) next to each other in the form?
Which options would you recommend and why?
Make the Record Source for the form the updatable base query. In the text box which shows the calculated absolute weight, set the control source to
=Forms!<Form Name>!relative_weight/DSum("relative_weight","<base table name>")
You'll need to be sure that you do two things with this:
When you drag fields onto a form in Access, it makes the name of the control the same as the control-source column. This is really annoying and can cause a lot of headaches. Rename your control to something like txtColumnName; that way Forms!<Form Name>!relative_weight is guaranteed to reference the field and not the textbox.
In the AfterUpdate event for the relative_weight textbox, add an event handler in which the following code is run:
txtabsolute_weight.Requery
This will make sure the formula is recalculated whenever someone changes a weight. Otherwise they need to hit F5.
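A minimal sketch of that handler, assuming the weight textbox has been renamed txtrelative_weight per the advice above:

Private Sub txtrelative_weight_AfterUpdate()
    ' Re-evaluate the DSum-based control source so the percentage refreshes
    Me.txtabsolute_weight.Requery
End Sub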
This is with respect to searching for a text value in a table.
Table name: Details
Columns: Fname, Mname, Lname, NName
This table contains nearly one lakh records.
We are using Oracle Forms for the querying.
The user inputs one name; the form searches the table for that name, and further actions proceed based on which column (Fname/Mname/Lname/NName) the name is present in.
The search is taking a long time since we have a huge amount of data in the table.
I tried function-based indexes on the table, but that did not work; it also takes more time.
Later I tried something like this:
I concatenated all the names into one name and put it into a cursor.
Using the cursor output I tried INSTR, but it hangs.
I also tried building a dynamic cursor, but it did not work.
My database is Oracle.
Can you help me find an effective solution, or please help me if I have missed something?
Thanks
First of all, 1 lakh (100,000) records is not in itself a large table.
The problem I can see is the query appears to be doing an OR against the Fname/Mname/Lname/NName columns.
This means the query will be doing at least one full-table scan to obtain the results.
You may wish to use debug to obtain the query it is firing against the database and attempt to tune this at the SQL prompt using auto trace.
You may need to clarify if the search is also doing something like a LIKE against these columns rather than an EQUALS. As a LIKE will impact the query further and affect indexes.
Certainly the use of INSTR will disable indexes on your searched column.
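If the search really is a plain equality, one option worth testing is an ordinary B-tree index on each searched column, so the optimizer can expand the OR into index-driven branches. A sketch, with made-up index names:

CREATE INDEX details_fname_ix ON details (fname);
CREATE INDEX details_mname_ix ON details (mname);
CREATE INDEX details_lname_ix ON details (lname);
CREATE INDEX details_nname_ix ON details (nname);

-- With these in place, Oracle can use OR-expansion / concatenation
SELECT *
FROM details
WHERE fname = :search_name
   OR mname = :search_name
   OR lname = :search_name
   OR nname = :search_name;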
It is not clear what your block is based on, i.e. a table, view, query, or procedure.
You may want to try using the FIRST_ROWS hint in the block properties of the form.