Materialized View: How to automatically refresh it upon table data changes?

Is there a way in Oracle Materialized Views so that the view automatically refreshes itself when there are changes on the tables used in the materialized view? What Refresh Mode and Refresh Method should I use? What options should I choose in SQL Developer?
Thank you in advance

Yes, you can define a Materialized View with ON COMMIT, e.g.:
CREATE MATERIALIZED VIEW sales_mv
BUILD IMMEDIATE
REFRESH FAST ON COMMIT
AS SELECT t.calendar_year, p.prod_id ... FROM ...
In this case the MV is refreshed after every commit, provided the transaction was done on a master table, of course.
Since the refresh is done after each commit, it is strongly recommended to use FAST refresh rather than COMPLETE, which would take too long.
There are several restrictions and preconditions for using FAST REFRESH; check the Oracle documentation (CREATE MATERIALIZED VIEW, FAST Clause) for details.
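One of those preconditions is a materialized view log on each master table. A minimal sketch, assuming a hypothetical sales master table with prod_id and amount_sold columns:
-- Fast refresh needs a MV log on every master table (names here are hypothetical).
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, SEQUENCE (prod_id, amount_sold)
  INCLUDING NEW VALUES;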

I don't think there's any way to 'automatically' replicate the changes to the m.view right after they are made. But there are ways to use FAST (incremental) refresh on demand; you'd only have to schedule a job for the m.view or an m.view group to do the refresh. You can also use an m.view log to keep track of all the DML and then have it propagated to the m.view with a fast refresh on a remote database through a DB link.
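For instance, a minimal sketch of such a scheduled fast refresh with DBMS_SCHEDULER (the job name and MV name are hypothetical):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REFRESH_SALES_MV_JOB',
    job_type        => 'PLSQL_BLOCK',
    -- 'F' requests a fast (incremental) refresh of the MV
    job_action      => 'BEGIN DBMS_MVIEW.REFRESH(''SALES_MV'', ''F''); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=5',  -- every 5 minutes
    enabled         => TRUE);
END;
/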
If you need the changes to be replicated as soon as they are made, then I recommend using GoldenGate or Streams (if you don't want to license GG). Just beware that Oracle discontinued support for Streams in favor of GoldenGate, so if you have any issues, you're on your own. But anyway, it's a pretty solid replication tool, once you get the hang of it.

Oracle: Use of Materialized view for avoiding Socket Read Timeout

We have a spring application. We generally have to execute several SQL queries on the view exposed to us by the Client.
In one scenario our queries work fine but the count(*) over the same queries creates problems. It returns
org.springframework.dao.RecoverableDataAccessException - StatementCallback;
IO Error: Socket read timed out; nested exception is java.sql.SQLRecoverableException: IO Error: Socket read timed out]
We asked the client to increase the oracle.jdbc.ReadTimeout property.
He instead has offered to expose a materialized view.
Can a materialized view help in situations like these (where count queries lead to timeouts)?
How can materialized views be leveraged to increase the performance of queries?
A materialized view is a great solution to your problem. Materialized views store the results of queries in a table, and can significantly improve performance. Your client seems to be doing you a huge favor, as they will be responsible for maintaining the objects that support the query.
The only potential downside depends on how they implement the materialized view. If they create a fast-refresh materialized view, it will automatically store the correct result after every change to the data. But there are many limitations to fast-refresh materialized views, and most likely your client will provide a complete refresh materialized view, which must have a schedule. If they provide a complete refresh materialized view, make sure the application can work with old data.
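For illustration, a sketch of what the client's side might look like if they go the scheduled complete-refresh route (all object names here are hypothetical):
-- Rebuilt every hour; count queries then read precomputed rows.
CREATE MATERIALIZED VIEW employee_counts_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 1/24
AS SELECT department_id, COUNT(*) AS cnt
   FROM employees
   GROUP BY department_id;
Your count queries would then hit the stored result instead of scanning the base data, at the cost of results being up to an hour old.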
(Of course, the database timeout settings may still be inappropriate. There could be a bad profile, a bad sqlnet.ora parameter, a bad setting for resource manager, an ORA-600 bug, etc. You might want to find out the specific reason why your query timed out. Not that I think the client is trying to hide things from you; a terrible DBA would have just said, "tough luck, fix your stupid query". The fact that you're being offered a materialized view is a good sign that they are really trying to solve the problem.)

When to invalidate cache

How do I know when to invalidate the cache, if a table change is made from an outside source?
I have an API call that returns an employee table. The first time this call is made, I will cache the results so that on subsequent calls it will pull the data from the cache instead of the database. This makes sense; however, what happens if someone adds a new record to the employee table from outside of the API? How does the cache know that it is now invalid?
If the user made the change to the employee table through the API, I can capture that, but we have a separate desktop app that doesn't use the API, and that app can directly make changes to the employee table. Are there any accepted standards for handling this?
The only possible solution I can think of is to add a trigger to the employee table, and somehow use that to know when a table has changed. But we have over a thousand tables, and we are making an API call for each table, so I do not think that adding a thousand triggers to our database is an acceptable solution.
Yes, you could add a trigger as suggested. Or you could use a caching system that supports expiry time/sliding expiry. You would then be serving up stale data some of the time, but not always.
As the other answer suggests, your trigger idea is OK; however, as you've stated, that would be a lot of triggers.
If your cache is not local to the API (which I assume it isn't, if triggers would be able to reach it), could you not access it from your desktop application? You could invalidate your cache by removing the employee record from the cache with the desktop application when it makes a successful change to the employee table.
It boils down to this:
You have a cache (which is essentially a read store).
You have two options to update it:
- Either it times out and refetches (which is OK if you don't need up-to-the-minute, real-time data)
- Or it has to be told its data is no longer valid.
Two ways to solve this:
Push model
Pull model
Push model: use a CLR trigger that pushes the updates to an API. Whenever DML happens, the CLR trigger calls the API, which in turn can update the cache!
Pull model: use a database trigger on the SQL Server table to populate an intermediate audit table, and poll that table with a background task (sketched below).
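A minimal T-SQL sketch of the pull model (table, column, and trigger names are hypothetical):
-- Hypothetical audit table that the background task polls.
CREATE TABLE EmployeeAudit (
    AuditId    INT IDENTITY(1,1) PRIMARY KEY,
    EmployeeId INT NOT NULL,
    ChangedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- Record which employee rows changed; the poller invalidates those cache entries.
CREATE TRIGGER trg_Employee_Audit
ON Employee
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO EmployeeAudit (EmployeeId)
    SELECT EmployeeId FROM inserted
    UNION
    SELECT EmployeeId FROM deleted;
END;
GO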
Hope this helps!

Emulating materialized views in PostgreSQL with concurrent refreshes

I'm using PostgreSQL 9.2.4 and would like to emulate a materialized view. Are there any well-known methods for doing this, including concurrent refreshes?
The PostgreSQL wiki page on materialized views links to two trigger-based implementations.
The general idea is to put AFTER INSERT OR UPDATE OR DELETE ... FOR EACH ROW triggers on each involved table that do partial updates on the target table. Implementation is fairly specific to the nature of the view.
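As an illustration, here is a sketch of one such trigger for inserts only; all table and column names are hypothetical, and UPDATE and DELETE need analogous triggers:
-- Fold each new sales row into a precomputed per-product aggregate.
CREATE OR REPLACE FUNCTION sales_summary_insert() RETURNS trigger AS $$
BEGIN
  UPDATE sales_summary SET total = total + NEW.amount
   WHERE prod_id = NEW.prod_id;
  IF NOT FOUND THEN
    INSERT INTO sales_summary (prod_id, total) VALUES (NEW.prod_id, NEW.amount);
  END IF;
  RETURN NULL;  -- the return value of an AFTER ROW trigger is ignored
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sales_summary_insert_trg
AFTER INSERT ON sales
FOR EACH ROW EXECUTE PROCEDURE sales_summary_insert();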
For some more complex views you can't really do partial updates and need to do a concurrent view refresh instead. That usually involves creating a new table, populating it, committing, beginning a new transaction, dropping the old table, renaming the new one to the name of the old one, and committing again.
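A sketch of that swap-based refresh, assuming a hypothetical sales_summary target table:
-- First transaction: build and populate the replacement.
BEGIN;
CREATE TABLE sales_summary_new AS
  SELECT prod_id, sum(amount) AS total FROM sales GROUP BY prod_id;
COMMIT;

-- Second transaction: swap it in; readers are only blocked here, briefly.
BEGIN;
DROP TABLE sales_summary;
ALTER TABLE sales_summary_new RENAME TO sales_summary;
COMMIT;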
Starting from 9.4, Postgres supports concurrent refresh, as stated in the official documentation for REFRESH MATERIALIZED VIEW. However, there are two preconditions that need to be satisfied to do so:
You must create a unique index on the materialized view.
The unique index must cover all the rows of the materialized view. In other words, you cannot have a WHERE clause in your CREATE INDEX command.
The command to refresh the materialized view concurrently is the following:
REFRESH MATERIALIZED VIEW CONCURRENTLY mat_view_name;
Note that refreshing the materialized view concurrently is slower than a normal refresh. However, it makes sure that none of your queries on the materialized view are blocked during the concurrent refresh.
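Putting both preconditions together (view and column names are hypothetical):
-- A plain (non-partial) unique index satisfies both preconditions.
CREATE UNIQUE INDEX sales_mv_uq ON sales_mv (prod_id);

REFRESH MATERIALIZED VIEW CONCURRENTLY sales_mv;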

materialized view logging exclude deletes

I am using MVIEWs with Fast refresh to replicate some tables across a network. Everything works great; however, I ran into an issue when considering my Delete/Purge process.
The source tables for the MVIEWs that are feeding the log tables have a data retention of 7 days, i.e. I will be running a nightly purge process to delete data older than 7 days from the current date.
The target MVIEWs however are on an ODS and have a data retention policy of 30 days. Also, these MVIEWs are NOT currently populating another schema or set of tables.
Problem is, when I delete from the source tables, those delete statements will propagate through to the target MVIEWs, and then I no longer have 30 days' worth of data, only 7.
Is there a way to exclude logging DELETE for the MVIEW log tables? I noticed in the MLOG$_Table_Name there is a column 'DMLTYPE$$'. Could I somehow delete from the Log table all records where DMLTYPE$$ = 'D'?
Thanks everyone, and yes, I did try researching this online first.
Regards,
Steve
I suppose that you could manually delete data from the materialized view logs before running the refresh. That would probably work. But it would not be a solution that I'd be really comfortable with. It would be a very bespoke solution that would probably not be officially supported. And if there might ever be another materialized view that depends on the materialized view log, you'd have to ensure that you're only deleting those rows that relate to your materialized view's subscription. Plus, the materialized view on the destination would need to be updatable in order for you to be able to manually remove the rows older than 30 days via a separate process.
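For completeness, the manual purge from the question would look something like this (the log table name is the system-generated MLOG$_ name of your hypothetical source table, and again, this is unsupported territory):
-- Run on the master site, before the fast refresh consumes the log.
DELETE FROM mlog$_my_source_table
 WHERE dmltype$$ = 'D';
COMMIT;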
If these are the business requirements, something like Oracle Streams (or GoldenGate) would be a much more appropriate architectural solution. Those products are designed to give you more flexibility about which logical change records (LCRs) you apply. In Streams, for example, it is easy enough to create a custom apply handler that discards delete LCRs. And since you're applying LCRs to a table on the destination rather than a materialized view, your 30 day purge process is much easier to manage. This would be a relatively common Streams setup rather than a very unique materialized view setup.

Life cycle of an Oracle Materialized view

I am looking for the life cycle of an Oracle materialized view. For example, the statement:
Create materialized view foo
Refresh On Commit
...
Will this view be updated every time there is a commit to my database, or just when a commit touches one of the tables referenced in the view statement? Also, beyond this, at what point does Oracle destroy the old cache and replace it with the new one? Specifically, what is the window of "staleness" for a materialized view? That is, is it dependent on how long it takes to create the materialized view?
The ON COMMIT clause will modify the commit process of all transactions that issue DML on a base table:
Specify ON COMMIT to indicate that a fast refresh is to occur whenever the database commits a transaction that operates on a master table of the materialized view. This clause may increase the time taken to complete the commit, because the database performs the refresh operation as part of the commit process.
The commit will be dependent upon the success of the refresh of the materialized view (which means that a commit can fail because a dependent MV can't be refreshed).
The refresh takes place in the same transaction as the one that issues the commit. This means that as soon as the commit is complete, the changes are visible to all sessions (data is thus never stale).
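For example, with the foo view above (master table and column names are hypothetical):
INSERT INTO some_master_table (id, amount) VALUES (42, 100);
COMMIT;  -- the fast refresh of foo runs as part of this commit

-- Any other session querying foo now already sees the change.
SELECT * FROM foo;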
Some of the things you have to be aware of:
The use of on-commit MVs has a performance cost: materialized view logs (which add DML "triggers" to the base table) increase the work on DML, and obviously the commit will perform more work than usual. Benchmark your workload to make sure the extra work won't be a burden.
In an aggregate on-commit MV, concurrent transactions can update the same MV row, which can lead to some contention during the commit, on top of the extra work.
Some tools don't expect a commit to fail; this can lead to some UI problems (usually in old client-server apps).