Can anyone tell me whether there is any time-based trigger policy available in Apache Ignite?
I have an object with an expiry date. When that date (timestamp) expires, I want to update the value and overwrite it in the cache. Is this possible in Apache Ignite?
Thanks in advance
You can configure a time-based expiration policy in Apache Ignite with eager TTL: Expiry Policies. This way objects will be eagerly expired from the cache after a certain time.
Then you can register a javax.cache.event.CacheEntryExpiredListener, which will be triggered after every expiration, and update the cache from that listener. However, it looks like there will be a small window between the moment an entry is expired from the cache and the moment you put an updated value back into it.
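A minimal sketch of that combination, assuming Integer keys, String values, a cache named "myCache", a 30-second TTL, and placeholder refresh logic (none of these come from the question):

import java.io.Serializable;
import java.util.concurrent.TimeUnit;
import javax.cache.configuration.FactoryBuilder;
import javax.cache.configuration.MutableCacheEntryListenerConfiguration;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryExpiredListener;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryExample {

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");
        cfg.setEagerTtl(true); // expire entries proactively, not only on access
        cfg.setExpiryPolicyFactory(
            CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 30)));

        IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);

        // The listener fires after the entry is already gone, hence the small
        // window mentioned above before the refreshed value is put back.
        MutableCacheEntryListenerConfiguration<Integer, String> lsnrCfg =
            new MutableCacheEntryListenerConfiguration<>(
                FactoryBuilder.factoryOf(RefreshOnExpire.class), null, false, false);
        cache.registerCacheEntryListener(lsnrCfg);

        cache.put(1, "initial value");
    }

    public static class RefreshOnExpire
            implements CacheEntryExpiredListener<Integer, String>, Serializable {
        @Override
        public void onExpired(
                Iterable<CacheEntryEvent<? extends Integer, ? extends String>> events) {
            IgniteCache<Integer, String> cache = Ignition.localIgnite().cache("myCache");
            for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
                cache.put(e.getKey(), "refreshed value"); // hypothetical refresh logic
        }
    }
}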
If the above window is not acceptable to you, then you can simply query all entries from the cache periodically and update every entry that is older than a certain expiration time. In this case you would have to ensure that all entries have a timestamp field, which will be indexed and used in SQL queries. Something like this:
SELECT * from SOME_TYPE where timestamp > 2;
More on SQL queries here: Distributed Queries, Local Queries.
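If you go the periodic-scan route, a rough sketch might look like the following. The value class SomeType, its indexed timestamp field, and the refresh logic are assumptions, and the condition is written as timestamp < cutoff so it picks up entries older than the cutoff:

import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class ExpirationScan {

    public static class SomeType {
        @QuerySqlField(index = true)
        public long timestamp;   // last-update time, indexed for the SQL scan

        public String payload;   // whatever else the object carries
    }

    // Re-put every entry whose timestamp is older than the cutoff.
    static void refreshExpired(IgniteCache<Long, SomeType> cache, long cutoff) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT _key FROM SomeType WHERE timestamp < ?").setArgs(cutoff);

        for (List<?> row : cache.query(qry).getAll()) {
            Long key = (Long) row.get(0);
            SomeType updated = refresh(cache.get(key)); // hypothetical refresh logic
            cache.put(key, updated);                    // overwrite the stale entry
        }
    }

    static SomeType refresh(SomeType old) {
        old.timestamp = System.currentTimeMillis();
        return old;
    }
}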
Maybe like this:
cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.SECONDS, 123))).put(k, v);
The expiration will be applied only to this entry.
For a trigger, try continuous queries: apacheignite.readme.io/docs/continuous-queries
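As a brief illustration of the continuous-query suggestion (again assuming an Integer/String cache), the local listener below fires on every create or update and can act as the trigger-style hook; a remote filter could be added to narrow the events:

import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;

public class CacheTrigger {

    static QueryCursor<?> listen(IgniteCache<Integer, String> cache) {
        ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();

        // Called on the node that started the query whenever an entry changes.
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : events)
                System.out.println("Changed: " + e.getKey() + " -> " + e.getValue());
        });

        // Keep the returned cursor open for as long as notifications are needed;
        // closing it cancels the continuous query.
        return cache.query(qry);
    }
}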
Related
I have a need to audit changes where triggers are not performing well enough to use. In the audit I need to know exactly who made the change, based on a column named LastModifiedBy (gathered at login and used in inserts and updates). We use a single SQL account to access the database, so I can't use that to tie a change to a user.
Scenario: we are now researching the SQL transaction log to determine what has changed. The table has a LastUpdatedBy column that we used with the trigger solution. With the previous solution I had before-and-after transaction data, so I could tell whether the user making the change was the same user or a new one.
Problem: while looking at tools like DBForge Transaction Log and ApexSQL Audit, I can't seem to find a solution that works. I can see the UPDATE command, but I can't tell whether all the fields actually changed (just because SQL says to update a field does not mean its value actually changed). ApexSQL Audit does have a before-and-after capability, but if the LastUpdatedBy field does not change then I don't know what the original value was.
Trigger problem: large data updates and inserts are crushing performance because of the triggers. I am gathering before-and-after data in the triggers so I can tell exactly what changed, but that volume of data turns a 2-second update of 1,000 rows into one that takes longer than 3 minutes.
Thanks in advance for any help. Here is the scenario that I am trying to recreate in Mulesoft.
1,500,000 records in a table. Here is the current process that we use.
Start a transaction.
Delete all records from the table.
Reload the table from a flat file.
Commit the transaction.
In the end we need the table in a good state, hence the transaction: if there is any failure, the data in the table is rolled back to the initial valid state.
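For reference, a rough JDBC equivalent of the process described above might look like this; the table name, columns, and the way rows are read from the flat file are placeholders:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.util.List;
import javax.sql.DataSource;

public class FullReload {

    void reload(DataSource ds, List<String[]> rows) throws Exception {
        try (Connection con = ds.getConnection()) {
            con.setAutoCommit(false);                      // start the transaction
            try (Statement del = con.createStatement();
                 PreparedStatement ins = con.prepareStatement(
                     "INSERT INTO records (id, payload) VALUES (?, ?)")) {

                del.executeUpdate("DELETE FROM records");  // wipe the table

                for (String[] row : rows) {                // reload from the flat file
                    ins.setString(1, row[0]);
                    ins.setString(2, row[1]);
                    ins.addBatch();
                }
                ins.executeBatch();
                con.commit();                              // all-or-nothing
            } catch (Exception e) {
                con.rollback();                            // back to the valid state
                throw e;
            }
        }
    }
}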
I was able to get the speed we needed (under 10 minutes) by using the Batch element, but it appears that transactions are not supported around the whole batch flow.
Any ideas how I could get this to work in Mulesoft?
Thanks again.
A slightly different workflow, but how about:
Load a temp table from the flat file.
If successful, drop the original table.
Rename the temp table to the original table name.
You can keep your Mule batch processing workflow to load the temp table and forget about rolling back.
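A minimal sketch of that swap step, e.g. run from a Java component once the batch job reports success. The table names and the rename syntax are assumptions (ALTER TABLE ... RENAME works on Oracle/MySQL, while SQL Server would use sp_rename):

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class TableSwap {

    void promoteTempTable(DataSource ds) throws Exception {
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement()) {
            st.executeUpdate("DROP TABLE records");                        // drop the original
            st.executeUpdate("ALTER TABLE records_tmp RENAME TO records"); // promote the temp copy
        }
    }
}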
For this you might try the following:
Use XA transactions (since more than one connector will be used, regardless of whether or not they share the same transport).
Enlist the resource used in the custom Java code in the transaction.
This can also be applied within the same transport (e.g. JDBC in the Mule configuration and also in the Java component), so it's not restricted to the case demonstrated in the PoC, which is only given as a reference.
Please refer to this article https://dzone.com/articles/passing-java-arrays-in-oracle-stored-procedure-fro
Poll the records from the temp table. You can construct an array with any number of records; with a size of 100K, the full load will only involve 15 round trips in total.
To identify error records you can insert them into an error table, but that has to be implemented in the database procedure.
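A hedged sketch of the array-binding idea from the linked article: send a whole batch to an Oracle stored procedure in one call instead of row-by-row inserts. The SQL collection type LOAD_IDS_T and the procedure LOAD_RECORDS are assumptions that would have to exist in the target schema:

import java.sql.Array;
import java.sql.CallableStatement;
import java.sql.Connection;
import oracle.jdbc.OracleConnection;

public class ArrayLoader {

    void load(Connection con, Object[] batch) throws Exception {
        OracleConnection oraCon = con.unwrap(OracleConnection.class);
        Array ids = oraCon.createOracleArray("LOAD_IDS_T", batch); // bind the whole batch as one array

        try (CallableStatement cs = con.prepareCall("{call LOAD_RECORDS(?)}")) {
            cs.setArray(1, ids);
            cs.execute(); // with 100K-element arrays, a 1.5M-row load takes 15 such calls
        }
    }
}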
Basically I have a table that grows very fast as it registers all user impressions. However, most of the data is useless; I only need the latest entry made for each user. (The table is used to authenticate users.)
I'm looking to delete the old data, so the table should end up with a stable number of rows, around the total number of registered users.
I could use a cron job, or there's the option of simply adding a line at the end of the authentication script that deletes old rows; that would run on every page load.
DELETE FROM `my_table` WHERE `Date` < NOW() - SOME INTERVAL
Is this efficient? Should I use a cron job, or something else?
Executing this from the page would add to the time of the user's login.
That is a bad approach. Better to use a cron job or some other job-scheduling tool like Jenkins.
I would say you could CREATE a temp table to hold your latest records,
and then DROP the old table altogether. Dropping is faster than deleting :)
Rename your temp table to the old table's name.
This logic could live in your cron job if you prefer.
I have a database using SQL 2005 merge replication, and there has been data inserted into the subscriber that never went over to the publisher. I believe there was a conflict that happened longer than the 14-day retention period ago, and I do not see it any more. Can I manually add the rows into the publisher? Any ideas, or directions to a good link, are appreciated. Thank you.
If the conflict occurred before the current retention period, I don't think there is any magic that will get it back. Can you drop the subscription and re-create it (synchronizing the deltas manually in the meantime)? Probably the safest action.
Before I answer this, please note that the following directions can be very dangerous and must be done with the utmost care. This solution works for me because the tables in question are only written to at one (1) subscriber and nowhere else. Basically what I did was to:
Pause replication (I actually disabled the replication job for the subscriber I was working on and enabled it when done).
Set IDENTITY_INSERT for the table to ON (an auto identity is used on the table).
Alter the table to NOCHECK CONSTRAINT the repl_identity_range_(some Hex Value here) constraint.
Disable the MSmerge_ins_(some Hex Value here) trigger for the table. (MAKE SURE TO ENABLE THIS WHEN COMPLETE!)
Insert the rows.
Set IDENTITY_INSERT back to OFF.
Enable the MSmerge_ins_(some Hex Value here) trigger.
Alter the table to CHECK CONSTRAINT the repl_identity_range_(some Hex Value here) constraint.
You can find the name of the repl_identity_range constraint by running sp_help. I recommend that you use a tool such as Red Gate's data compare to validate once you are complete, just to make sure. Depending on your situation you may have to manually insert the data at all the subscribers as well. FYI: I had to do this on a production database without interrupting the end users. Please use caution.
I think someone with shared access to my SQL Server '05 DB is deleting records from a table in a DB for their own reasons.
Is there any audit table I can check to see manual delete queries which may have been run on the DB in the last X number of days?
Thanks for your help.
Ed
You may want to consider using a trigger temporarily.
Here's an example.
I'd add an ON DELETE trigger to the table in question. That would allow you to keep an exact log of deleted records (i.e., if in your trigger you insert the deleted rows into another table, etc.).
SELECT deqs.last_execution_time AS [Time], dest.TEXT AS [Query]
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.last_execution_time DESC
SQL Server Profiler is probably the easiest way to do this. You can set it to dump all executed queries to a table in the database, or to a file, which might be more suitable in your case. You can also set a filter to capture just the queries you're interested in; otherwise the log files become huge.
Unless you've set things up beforehand (via triggers, running Profiler traces, or the like) no, there is no simple native way to "pull out" commands that have been run against a SQL Server database.
@David's idea of querying the procedure cache is one possibility, but it would only work if the execution plan(s) are still in memory.
There are third-party transaction log readers available. They could be used to read the contents of the transaction log, but again that only helps if the data/commands are still in there, and after "X days" that seems unlikely.
Another workaround would depend on backups.
Restore a complete backup from before your problem time, and compare and contrast it with the current version. This would show whether data has been deleted, but not how.
If you are using the full recovery model and you have transaction log backups, you can perform various types of incremental restores and actually observe the deletions happening (if they are), but this would probably require a lot of point-in-time recoveries and would be very time-intensive.