Axon, event store and SQL insert

We use Axon 2 for our CQRS-ES setup.
For some (very bad) reasons, we are forced to update the content of the event store table directly in the database, without going through Axon. We then relaunch the Axon denormalizer to replay the events and integrate the changes into the views.
My issue is that when I do this, the newly inserted events are not picked up by the aggregate (as if there were some sort of cache).
How can I ask Axon to refresh the event store's cache?
I know that inserting events this way is absolutely not good practice, but we need a workaround.

There is such a cache. To avoid having to replay all events for an aggregate every time an instance is loaded, Axon stores a snapshot of the aggregate state every so many events.
I think your problem will go away when you delete the snapshots. They are probably in a table called snapshot_event_entry.
https://legacy-docs.axoniq.io/reference-guide/v/2.2/single.html#d5e1274
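As a minimal sketch of that cleanup (the table and column names depend on your JPA mapping/naming strategy, and the identifier and type values here are placeholders, not taken from the question):

-- Remove the cached snapshots so Axon rebuilds the aggregate from the
-- (manually edited) event stream on the next load.
DELETE FROM snapshot_event_entry
WHERE aggregate_identifier = 'your-aggregate-id'   -- placeholder id
  AND type = 'YourAggregateType';                   -- placeholder aggregate type

-- Or, more bluntly, clear all snapshots and let Axon re-create them:
-- DELETE FROM snapshot_event_entry;

Deleting snapshots is harmless in the sense that Axon will simply replay the full event stream for that aggregate and take a new snapshot later.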

Alternative to indexed view on text column

I have a database table I don't own or control being loaded from another system - and all the fields are Text - so obviously it's useless for any queries performance-wise.
I thought I would solve the problem by creating an indexed view that just converts every field to int, date or varchar... but apparently you can't create an indexed view on a text column.
I know I can do a create table as select... but that's a one-off, and it won't automatically update if someone does another load into the underlying table.
Is there any way I can make a live table without text columns from one with text columns?
You don't own it or control it, so I guess a trigger is out of the question. I might give Change Tracking a try. You can use it to either sync changes as they come or trigger a reload of your version of the table. If you can't tolerate any delays in the sync, then this might not work for you.
If the updates come in large batches or via a complete reload only once in a while, then triggering a reload might be the way to go. Verify that there are no changes for a minute or so to ensure the data is stable before reloading. A job scheduled to run every few minutes could handle the reload.
For a faster sync, a job running a script with a loop and WAITFOR (seconds or minutes) could process the changes made since the last run or loop iteration.
There should be very little overhead for detecting the changes.
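If you go the Change Tracking route, a minimal sketch looks something like this (SQL Server syntax; the database, table and column names are placeholders, and the source table must have a primary key for change tracking to work):

ALTER DATABASE MyDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.SourceTable ENABLE CHANGE_TRACKING;

-- In the scheduled job: fetch the rows changed since the last sync...
DECLARE @last_sync_version bigint = (SELECT last_version FROM dbo.SyncState);
DECLARE @current_version bigint = CHANGE_TRACKING_CURRENT_VERSION();

SELECT ct.Id, ct.SYS_CHANGE_OPERATION            -- Id = the table's key column
FROM CHANGETABLE(CHANGES dbo.SourceTable, @last_sync_version) AS ct;

-- ...apply them to your typed copy of the table, then remember where you got to.
UPDATE dbo.SyncState SET last_version = @current_version;

Whether you apply the individual changes or simply treat a non-empty result as the signal to reload your copy of the table depends on how fresh the data needs to be and how large it is.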

Fields calculated after the webhook

I use the Podio API to create an item. In the form I have a few calculated fields. When I retrieve the item immediately after its creation, using the API, the fields are not calculated yet. The calculation is asynchronous, so that makes sense.
When I use a create hook and fetch the item based on the hook, the calculated fields are there.
Does anybody know if I can depend on this, i.e. is the create hook fired after the fields are calculated?
Yes, the JavaScript calculations are asynchronous.
Also related: MongoDB (which Podio uses on the back end) is "eventually consistent".
I faced this same problem and ended up building a queueing system for my incoming webhooks, where I waited 30 seconds before actioning any record retrieval from Podio, to get updated values for our local reporting database to cache.
Also related, more to the MongoDB asynchronicity: if you are using Globiflow to trigger updates in related tables using the JavaScript calculated field from the parent table, I found there were occasionally incorrect values.
I solved it by adding a 30-second delay in the Globiflow script before updating the related app/table with calculated fields from the parent app/table. This gave JavaScript enough time to calculate and MongoDB enough time to save the calculated value.
https://www.globiflow.com/help/wait-delay.php

When to invalidate cache - .NET Core API

How do I know when to invalidate the cache, if a table change is made from an outside source?
I have an API call that returns an employee table. The first time this call is made, I cache the results so that subsequent calls pull the data from the cache instead of the database. This makes sense; however, what happens if someone adds a new record to the employee table from outside the API? How does the cache know that it is now invalid?
If the user makes the change to the employee table through the API, I can capture that, but we have a separate desktop app that doesn't use the API, and that app can make changes to the employee table directly. Are there any accepted standards for handling this?
The only possible solution I can think of is to add a trigger to the employee table and somehow use that to know when the table has changed. But we have over a thousand tables, and we make an API call for each table, so I do not think that adding a thousand triggers to our database is an acceptable solution.
Yes, you could add a trigger as suggested. Or you could use a caching system that supports an expiry time/sliding expiration. You would then be serving stale data some of the time, but not always.
As the other answer suggests, your trigger idea is OK; however, as you've stated, that would be a lot of triggers.
If your cache is not local to the API (which I assume it isn't, if triggers would be able to reach it), could you not access it from your desktop application? You could invalidate the cache by removing the employee record from it whenever the desktop application makes a successful change to the employee table.
It boils down to this:
You have a cache (which is essentially a read store).
You have two options to update it:
- Either it times out and re-fetches (which is fine if you don't need up-to-the-minute, real-time data)
- Or it has to be told its data is no longer valid.
There are two ways to solve this:
Push model
Pull model
Push model: use a CLR trigger that pushes the updates to an API. Whenever DML happens, the CLR trigger calls the API, which in turn can update the cache.
Pull model: use a database trigger on the SQL Server table to populate an intermediate audit table, and poll that table from a background task (a sketch follows below).
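A minimal sketch of the audit-table (pull) variant, with illustrative object names not taken from the original post:

CREATE TABLE dbo.CacheInvalidation (
    Id        int IDENTITY(1,1) PRIMARY KEY,
    TableName sysname   NOT NULL,
    ChangedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME()
);
GO

-- One small trigger per cached table, writing a row into the audit table.
CREATE TRIGGER trg_Employee_CacheInvalidation
ON dbo.Employee
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.CacheInvalidation (TableName) VALUES ('Employee');
END;
GO

The background task in the API then polls something like SELECT MAX(Id) FROM dbo.CacheInvalidation WHERE TableName = 'Employee' every few seconds and evicts the cached employee list whenever that value has advanced.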
Hope this helps!

Order of execution in Siddhi on WSO2 CEP

I'm new on Stack Overflow, even though I have solved a lot of problems with your hints. Now I have a problem for which I have not found a solution.
I'm developing a push service using WSO2 CEP and GCM. CEP handles the subscribe/unsubscribe requests and the push events. The subscription keys are stored on my own server in MySQL, together with other info.
My problem comes with the subscribe step. This step has to handle both new subscriptions (insert) and existing subscriptions (update). To make the operation easier, I decided to normalise the two operations by deleting and then inserting the record (even if the record might already be in the DB).
To handle this, I developed an execution plan using Siddhi. The plan defines two streams: an event stream and a table stream linked to a MySQL table.
In the execution plan, a delete is done first using the key taken from the event, and afterwards a new record is inserted using the info contained in the event.
But it seems that the order of the operations (delete and insert) varies, so sometimes I find two or more records with the same GCM key on my server. I applied a workaround by adding a unique constraint on the table, but I'd like to know if there is a way to enforce a deterministic order for the Siddhi operations.
Regards
Michele de Rosa
Since you are using the same stream to update and insert into the table, there is no guarantee that the delete query will execute first. All queries that receive events from the same stream execute in parallel, and we do not have any control over the order. The only way to enforce an order is to introduce a query pipeline or to use a pattern query to delay events.
However, for your requirement you can use the newly added insert overwrite functionality for event tables. This automatically handles updating the record if it exists and inserting it otherwise.
Hope this helps!!
Thanks
Tishan

Fire SQL trigger only when a particular user updates the row

There is a trigger in Postgres that gets called whenever a particular table is updated.
It is used to send updates to another API.
Is there a way to control the firing of this trigger?
Sometimes when I update the table I don't want the trigger to be fired. How do I do this?
Is there SQL syntax to silence a trigger?
If not:
Can I fire the trigger when PG user X updates a row, but not when PG user Y updates the table?
In recent Postgres versions, there is a when clause that you can use to conditionally fire the trigger. You could use it like:
... when (old.* is distinct from new.*) ...
I'm not 100% sure this one will work (can't test at the moment):
... when (current_user = 'foo') ...
(If not, try placing it in an if block in your plpgsql.)
http://www.postgresql.org/docs/current/static/sql-createtrigger.html
(There also is the [before|after] update of [col_name] syntax, but I tend to find it less useful because it'll fire even if the column's value remains the same.)
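Putting the pieces together, a sketch might look like this (PostgreSQL; the trigger, function, and table names are made up for illustration, and, as noted above, the current_user check in the WHEN clause is untested):

CREATE OR REPLACE FUNCTION notify_other_api() RETURNS trigger AS $$
BEGIN
    -- ... send the update to the other API here ...
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER employee_update_notify
AFTER UPDATE ON employee
FOR EACH ROW
WHEN (OLD.* IS DISTINCT FROM NEW.* AND current_user <> 'sync_user')
EXECUTE PROCEDURE notify_other_api();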
Adding this extra note, seeing that #CraigRinger's answer highlights what you're up to...
Trying to set up master-master replication between Salesforce and Postgres using conditional triggers is, I think, a pipe dream. Just forget it... There's going to be a lot more to it than that: you'll need to lock data as appropriate on both ends (which won't necessarily be feasible in a reasonable way), manage the resulting deadlocks (which might not automatically get detected), and deal with conflicting data.
Your odds of successfully pulling this off with a tiny team are about zero -- especially if your Postgres skills are at the level where investing time in reading the manual would answer your own questions. You can safely bet that someone much more competent at Salesforce or at some major SQL shop (e.g. the one Craig works for) has considered the same, and either failed miserably or ruled it out.
Moreover, I'd stress that implementing efficient, synchronous, multi-master replication is not a solved problem. You read that right: not solved. Just a few years ago, doing it at all wasn't solved well enough to make it into the Postgres core. So you have no prior art that works well to base your work on and iterate upon.
This seems to be the same problem as this post a few minutes ago, approaching it from a different direction.
If so, while you can indeed do as Denis suggests, don't attempt to reinvent this wheel. Use an established tool like Slony-I or Bucardo if you are attempting two-way (multi-master) replication. You also need to understand the major limitations involved in multi-master when dealing with conflicting updates.
In general, there are a few ways to control trigger firing:
- Let the trigger fire, then put logic in the PL/pgSQL trigger body to cause it to take no action if a certain condition is met. This is often the only option when the rules are complex.
- As Denis points out, use a trigger WHEN clause to conditionally fire the trigger.
- Use session_replication_role to control the firing of all triggers (see the sketch after this list).
- Directly enable/disable triggers.
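For the last two options, a short sketch (again PostgreSQL, with illustrative object names):

-- Per session: skip all ordinary ("origin") triggers for the following
-- statements. Changing this setting typically requires superuser privileges.
SET session_replication_role = replica;
-- ... do the updates that should not fire triggers ...
SET session_replication_role = DEFAULT;

-- Or disable a single trigger around a maintenance operation:
ALTER TABLE employee DISABLE TRIGGER employee_update_notify;
-- ... do the work ...
ALTER TABLE employee ENABLE TRIGGER employee_update_notify;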
In particular, if your application shares a single SQL-level user ID for all database access and does its own user management above the SQL level, and you want to control trigger firing on a per-user basis, the only way to do it will be with in-trigger logic. You might find this prior answer about getting user IDs within triggers useful:
Passing user id to PostgreSQL triggers
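As a hedged sketch of that in-trigger, per-user approach: the application stores its own user name in a custom setting at the start of each session or transaction (via set_config), and the trigger body checks it. The setting name and values below are hypothetical.

-- Application side, once per session or transaction:
-- SELECT set_config('myapp.app_user', 'desktop_sync', false);

CREATE OR REPLACE FUNCTION notify_unless_sync() RETURNS trigger AS $$
BEGIN
    -- current_setting(..., true) returns NULL instead of raising an error
    -- when the setting is not defined (PostgreSQL 9.6+).
    IF coalesce(current_setting('myapp.app_user', true), '') = 'desktop_sync' THEN
        RETURN NEW;   -- skip the notification for this caller
    END IF;
    -- ... push the update to the other API here ...
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;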