Fields calculated after the webhook - Podio

I use the Podio API to create an item. In the form I have a few calculation fields. When I retrieve the item immediately after its creation, using the API, the fields are not calculated yet. The calculation is asynchronous, so that makes sense.
When I use a create hook and fetch the item based on the hook, the calculated fields are there.
Does anybody know if I can depend on this, meaning: is the create hook fired after the fields are calculated?

Yes, the JavaScript calculations are asynchronous.
Also related: MongoDB (which Podio uses on the back end) is "eventually consistent".
I faced this same problem and ended up building a queueing system for my incoming webhooks, where I waited 30 seconds before actioning any record retrieval from Podio (see the sketch at the end of this answer), so that our local reporting database cached the updated values.
Also, more related to the MongoDB asynchronicity: if you are using Globiflow to trigger updates in related tables using the JavaScript calculated field from the parent table, I found there were occasionally incorrect values.
I solved that by adding a 30-second delay in the Globiflow script before updating the related app/table with calculated fields from the parent app/table. This gave enough time for the JavaScript to calculate and MongoDB to save the calculated value:
https://www.globiflow.com/help/wait-delay.php
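For what it's worth, a minimal sketch of that kind of delayed webhook queue in C#. The 30-second delay and the fetchAndCache callback (where you would call the Podio item API) reflect my setup, not anything Podio-specific:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Sketch: buffer incoming webhook item IDs and only fetch them from Podio
// once they are at least 30 seconds old, giving the asynchronous
// calculations time to complete.
public class DelayedWebhookQueue
{
    private static readonly TimeSpan Delay = TimeSpan.FromSeconds(30);
    private readonly ConcurrentQueue<(long ItemId, DateTime ReceivedUtc)> _queue = new();

    // Called from the webhook endpoint.
    public void Enqueue(long itemId) =>
        _queue.Enqueue((itemId, DateTime.UtcNow));

    // Background loop: process entries only after the delay has elapsed.
    public async Task RunAsync(Func<long, Task> fetchAndCache, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            if (_queue.TryPeek(out var entry) &&
                DateTime.UtcNow - entry.ReceivedUtc >= Delay)
            {
                _queue.TryDequeue(out entry);
                await fetchAndCache(entry.ItemId); // e.g. call the Podio item API here
            }
            else
            {
                await Task.Delay(TimeSpan.FromSeconds(1), ct);
            }
        }
    }
}
```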


Axon, event store and SQL insert

We use Axon 2 for our CQRS-ES setup.
For some (very bad) reasons, we are forced to update the content of the event store table directly in the database, without using Axon. Then we relaunch the Axon denormalizer to replay the events and integrate the changes into the views.
My issue is that when I do this, the newly inserted events are not considered by the aggregate (as if there were some sort of cache).
How can I ask Axon to refresh the cache of the event store?
I know inserting events that way is absolutely not good practice, but we need a workaround.
There is such a cache. To prevent having to replay all events for an aggregate every time an aggregate instance is loaded, Axon stores a snapshot of the aggregate state every so many events.
I think your problem will go away when you delete the snapshots for the affected aggregates. They are probably in a table called snapshot_event_entry.
https://legacy-docs.axoniq.io/reference-guide/v/2.2/single.html#d5e1274

When to invalidate cache - .NET Core API

How do I know when to invalidate the cache if a table change is made from an outside source?
I have an API call that returns an employee table. The first time this call is made, I cache the results so that subsequent calls pull the data from the cache instead of the database. This makes sense; however, what happens if someone adds a new record to the employee table from outside of the API? How does the cache know that it is now invalid?
If the user makes the change to the employee table through the API, I can capture that, but we have a separate desktop app that doesn't use the API, and that app can make changes to the employee table directly. Are there any accepted standards for handling this?
The only solution I can think of is to add a trigger to the employee table and somehow use that to know when the table has changed. But we have over a thousand tables, and we make an API call for each table, so I do not think that adding a thousand triggers to our database is an acceptable solution.
Yes, you could add a trigger as suggested. Or you could use a caching system that supports expiry times/sliding expiration, as in the sketch below. You would then serve stale data some of the time, but not always.
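For instance, with ASP.NET Core's built-in IMemoryCache you can bound how stale the data can get. The Employee types, the repository abstraction, and the five-minute window below are just illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public record Employee(int Id, string Name);

// Hypothetical data-access abstraction.
public interface IEmployeeRepository
{
    Task<IReadOnlyList<Employee>> LoadAllAsync();
}

public class EmployeeCache
{
    private readonly IMemoryCache _cache;
    private readonly IEmployeeRepository _repository;

    public EmployeeCache(IMemoryCache cache, IEmployeeRepository repository)
    {
        _cache = cache;
        _repository = repository;
    }

    public Task<IReadOnlyList<Employee>> GetEmployeesAsync() =>
        _cache.GetOrCreateAsync("employees", entry =>
        {
            // Reads may be up to five minutes stale, but never older.
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
            return _repository.LoadAllAsync();
        })!;
}
```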
As the other answer suggests, your trigger idea is OK; however, as you've stated, that would be a lot of triggers.
If your cache is not local to the API (which I assume it isn't, if triggers would be able to reach it), could you not access it from your desktop application? You could invalidate the cache by removing the employee record from it whenever the desktop application makes a successful change to the employee table.
It boils down to this:
You have a cache (which is essentially a read store).
You have two options to update it:
- Either it times out and refetches (which is OK if you don't need up-to-the-minute, real-time data)
- Or it has to be told that its data is no longer valid.
There are two ways to solve the second case:
Pull model: a database trigger on the SQL Server table populates an intermediate audit table, and a background task polls it (sketched below).
Push model: a CLR trigger pushes the updates to an API. Whenever DML happens, the CLR trigger calls the API, which in turn can update the cache!
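Here is a minimal sketch of the pull model as an ASP.NET Core hosted service. The ChangeLog audit table, its columns, and the five-second polling interval are assumptions for illustration:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Hosting;

// Polls a hypothetical ChangeLog audit table (populated by triggers)
// and evicts the cache entry for any table that has changed.
public class CacheInvalidationWorker : BackgroundService
{
    private readonly IMemoryCache _cache;
    private readonly string _connectionString;
    private long _lastSeenId;

    public CacheInvalidationWorker(IMemoryCache cache, string connectionString)
    {
        _cache = cache;
        _connectionString = connectionString;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            await using var conn = new SqlConnection(_connectionString);
            await conn.OpenAsync(stoppingToken);

            using var cmd = new SqlCommand(
                "SELECT Id, TableName FROM ChangeLog WHERE Id > @lastId", conn);
            cmd.Parameters.AddWithValue("@lastId", _lastSeenId);

            await using var reader = await cmd.ExecuteReaderAsync(stoppingToken);
            while (await reader.ReadAsync(stoppingToken))
            {
                _lastSeenId = reader.GetInt64(0);
                _cache.Remove(reader.GetString(1)); // one cache key per table
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}
```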
Hope this helps!

Trigger for a lot of data

I have a table that records a lot of data at any given moment, for example, 100 rows per second.
After each row is completed, certain operations must be performed; that is, some of these rows should be copied to another table.
Now a few questions:
Can I use triggers to do this, given the high number of incoming rows?
If multiple conditions are checked before copying to the other table, can the triggers stay responsive?
Additional explanation: the records added to this table come from a fingerprint recorder.
First of all, check these points:
1. Look at how you define your trigger: it can be fired on insert, update, etc., so it does not have to be executed for all operations (not required for all inserts).
2. Business rules buried in a trigger are easy to forget over time as you change other rules of your application; you need to pay attention to the trigger on every change (to prevent introducing bugs).
...
I strongly suggest you do not define a trigger unless you have no other choice.
If you have an application, you can do this work there and keep the business logic in it (for instance, run a thread in your application to check the conditions and do the copying; see the sketch below).
You can also have a Windows service do that for you.
If you only have database access, you can define a database job to do it for you (not recommended).
Finally, to avoid blocking if you decide to use multiple threads (the second thread, according to your question, just reads data from your original table and inserts it into another), you can turn on READ_COMMITTED_SNAPSHOT (is_read_committed_snapshot_on) in your database.
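As a rough sketch of the application-side approach; the FingerprintLog/ArchiveLog tables, their columns, and the copy condition are all made-up names for illustration:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

// Sketch: a worker that copies qualifying rows instead of a trigger.
public class RowCopier
{
    private readonly string _connectionString;
    private long _lastCopiedId; // track progress by an ever-increasing key

    public RowCopier(string connectionString) => _connectionString = connectionString;

    public async Task RunAsync(CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            await using var conn = new SqlConnection(_connectionString);
            await conn.OpenAsync(ct);

            // One set-based statement per pass; cheaper than per-row trigger work.
            // Simplified: wrap both statements in a transaction for exactly-once behavior.
            using var cmd = new SqlCommand(@"
                INSERT INTO ArchiveLog (Id, PersonId, RecordedAt)
                SELECT Id, PersonId, RecordedAt
                FROM FingerprintLog
                WHERE Id > @lastId AND EventType = 'checkout';
                SELECT ISNULL(MAX(Id), @lastId) FROM FingerprintLog;", conn);
            cmd.Parameters.AddWithValue("@lastId", _lastCopiedId);

            var maxId = await cmd.ExecuteScalarAsync(ct);
            _lastCopiedId = (long)maxId!;

            await Task.Delay(TimeSpan.FromSeconds(1), ct);
        }
    }
}
```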

REST philosophy for updating and getting records

In my app I'm displaying Race objects that essentially have three states: pending, inProgress and completed. I want to display all Races that are currently pending or inProgress, but not the ones that are completed. To do this, I want to create a RESTful API for getting these resources from my server, but I'm not sure what the best (i.e. most RESTful) approach would be.
The issue is that when someone opens or refreshes the app, I need to do two things:
Perform a GET on all the Races that are currently displayed in the client to update their status.
GET all of the new pending or inProgress Races that have been created since the client last updated.
I've come up with a few different solutions, though I don't know which, if any, would be best:
Simply delete the old Race records on the client and always GET all new records
Perform 2 separate GET operations: the first updates all the old records, and the second GETs all the new pending/inProgress Races
Perform a single GET operation where I specify the created date of the last client record, and GET all records that are newer.
To me, this seems like a pretty common scenario but I haven't been able to find a specific answer to this type of problem. I'd like to see what SO thinks :)
Thanks in advance for your help!
Simply delete the old Race records on the client and always GET all new records
This is probably the easiest solution. However, you shouldn't do that if you need very smooth updates on your client (for games, data visualization, etc.).
Perform 2 separate GET operations (...) / Perform a single GET operation where I specify the created date of the last client record, and GET all records that are newer.
I would definitely do it with a single operation. Rather than an update timestamp (timestamp operations are costly, and several operations could happen at the same time), I would use a sequence number; this is the way CouchDB handles "changes".
Moreover, as you will see in the documentation, this solution can then be upgraded to asynchronous notifications (if you need them).
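To illustrate, a minimal sketch of what that single operation could look like as an ASP.NET Core endpoint. The IRaceStore abstraction and its sequence-number bookkeeping are assumptions, not part of your question:

```csharp
using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;

public record Race(long Id, string Status, long Seq);

// Hypothetical store: assigns a monotonically increasing sequence number
// to every race insert/update.
public interface IRaceStore
{
    long CurrentSeq { get; }
    IEnumerable<Race> ChangedSince(long seq);
}

[ApiController]
[Route("races")]
public class RacesController : ControllerBase
{
    private readonly IRaceStore _store;

    public RacesController(IRaceStore store) => _store = store;

    // GET /races?since=42 returns every race created or changed after
    // sequence 42: one call refreshes the races the client already has
    // and delivers any new pending/inProgress ones. Completed races are
    // included so the client knows to drop them from its list.
    [HttpGet]
    public IActionResult GetChanges([FromQuery] long since = 0)
    {
        return Ok(new
        {
            lastSeq = _store.CurrentSeq,
            races = _store.ChangedSince(since)
        });
    }
}
```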

Is it safe to insert into the CRM database using SQL?

We need to insert data (8k records) into a CRM entity; the data will come from other CRM entities. Currently we do it through code, but it takes too much time (hours). I was wondering whether inserting directly into the CRM database with SQL would be a lot easier and take only minutes. But before moving forward I have a few questions:
1. Is it safe to insert directly into the CRM database using SQL?
2. What is the best practice for inserting data into CRM using SQL?
3. What things should I consider before trying it?
EDIT:
4. How do I increase the insert performance?
No, it is not. It is considered unsupported.
Don't do it
Rollup 12 was just released and contains a new API feature: there is now an ExecuteMultipleRequest, which can be used for batched bulk imports. See http://msdn.microsoft.com/en-us/library/jj863631.aspx
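For example, a batched import with ExecuteMultipleRequest might look roughly like this; the batch size of 200 and the error-handling settings are choices to adapt, not requirements:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

public static class BulkImporter
{
    // Send creates in batches instead of one service call per record.
    public static void BulkCreate(IOrganizationService service, IList<Entity> records)
    {
        const int batchSize = 200; // illustrative; tune for your environment

        for (int i = 0; i < records.Count; i += batchSize)
        {
            var request = new ExecuteMultipleRequest
            {
                Settings = new ExecuteMultipleSettings
                {
                    ContinueOnError = true,  // don't abort the batch on a single failure
                    ReturnResponses = false  // skip responses for better throughput
                },
                Requests = new OrganizationRequestCollection()
            };

            foreach (var record in records.Skip(i).Take(batchSize))
            {
                request.Requests.Add(new CreateRequest { Target = record });
            }

            service.Execute(request);
        }
    }
}
```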
It shouldn't take hours to insert 8,000 records. It would help to see your code, but here are some things to consider to improve performance:
- Reuse your IOrganizationService. I've found a 10x increase in performance by reusing an IOrganizationService rather than creating a new one for each record being updated.
- Use multi-threading. You have to be careful with this one, because it could lead to worse performance if the check for whether the entity exists is your bottleneck.
- Tweak your exists function. If the check for the entity existing is taking a long time, consider pulling back the entire table and storing it in memory (assuming it's not ridiculously huge). This would remove 8,000 separate select statements (see the sketch after this list).
- Turn off plugins that may be degrading performance. If you have any plugins registered on the entity, see if performance increases when you disable them during the import.
- Create a new "How do I increase the insert performance" question with your code posted for additional help.
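To illustrate the first and third suggestions, a sketch that reuses one IOrganizationService and replaces per-record exists checks with a single in-memory lookup; the entity and attribute names are placeholders:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class Importer
{
    public static void Import(IOrganizationService service, IEnumerable<Entity> incoming)
    {
        // One query up front instead of 8,000 individual "does it exist?" selects.
        // "new_myentity" / "new_externalid" are placeholder names.
        // Note: RetrieveMultiple pages at 5,000 rows by default; page through
        // the results for larger tables.
        var results = service.RetrieveMultiple(
            new QueryExpression("new_myentity")
            {
                ColumnSet = new ColumnSet("new_externalid")
            });

        var existing = new HashSet<string>(
            results.Entities.Select(e => e.GetAttributeValue<string>("new_externalid")));

        foreach (var record in incoming)
        {
            var key = record.GetAttributeValue<string>("new_externalid");
            if (!existing.Contains(key))
            {
                service.Create(record); // same service instance reused for every record
            }
        }
    }
}
```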
I have not used the CRM application you are referring to, but if you bypass the code, you might bypass certain restrictions or even triggers that the code has in place based on the values sent in.
For example, if you sent a number in through the code, it might perform some mathematical function on that number, add it to some other value, and end up storing two values in the database (one value for the number you entered, and another representing the total including the newly added one).
So if you had just inserted the one value straight into the database, the total wouldn't get updated with it.
That is just a hypothetical scenario. You may not run into that problem or any other, but there is the chance.
Well, I found this article very helpful. It says:
Direct SQL writes to the CRM database are not supported. The reason for this is that creating a record in the CRM database is much more than just an INSERT INTO … statement. The first step of optimizing is to understand what happens behind the scenes and can affect the speed:
1. CRM entities usually consist of 2 physical tables.
2. Cascade rules/sharing: if the created record has any relationships with cascade rules, the web service will handle the cascades automatically. For example, cascaded sharing will lead to additional records being created in the PrincipalObjectAccess table. In the case of one-time migrations, disabling the cascade rules while the migration runs can save a lot of time.
3. Record ownership: if you are inserting records, make sure you are setting the owner as an attribute for create, not as an additional owner assign request. Assigning the owner separately actually takes an extra operation per record.
4. Money/time: the web service handles currencies and time zones.
5. Workflows/plugins: if the system has any custom workflows and/or plugins, I strongly recommend pausing them for the duration of the migration.