Let's assume we have some configuration GUI that in its current form uses direct DB transactions to submit new configurations for more than one configurable component in a consistent manner.
Now let's move the data (DB) stuff behind some SOAP/WS API. The GUI has no direct DB access anymore. The transactional behaviour must remain, but the API should NOT be designed to explicitly accommodate the GUI form submissions. In fact, I don't even know how the new GUI will work or how the user input will be structured. Therefore I need to provide something like WS-AtomicTransaction on the API server side. However, there are (at least) two caveats:
The GUI is written in PHP: I don't think there is any WS-Transaction support available in PHP.
I don't want to keep DB transactions open on the server side while waiting for additional client requests.
Solutions I can think of:
using Camel's aggregation. However, that would make things more complicated in at least two ways:
You cannot use the DB row IDs of newly inserted rows in subsequent calls inside the same transaction. You need some sort of symbolic back-referencing, because there would be no communication between client and server while the aggregated messages are being processed.
call replies would not be immediate (or the immediate and separate reply to each single call would only be some sort of stub, i.e. not containing any useful information beyond "your message has been attached to TX xyz" -- if that's at all possible in the Camel aggregation case).
the two disadvantages of the previous solution make me think of request batches, where the WS-* standards might provide a means for referencing call results in subsequent calls inside the batch transaction. Is there any such thing already available? Maybe even with a PHP client?
trying to eliminate lock contention in the database by carefully using row-level locks etc. However, when inserting new elements, my guess is that usually pages and index pages need to be locked by the DB.
maybe some server-side persistence layer using optimistic locking? But again, that would not return any DB IDs to the client before the final commit if DB writes were postponed until the commit (I don't know if that's possible at all).
What do YOU think?
Transactions are a powerful tool, and we easily get into a thinking pattern in which we see every problem as a nail to hit with this big hammer. I can relate to your confusion because I've experienced it myself. Unfortunately, I have no better advice for you than to try not to think in terms of transactions but of atomic API calls.
When I think in terms of transactions, my thought pattern usually goes like this:
start transaction
read (repeat as required)
update (repeat as required)
commit/roll back
It takes some time to realize that we overuse this pattern. Actual conflicts are rare, and there are many other ways of dealing with them. Here is a commonly used one in APIs:
read and send data to client (atomic API call)
update data (on the client)
send original + updates back to the server (atomic API call)
start transaction (on server)
read
compare with original from client
if not same, return error (client should retry)
if same, update
commit
The last six points are part of the implementation of the API call.
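For illustration, here is a minimal sketch of those last six points in Java/JDBC. The component_config table, its single value column, and the method signature are assumptions invented for the example; the point is only the compare-then-update shape of the call.

```java
// Minimal sketch of the compare-then-update API call (Java/JDBC).
// Table name, column and method signature are illustrative assumptions.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ConfigUpdateHandler {

    private final DataSource dataSource;

    public ConfigUpdateHandler(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** One atomic API call: the client sends back the original value it read plus its new value. */
    public boolean updateConfig(long id, String originalValue, String newValue) throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);                                   // start transaction (server-side only)
            try (PreparedStatement read = con.prepareStatement(
                    "SELECT value FROM component_config WHERE id = ? FOR UPDATE")) {
                read.setLong(1, id);
                try (ResultSet rs = read.executeQuery()) {
                    if (!rs.next() || !rs.getString(1).equals(originalValue)) {
                        con.rollback();                                 // someone changed it in the meantime
                        return false;                                   // client should re-read and retry
                    }
                }
            }
            try (PreparedStatement update = con.prepareStatement(
                    "UPDATE component_config SET value = ? WHERE id = ?")) {
                update.setString(1, newValue);
                update.setLong(2, id);
                update.executeUpdate();
            }
            con.commit();                                               // the transaction never outlives this call
            return true;
        }
    }
}
```

If the table has a version or last-modified column, comparing that single value instead of the full original row is a common shortcut for the same check.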
Ferenc Mihaly
http://theamiableapi.com
Related
We have Apache Geode connected to Postgres using an AEQ + AsyncCacheListener configured to write data to Postgres. During the async write, we submit the list of events that we want to persist and it asynchronously inserts those events. Let's say I have two client applications which call processEvents for async writing, and both have some events in common which violate some key. But after a client calls processEvents, control is immediately returned to the client. In such cases, how will the client know if some issue occurred? What are the best practices to tackle this?
What do you mean by the events in common "violate some key"? Like a primary or foreign key constraint, or some other database constraint perhaps (e.g. uniqueness, non-null values, etc)?
Handling a conflict depends on the nature and importance of the data being inserted, or written to the backend (Postgres) database from Geode, and on its significance to the application from a requirements and business-logic point of view.
If 2 (or more) client applications are writing to the same cache/database entries/records, then certainly some type of collision will eventually occur, and how it is handled will depend on the data and the type of operation performed on the data.
In general, handling the violation closer to where and when the violation occurs (e.g. inside the AsyncEventListener itself) may be preferable or ideal, since then you should have most of the necessary information (e.g. the DataAccessException, the events, additional capabilities to query the DB) to deal with the situation.
Inside the AEQ Listener, you could employ different strategies depending on the data and the operation, as determined by the application (a rough sketch follows the list):
First update wins (enforced by optimistic locking)
Perform a merge
Log [failed] event(s)
Overwrite value(s) (last update wins).
...
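For illustration, here is a rough sketch of the "log [failed] event(s)" strategy inside the AEQ listener, assuming Spring's DataAccessException signals the key violation; the class name and the writeToPostgres/recordFailure helpers are placeholders for your own code, not Geode API.

```java
// Rough sketch of an AEQ listener that records events that violate a key
// and lets the rest of the batch proceed.
import java.util.List;
import org.apache.geode.cache.asyncqueue.AsyncEvent;
import org.apache.geode.cache.asyncqueue.AsyncEventListener;
import org.springframework.dao.DataAccessException;

public class PostgresWriteBehindListener implements AsyncEventListener {

    @Override
    public boolean processEvents(List<AsyncEvent> events) {
        for (AsyncEvent<?, ?> event : events) {
            try {
                writeToPostgres(event.getKey(), event.getDeserializedValue());
            } catch (DataAccessException e) {
                // Strategy picked here: record the failed event and keep going.
                // A merge, "first update wins" or "last update wins" policy
                // would go in this catch block instead.
                recordFailure(event, e);
            }
        }
        return true; // tell Geode the batch was processed so it is removed from the queue
    }

    @Override
    public void close() {
        // release any resources held by the listener (nothing in this sketch)
    }

    private void writeToPostgres(Object key, Object value) {
        // your INSERT/UPSERT via JdbcTemplate, plain JDBC, etc.
    }

    private void recordFailure(AsyncEvent<?, ?> event, Exception cause) {
        // e.g. write to a "FailedEvents" Region or an error table for later handling
    }
}
```

Returning true tells Geode the batch was handled; returning false leaves the batch in the queue for a retry, which is another legitimate choice when the failure looks transient.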
You could employ Geode to conflate events stored in the AEQ for the same key, which should minimize collisions/conflicts.
If the client (as in "client" in a client/server topology) needs to be informed, then you could write the failed events to another Region where a client registers a CQ to be notified when entries are written to this (failed events) Region. The client-side handler associated with the CQ could then take the appropriate action, such as notifying the end user, refreshing and then retrying the operation, and so on.
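A rough sketch of the client side of that idea, assuming a hypothetical /FailedEvents Region and a client pool with subscriptions enabled; the Region name, CQ name and the reaction inside onEvent are illustrative only.

```java
// Rough sketch: a client registers a continuous query (CQ) on a hypothetical
// /FailedEvents Region and reacts whenever a failed event is written there.
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.query.CqAttributesFactory;
import org.apache.geode.cache.query.CqEvent;
import org.apache.geode.cache.query.CqListener;
import org.apache.geode.cache.query.CqQuery;
import org.apache.geode.cache.query.QueryService;

public class FailedEventSubscriber {

    public void register(ClientCache clientCache) throws Exception {
        QueryService queryService = clientCache.getQueryService();

        CqAttributesFactory cqAttributes = new CqAttributesFactory();
        cqAttributes.addCqListener(new CqListener() {
            @Override
            public void onEvent(CqEvent cqEvent) {
                // e.g. notify the end user, refresh local state and retry the operation
                System.out.println("Write-behind failure for key " + cqEvent.getKey());
            }

            @Override
            public void onError(CqEvent cqEvent) {
                System.err.println("CQ error: " + cqEvent.getThrowable());
            }

            @Override
            public void close() { }
        });

        CqQuery cq = queryService.newCq("failedEvents",
                "SELECT * FROM /FailedEvents", cqAttributes.create());
        cq.execute();
    }
}
```

The server side would pair this with the recordFailure step above writing the offending events into that /FailedEvents Region.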
Given the async nature of the initial write, you can only respond asynchronously once the violation occurs. This is not unlike the Reactive world (namely, with onSuccess/onFailure event handlers).
So, in this situation, I don't think there really is a "best practice" per se, rather only "recommendations": for example, handle the situation as near to the actual occurrence of the violation as possible, since handling the violation usually involves having the necessary information readily available to make the best possible, informed decision on the right course of action.
Sometimes you can automate the recovery; other times you might need manual intervention. Most definitely, do not guess. Clearly document your application's/system's (configured) behavior: when it can handle a situation and when it cannot.
I don't think there is a general, one-size-fits-all solution in this case.
I hope this gives you some ideas to think about.
In my scenario, I connect my ABAP system to a non-ABAP-based system using an HTTP destination.
I want to implement caching in ABAP so that the performance of the application improves and I don't have to hit the backend every time.
I guess that in ABAP, caching can only be implemented by using shared memory.
https://help.sap.com/doc/abapdocu_751_index_htm/7.51/en-US/abenuse_shared_memory_guidl.htm
Is this correct?
I guess that by "buffering" you mean "loaded into ABAP memory and avoiding additional database roundtrips"?
If yes, I share your understanding that shared memory would be the only means to do that.
However, consider that on top of your database, you can have 1..n application servers, each of which can have 1..n work processes. Shared memory will allow you to buffer stuff across the work processes within one application server, but not across application servers.
If you take other means of "buffering" into account, such as aggregated views on otherwise slow-to-join data, you could get additional means by using appropriate database views or materializations.
Each time the ABAP AS gets an HTTP request, you get a new "session" (roll area). You are right that shared memory could be an option to implement a buffer. Another option could be to switch on table buffering in SE11.
But the typical way is really to always start from the database and read the data again.
To improve the performance of the application, you could try to reduce the calls from the HTTP side to the ABAP AS and implement more logic on the HTTP side.
Please also consider that the DBMS caches too. Maybe that alone could improve the response time in many configurations.
You could run all the SELECTs in ABAP first, then send all of your data in the HTTP request. You can process it on the other system and do whatever you want there. At the end, send the results back to ABAP and complete your task. That is the fast way.
It depends on what your service is doing.
If it accesses a table you can simply buffer the table itself:
From ABAP transaction SE11, go to Technical Settings -> Buffering switched on -> Fully buffered.
That should be enough to speed up your service.
I will have multiple computers on the same network with the same C# application running, connecting to a SQL database.
I am wondering if I need to use Service Broker to ensure that if I update record A in table B on Machine 1, the change is pushed to Machine 2. I have seen applications that need to use messaging servers to accomplish this before, but I was wondering why this is necessary; surely if they connect to the same database, any changes from one machine will be reflected on the other?
Thanks :)
This is mostly about consistency and latency.
If your applications always perform atomic operations on the database, and they always read whatever they need with no caching, everything will be consistent.
In practice, this is seldom the case. There are plenty of hidden opportunities for caching, like when you have an edit form: it has the values the entity had before you started the edit process, but what if someone modified those in the meantime? You'd just overwrite their changes with your data.
Solving this involves a bunch of architectural decisions. Different scenarios require different approaches.
Once data is committed in the database, everyone reading it will see the same thing - but only if they actually get around to reading it, and the two reads aren't separated by another commit.
Update notifications are mostly concerned with invalidating caches, and perhaps some push-style processing (e.g. an IM client might show you a popup saying you got a new message). However, SQL Server notifications are not reliable - there is no guarantee that you'll get the notification, and even less so that you'll get it in time. This means that to ensure consistency, you must not depend on the cached data, and you have to force an invalidation once in a while anyway, even if you didn't get a change notification.
Remember, even if you're actually using a database that's close enough to ACID, it's usually not the default setting (for performance and availability, mostly). You need to understand what kind of guarantees you're getting, and how to write code to handle this. Even the most perfect ACID database isn't going to help your consistency if your application introduces those inconsistencies :)
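To make the "don't depend on the cached data" point concrete, here is a tiny illustrative sketch (in Java, even though the question is about C#): each cache entry records when it was loaded and is considered usable only up to a maximum age, so a missed change notification merely delays freshness instead of leaving the cache stale forever. The class and method names are made up.

```java
// Illustrative only: a cache entry that expires on its own, so a lost change
// notification never leaves the application reading stale data indefinitely.
final class CachedValue<T> {
    private final T value;
    private final long loadedAtMillis = System.currentTimeMillis();

    CachedValue(T value) {
        this.value = value;
    }

    /** The entry is usable only while it is younger than maxAgeMillis. */
    boolean isFresh(long maxAgeMillis) {
        return System.currentTimeMillis() - loadedAtMillis < maxAgeMillis;
    }

    T value() {
        return value;
    }
}
```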
I have a data conversion and caching service running as a self-hosted WCF service.
Right now it uses database polling at constant, short intervals to update its data.
I think that's unnecessary. The data can change only if one of the tables changes, and when the data changes depends on system users' actions.
There is no problem in setting a trigger for the specific tables; however, I would need an action outside SQL Server to update my cache. My WCF service could perform the update when it receives a specific URI via HTTP. So all I need is a command in the table trigger that would send such a request. Is that even possible?
I'm thinking about a hack I used back in the day with HTTP requests: I halted the HTTP response at the server until a data packet from somewhere else arrived. There was no delay between polling requests, and I achieved fully asynchronous, "real-time" updates.
Maybe this approach can be applied to SQL? I'm thinking about a query which blocks until it receives a signal. Well, it eventually times out, but it's good enough to try. Then - how to signal and wait in SQL? By locking and unlocking a shared resource, like a cursor or a dummy table?
Any other options?
I need the cache update done at the lowest possible frequency (because it's pretty expensive, so once per minute is great), but I need an immediate update when the data is changed.
To answer your question, have you looked at xp_cmdshell?
https://msdn.microsoft.com/en-us/library/ms175046.aspx
However, the security/performance implications of such a decision could be non-trivial depending on your use case.
I'm creating a RESTful web service (in Golang) which pulls a set of rows from the database and returns it to a client (smartphone app or web application). The service needs to be able to provide paging. The only problem is this data is sorted on a regularly changing "computed" column (for example, the number of "thumbs up" or "thumbs down" a piece of content on a website has), so rows can jump around page numbers in between a client's request.
I've looked at a few PostgreSQL features that I could potentially use to help me solve this problem, but nothing really seems to be a very good solution.
Materialized Views: to hold "stale" data which is only updated every once in a while. This doesn't really solve the problem, as the data would still jump around if the user happens to be paging through the data when the Materialized View is updated.
Cursors: created for each client session and held between requests. This seems like it would be a nightmare if there are a lot of concurrent sessions at once (which there will be).
Does anybody have any suggestions on how to handle this, either on the client side or database side? Is there anything I can really do, or is an issue such as this normally just remedied by the clients consuming the data?
Edit: I should mention that the smartphone app allows users to view more pieces of data through "infinite scrolling", so it keeps track of its own list of data client-side.
This is a problem without a perfectly satisfactory solution because you're trying to combine essentially incompatible requirements:
Send only the required amount of data to the client on-demand, i.e. you can't download the whole dataset then paginate it client-side.
Minimise the amount of per-client state that the server must keep track of, for scalability with large numbers of clients.
Maintain different state for each client
This is a "pick any two" kind of situation. You have to compromise; accept that you can't keep each client's pagination state exactly right, accept that you have to download a big data set to the client, or accept that you have to use a huge amount of server resources to maintain client state.
There are variations within those that mix the various compromises, but that's what it all boils down to.
For example, some people will send the client some extra data, enough to satisfy most client requirements. If the client exceeds that, then it gets broken pagination.
Some systems will cache client state for a short period (with short-lived unlogged tables, tempfiles, or whatever), but expire it quickly, so if the client isn't constantly asking for fresh data, it gets broken pagination.
Etc.
See also:
How to provide an API client with 1,000,000 database results?
Using "Cursors" for paging in PostgreSQL
Iterate over large external postgres db, manipulate rows, write output to rails postgres db
offset/limit performance optimization
If PostgreSQL count(*) is always slow how to paginate complex queries?
How to return sample row from database one by one
I'd probably implement a hybrid solution of some form, like:
Using a cursor, read and immediately send the first part of the data to the client.
Immediately fetch enough extra data from the cursor to satisfy 99% of clients' requirements. Store it to a fast, unsafe cache like memcached, Redis, BigMemory, EHCache, whatever under a key that'll let me retrieve it for later requests by the same client. Then close the cursor to free the DB resources.
Expire the cache on a least-recently-used basis, so if the client doesn't keep reading fast enough they have to go get a fresh set of data from the DB, and the pagination changes.
If the client wants more results than the vast majority of its peers, pagination will change at some point as you switch to reading direct from the DB rather than the cache or generate a new bigger cached dataset.
That way most clients won't notice pagination issues and you don't have to send vast amounts of data to most clients, but you won't melt your DB server. However, you need a big boofy cache to get away with this. Its practicality depends on whether your clients can cope with pagination breaking - if it's simply not acceptable to break pagination, then you're stuck with doing it DB-side with cursors, temp tables, copying the whole result set at the first request, etc. It also depends on the data set size and how much data each client usually requires.
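A rough sketch of that hybrid (in Java for illustration, although the question is about Go): the first request for a client runs one short-lived query that fetches the requested page plus a prefetch window, the window is cached under a per-client key, and later page requests are served from that snapshot until the client outruns it. The in-memory map stands in for memcached/Redis (no expiry shown), and the table/column names are made up.

```java
// Rough sketch of the hybrid approach: one short-lived DB read per client
// fills a prefetch window that later page requests are served from.
// The ConcurrentHashMap stands in for memcached/Redis; eviction/expiry is omitted.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import javax.sql.DataSource;

public class SnapshotPager {

    private static final int PREFETCH_ROWS = 500;   // enough for "99% of clients"

    private final DataSource dataSource;
    private final ConcurrentHashMap<String, List<String>> cache = new ConcurrentHashMap<>();

    public SnapshotPager(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Returns one page for this client, reading the DB only when no snapshot is cached. */
    public List<String> getPage(String clientId, int page, int pageSize) throws SQLException {
        List<String> snapshot = cache.get(clientId);
        if (snapshot == null) {
            snapshot = loadSnapshot();                 // single short-lived query/cursor
            cache.put(clientId, snapshot);
        }
        int from = Math.min(page * pageSize, snapshot.size());
        int to = Math.min(from + pageSize, snapshot.size());
        List<String> result = new ArrayList<>(snapshot.subList(from, to));
        if (to == snapshot.size()) {
            // Client reached the end of the prefetch window; a real implementation
            // would now fall back to the DB or build a bigger snapshot, and the
            // pagination may shift at this point.
            cache.remove(clientId);
        }
        return result;
    }

    private List<String> loadSnapshot() throws SQLException {
        List<String> rows = new ArrayList<>();
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "SELECT id FROM content ORDER BY thumbs_up DESC LIMIT ?")) {
            ps.setInt(1, PREFETCH_ROWS);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    rows.add(rs.getString(1));
                }
            }
        }
        return rows;
    }
}
```

A real implementation would add the least-recently-used expiry mentioned above so abandoned snapshots don't pile up.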
I am not aware of a perfect solution for this problem. But if you want the user to have a stale view of the data, then a cursor is the way to go. The only tuning you can do is to store only the data for the first two pages in the cursor; beyond that, you fetch it again.