I keep getting this response whenever I try to call SessionCreateRQ
<soap-env:Fault>
    <faultcode>soap-env:Client.ReachedTALimit</faultcode>
    <faultstring>You have reached the limit of Host TAs allocated to you</faultstring>
    <detail>
        <StackTrace>com.sabre.universalservices.base.exception.ApplicationICEException: errors.authentication.USG_RESOURCE_UNAVAILABLE</StackTrace>
    </detail>
</soap-env:Fault>
How can I keep track of open sessions? Is there a way to terminate unused active session tokens if I don't have those tokens?
The EPR you use in SessionCreateRQ is associated with a pool of connections (similar in concept to a database connection pool). Sabre support can tell you the maximum size of that pool. Once you have the maximum number of concurrent sessions active, calling SessionCreateRQ returns the error you are getting.
SessionCloseRQ releases a connection back to the TA pool, or connections are automatically released after 15 minutes of inactivity. If you are sharing the same pool with other EPRs (or the same EPR in different applications) and you don't have access to those session tokens, there is not much you can do to free up connections in your TA pool other than wait for those sessions to either close (via the other application calling SessionCloseRQ) or time out.
There are a few ways to keep track of open sessions, all related to connection pooling. I've seen a database table used for this purpose. A SessionCreateRQ wrapper service checks whether there are any existing unused tokens in the table. If so, an existing token is returned; otherwise the Sabre SessionCreateRQ service is called to create a new token, which is then inserted into the table. A SessionCloseRQ wrapper service marks that token as free in the table, without calling the underlying Sabre SessionCloseRQ service.
That's the high-level concept; there are other implementation details to consider, such as Sabre transactions that might be associated with sessions if you are going to reuse them, and handling free tokens that have timed out after 15 minutes and need to be removed from the table. Having that database table gives you visibility of all the session tokens you have, in use or free, and lets you manage the size of the connection pool.
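A minimal sketch of that wrapper idea, with an in-process queue standing in for the database table and a delegate standing in for the real SessionCreateRQ call (every name here is illustrative, not a Sabre API):

using System;
using System.Collections.Concurrent;

// Hypothetical sketch of the "wrapper service" idea described above.
// The queue stands in for the database table of free tokens, and createToken
// stands in for the real SessionCreateRQ call; adapt both to your setup.
public class SessionPoolManager
{
    private readonly ConcurrentQueue<(string Token, DateTime FreedAt)> freeTokens =
        new ConcurrentQueue<(string, DateTime)>();
    private readonly Func<string> createToken;                        // real SessionCreateRQ
    private static readonly TimeSpan IdleTimeout = TimeSpan.FromMinutes(15);

    public SessionPoolManager(Func<string> createToken) => this.createToken = createToken;

    // "SessionCreateRQ wrapper": reuse a free token if one exists and has not
    // sat idle past the 15-minute host timeout; otherwise create a new one.
    public string GetSession()
    {
        while (freeTokens.TryDequeue(out var entry))
        {
            if (DateTime.UtcNow - entry.FreedAt < IdleTimeout)
                return entry.Token;      // still valid, hand it out
            // otherwise the token has likely expired on the host: drop it and keep looking
        }
        return createToken();            // no free token, call the real service
    }

    // "SessionCloseRQ wrapper": mark the token free instead of closing it,
    // so the host session stays warm for the next caller.
    public void ReleaseSession(string token) =>
        freeTokens.Enqueue((token, DateTime.UtcNow));
}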
You have reached the maximum number of open sessions for your credential.
You now have to close unused sessions to get back under the open-session limit.
To avoid this situation, build a session manager or explicitly manage the opening and closing of each session in your workflow.
If you are having this issue with the BargainFinderMaxRQ or AdvancedAirshoppingRQ services, then I suggest using the TokenCreateRQ service for flight availability.
Sessions created with TokenCreateRQ are managed by Sabre, which leaves your SessionCreateRQ sessions free for booking creation, ticket issuance, and so on.
Currently I am using Aerospike in my application.
I faced lots of timeout issues, as shown below, when I was creating a new Java client for each transaction and not closing it, so the number of connections ramped up dramatically.
Aerospike Error: (9) Client timeout: timeout=1000 iterations=1 failedNodes=0 failedConns=0
To resolve this timeout issue I didn't make any changes to the client, read or write policies; I just created a single client, stored its instance in a variable, and used that same client for all transactions (get or put requests).
Now I want to understand how moving from multiple clients to one client resolved my timeout issue, and why those connections were not being closed automatically.
The AerospikeClient constructor requests peers, partition maps and racks for all nodes in the cluster and initializes connection pools and async eventloops. This is an expensive process that is only meant to be performed once per cluster at application startup. AerospikeClient is thread-safe, so instances can be shared between threads.
If AerospikeClient close() is not called, connections residing in the pools (at least one connection pool per node) will not be closed. There are no finalize() methods in AerospikeClient.
The first transaction(s) usually need to create new connections. This adds to the latency and can cause timeouts.
The client does more than just the application's transactions. It also monitors the cluster for changes so that it can maintain one hop per transaction. Also, I believe when we initialize the client, we create an initial pool of sockets.
It is expected that most apps would only need one global client.
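A minimal sketch of the one-client-per-cluster pattern, shown here with the Aerospike C# client for consistency with the rest of this page (the Java client is used the same way); the host, port, namespace, set and bin names are placeholders:

using Aerospike.Client;

// One client per cluster, created once at startup and shared everywhere;
// AerospikeClient is thread-safe, so this is safe to use from many threads.
public static class Cache
{
    public static readonly AerospikeClient Client = new AerospikeClient("127.0.0.1", 3000);
}

public class Repository
{
    public void Save(string id, string value)
    {
        var key = new Key("test", "demo", id);                     // namespace, set, user key
        Cache.Client.Put(null, key, new Bin("value", value));      // null = default write policy
    }

    public string Load(string id)
    {
        var record = Cache.Client.Get(null, new Key("test", "demo", id));
        return record?.GetString("value");
    }
}

// At application shutdown (and only then), release the pooled connections:
// Cache.Client.Close();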
I'm working on an ASP.NET MVC application which uses EF 6.x to work with my Azure SQL database. Recently, with increased load, the app started getting into a state where it is unable to communicate with the SQL server any more. I can see that there are 100 active connections to my database using exec sp_who, and no new connection can be created, with the following error:
System.Data.Entity.Core.EntityException: The underlying provider
failed on Open. ---> System.InvalidOperationException: Timeout
expired. The timeout period elapsed prior to obtaining a connection
from the pool. This may have occurred because of all pooled connections
were in use and max pool size was reached.
Most of the time the app works with an average active connection count of 10 to 20, and load doesn't change this number... Even when load is high it stays at 10-20. But in certain situations it can jump to 100 in less than a minute, without any ramp-up time, and that puts the app in a state where all my requests fail. All 100 connections are in a sleeping state, awaiting command.
The good part is that I found a workaround which helps me mitigate the issue: clearing the connection pool from the client side. I'm using SqlConnection.ClearAllPools(), and it instantly closes all the connections; sp_who shows my regular 10-20 connections after that.
The bad part is that I still don't know the root cause.
Just to clarify, the app load is about 200-300 concurrent users, which generate 1000 requests per minute.
With the great suggestion from @DavidBrowne to track leaked connections with a simple pattern, I was able to find leaked connections while configuring the Owin engine:
private void ConfigureOAuthTokenGeneration(IAppBuilder app)
{
// here, in the Create method, I'm also creating a connection leak tracker
app.CreatePerOwinContext(() => MyCoolDb.Create());
...
}
Basically, with every request Owin creates a connection and doesn't release it, so when the Web API load increases I run into trouble.
Could this be the real cause, and is Owin smart enough to lazily create a connection only when needed (using the factory function provided) and release it once it has been used?
It's very unlikely that this is caused by anything other than your application code leaking connections.
Here's a helper library you can use to track when a connection is leaked, and report the call site that initially opened the connection.
http://ssbwcf.codeplex.com/SourceControl/latest#SsbTransportChannel/SqlConnectionLifetimeTracker.cs
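In case that link goes away, the general pattern is roughly the following (a sketch only, not the linked library's actual code; TrackedConnection is a made-up name): capture the call site when a connection is handed out, and complain from the finalizer if nobody ever disposed it.

using System;
using System.Data.SqlClient;
using System.Diagnostics;

// Wraps a SqlConnection and remembers who opened it, so a leak can be traced
// back to its call site instead of just showing up as pool exhaustion.
public sealed class TrackedConnection : IDisposable
{
    private readonly SqlConnection connection;
    private readonly string openedAt = Environment.StackTrace;   // call site that created it
    private bool disposed;

    public TrackedConnection(string connectionString)
    {
        connection = new SqlConnection(connectionString);
        connection.Open();
    }

    public SqlConnection Connection => connection;

    public void Dispose()
    {
        disposed = true;
        connection.Dispose();
        GC.SuppressFinalize(this);
    }

    ~TrackedConnection()
    {
        // Finalizer only runs if Dispose was never called: that is a leak.
        if (!disposed)
            Trace.TraceWarning("Leaked SqlConnection, opened at: " + openedAt);
    }
}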
In a few words, if I am not wrong, a session is used when I want to ensure that messages are sent in order, and to be able to use sessions a reliable connection is needed.
But my doubt is: what kind of applications need that? In my case it is a simple application in which a client requests data from a service, the service gets the data from the database and sends the results back to the client. The client can also ask to add, modify or delete data in the database. In this case, do I need a reliable connection and sessions or not?
Thanks.
A session presumes that you want to retain some data for a period of time. That period, as far as a session is concerned, is the client's lifecycle: when the client opens the proxy, both the service instance and the session are created; when the client closes the proxy, the service instance and the session end. The exception is that closing the proxy does not always take effect right away, which happens when you invoke a one-way operation: the service keeps working as long as the operation is running, even though it has already been told to get rid of the instance.
Based on the information provided, I assume the best choice would be PerCall. You do not store any data between calls, and every single call can be treated separately. Additionally, set ConcurrencyMode to Multiple so that service instances can run simultaneously.
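A minimal sketch of that recommendation (IDataService and DataService are placeholder names for your own contract and implementation):

using System.ServiceModel;

// A stateless, per-call service: no session, no per-client state.
[ServiceContract]
public interface IDataService
{
    [OperationContract]
    string GetData(int id);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class DataService : IDataService
{
    // A new service instance is created for every call, so there is no
    // per-client state to protect and no session to manage.
    public string GetData(int id)
    {
        return "row " + id;   // fetch from the database in a real implementation
    }
}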
Personally, I find sessions useful with MSMQ, whenever I want a specific number of messages to be wrapped into a single queue message. If an error occurs, regardless of which message caused it, the whole queue message is rolled back.
This service is a remote session pool. I need to ask it for a session in order to work with other services. In most cases the pool will have a session available, so I will have a response in 15 ms. But sometimes it will need to create a session on demand, which takes up to 800 ms.
I have two options in mind to handle this situation:
Set a 15 ms timeout and implement a retry policy with exponential back-off up to 800 ms. The service will create the required session whether or not I stay connected to it.
Set an 800 ms timeout and stay connected to the service until a session is available for me.
In both cases, there's no guarantee that I will have a session after 800ms.
So the question is: Which are the pro/cons for each option?
1. Set a 15 ms timeout and implement a retry policy with exponential back-off up to 800 ms; the service will create the required session whether or not I stay connected to it (see the sketch at the end of this answer).
Pro
You detect immediately that a session is not available; you don't need to wait almost a second to find out.
It's up to the client to request the session again or take another path, so you have more flexibility for different use cases.
You can single out the undesired event of waiting more than 15 ms for a session by reporting each time the fallback strategy kicks in, which is useful for detecting abnormal session-pool behaviour.
Cons
The code is more complex because of the fallback behaviour.
More parameters, because of the different timeouts.
2. Set an 800 ms timeout and keep connected to the service until a session is available.
Pro
Simple and straight-forward implementation
Simple parametrization
Cons
You can't detect delays in session creation on the session-pool side. This matters for tracing and diagnostics: the simple approach could hide session-pool problems.
A less flexible implementation for different clients' use cases.
I think the decision driver is whether you need a solution that just works for this use case, or whether this approach will be used by different clients and for different use cases.
PS: If you need a solution for different clients, it may be worth creating a richer protocol, like:
// just takes a session if available, no more than 15msec delay expected
get_session(...) : session
// if not available, creates one
get_session_or_create(...) : session
available_sessions(...) : int
// between 0 and 1, the proportion of available sessions
availability(...) : double
...
It's up to the client how to use it.
And over-dimension the timeout parameters by some safe margin, depending on the variance of the session-creation delay.
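A minimal sketch of the option 1 loop referenced above (AcquireAsync, getSession and the timeout defaults are all placeholders to be tuned to the real pool; getSession is expected to honour the cancellation token it is given):

using System;
using System.Threading;
using System.Threading.Tasks;

// Option 1: short per-attempt timeout plus exponential back-off, capped so the
// whole loop gives up after roughly the slow-path budget (800 ms).
public static class SessionRetry
{
    public static async Task<string> AcquireAsync(
        Func<CancellationToken, Task<string>> getSession,
        int attemptTimeoutMs = 15,
        int overallBudgetMs = 800)
    {
        var started = DateTime.UtcNow;
        var delayMs = attemptTimeoutMs;

        while (true)
        {
            using (var cts = new CancellationTokenSource(attemptTimeoutMs))
            {
                try
                {
                    return await getSession(cts.Token);   // fast path: the pool had a session
                }
                catch (OperationCanceledException)
                {
                    // No session within the attempt timeout: report it here if you want a
                    // pool-health signal, then fall through to the back-off.
                }
            }

            var elapsed = (DateTime.UtcNow - started).TotalMilliseconds;
            if (elapsed + delayMs > overallBudgetMs)
                throw new TimeoutException("No session available within " + overallBudgetMs + " ms");

            await Task.Delay(delayMs);
            delayMs *= 2;                                 // exponential back-off
        }
    }
}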
I have a client application that uses a WCF service to insert data into a backend database. The client application calls the service on a per-event basis (which can be every hour or every second).
I'm wondering what's the best way of calling that service.
Should I create a communication channel and keep it open all the time, or should I close the channel after each call and create it again?
The first question is whether your server needs to maintain any state about the client directly (i.e. are you doing session-like transactions?) If you are, you will need to be able to manage how the server holds the information between communications.
My initial feeling of your question is that if there is no need to leave a connection open, then close it each time and recreate a new connection on demand. This will avoid issues where a connection can be placed into a faulted state between calls. The overhead of creating and destroying connections is minimal, and it will (probably) save you a lot of time in debugging when something goes wrong.
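A sketch of the close-per-call approach (IInsertService and the "InsertEndpoint" configuration name are placeholders). Note that the ChannelFactory is the expensive piece to build, so it is the thing worth keeping around, not the channel itself.

using System;
using System.ServiceModel;

public class InsertClient
{
    // The ChannelFactory is the expensive part; create it once and reuse it.
    private readonly ChannelFactory<IInsertService> factory =
        new ChannelFactory<IInsertService>("InsertEndpoint");

    public void Insert(string data)
    {
        var proxy = factory.CreateChannel();              // cheap, done per call
        var channel = (IClientChannel)proxy;
        try
        {
            proxy.Insert(data);
            channel.Close();                              // graceful close on success
        }
        catch
        {
            channel.Abort();                              // never Close a faulted channel
            throw;
        }
    }
}

[ServiceContract]
public interface IInsertService
{
    [OperationContract]
    void Insert(string data);
}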
I would think you probably want to implement a keep-alive pattern, with a configurable duration that tells your underlying mechanism to close the connection once that keep-alive duration has passed with zero communication activity.
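One hedged way to read that suggestion: reuse a single channel but close it after an idle window (IInsertService is the same placeholder contract as in the previous sketch; the 60-second window and class name are arbitrary):

using System;
using System.ServiceModel;
using System.Threading;

// Reuses one channel, but closes it after a configurable idle period with no calls.
public class KeepAliveInsertClient : IDisposable
{
    private readonly ChannelFactory<IInsertService> factory =
        new ChannelFactory<IInsertService>("InsertEndpoint");
    private readonly TimeSpan idleTimeout = TimeSpan.FromSeconds(60);
    private readonly object gate = new object();
    private IInsertService proxy;
    private Timer idleTimer;

    public void Insert(string data)
    {
        lock (gate)
        {
            if (proxy == null)
                proxy = factory.CreateChannel();          // (re)open on demand

            try
            {
                proxy.Insert(data);
            }
            catch
            {
                ((IClientChannel)proxy).Abort();          // faulted channel: discard it
                proxy = null;
                throw;
            }

            // Reset the idle clock: close the channel if nothing happens for idleTimeout.
            idleTimer?.Dispose();
            idleTimer = new Timer(_ => CloseIdleChannel(), null, idleTimeout, Timeout.InfiniteTimeSpan);
        }
    }

    private void CloseIdleChannel()
    {
        lock (gate)
        {
            if (proxy == null) return;
            try { ((IClientChannel)proxy).Close(); }
            catch { ((IClientChannel)proxy).Abort(); }
            proxy = null;
        }
    }

    public void Dispose()
    {
        CloseIdleChannel();
        idleTimer?.Dispose();
        factory.Close();
    }
}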