WebLogic 12c datasource going into suspended mode

I have a database with two IPs, 10.x.x.41 and 10.x.x.42. Both IPs are active, and at any time one of them can go down. So in WebLogic we created two generic data sources (pool1 and pool2) and put them into one multi data source with Failover as the algorithm type. The application works fine for a while, but after some time one of the data sources goes into suspended mode, and later the other one does as well, at which point the application stops. Once a data source is in suspended mode it does not come back to the running state, even when the database is up and available again.
Is there any specific configuration to bring a data source back to the running state automatically once it has been suspended?

A WebLogic data source goes into suspended mode when "Test Connections On Reserve" is enabled and two successive connection tests fail.
To avoid this from happening, set a higher value for "Test Frequency" and a higher value for "Seconds to Trust an Idle Pool Connection". This should then cover any network glitches you may encounter.
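These settings typically live in the data source descriptor (config/jdbc/<name>-jdbc.xml), or under the Connection Pool > Advanced section for the data source in the console. A minimal, illustrative fragment (example values only):
<jdbc-connection-pool-params>
    <!-- statement or table used by the connection test -->
    <test-table-name>SQL SELECT 1 FROM DUAL</test-table-name>
    <!-- test each connection before handing it to the application -->
    <test-connections-on-reserve>true</test-connections-on-reserve>
    <!-- periodic background test of unused connections, in seconds -->
    <test-frequency-seconds>300</test-frequency-seconds>
    <!-- skip the reserve test if the connection was used within the last N seconds -->
    <seconds-to-trust-an-idle-pool-connection>60</seconds-to-trust-an-idle-pool-connection>
</jdbc-connection-pool-params>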

Related

When I start my SAP MMC EC6 server, one service is not getting to wait mode

Can someone help me with how to make the service selected in the image get into wait mode after starting the server?
Please let me know if a developer trace is required to be posted for resolving this issue.
That particular process is a BATCH process, a work process that runs scheduled background jobs (maintained via transactions SM36/SM37). If the process is busy right after starting the server, that means there were scheduled jobs with status 'released' waiting for execution, and as soon as the server was up, it started those jobs.
If you want to make sure the system doesn't immediately start released background jobs, you'll have to set their status back to 'scheduled' (which, thanks to a bit of weird translation, means they won't be executed because they are not released).
If you want to start the server without having a chance to first change the job status in SM37, you would either have to reset the status at database level (likely not officially supported by SAP), or first start the server without any BATCH processes (which would give you a number of great big warning messages upon login), change the job status, and then restart the server with the BATCH processes. You can set the number of processes of each type in the profile of your instance (parameter rdisp/wp_no_btc).
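For illustration, the relevant line in the instance profile would look something like this (the parameter name is the one mentioned above; setting it to 0 would start the instance with no background work processes at all):
rdisp/wp_no_btc = 0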

Akka.net Cluster Debugging

The title is a bit misleading, so let me explain further.
I have a non-thread-safe DLL that I have no choice but to use as part of my back-end servers. I can't use it directly in my servers, as its threading issues cause it to crash. So I created an Akka.NET cluster of N nodes, each of which hosts a single actor. All of my API calls that originally went to that bad DLL are now routed as messages to these nodes through a round-robin group. As each node only has a single, single-threaded actor, I get safe access, but as I have N of them running I get parallelism, of a sort.
In production, I have things configured with auto-down = false and default timings on heartbeats
and so on. This works perfectly. I can fire up new nodes as needed, they get added to the group,
I can remove them with Cluster.Leave and that is happy as well.
My issue is with debugging. In our development environment we keep a cluster of 20 nodes, each exposing a single actor as described above that wraps this DLL. We also have a set of nodes that act as
seed nodes and do nothing else.
When our application is run it joins the cluster. This allows it to direct requests through the round-robin
router to the nodes we keep up in our cluster. When developing, testing, and debugging the app, if I configure things with auto-down = false, we end up with problems whenever a test run crashes or we stop the application without going through the proper cluster-leaving logic, such as when we terminate the app with the stop button in the debugger. Without auto-down, this leaves us with a missing member of the cluster, which causes the leader to disallow additions to the cluster. This means that the next time I run the app to debug, I can't join the cluster and am stuck.
It seems that I have to have auto-down set to get debugging to work. If it is set, then when I crash my app
the node is removed from the cluster 5 seconds later. When I next fire up my
app, the cluster is back in a happy state and I can join just fine.
The problem with this is that if I am debugging the application and pause it for any amount of time, it is almost immediately
seen as unreachable and then 5 seconds later is thrown out of the cluster. Basically, I can't debug with these settings.
So, I set failure-detector.acceptable-heartbeat-pause = 600s to give me more time to pause the app while debugging. I will get shut down after 10 minutes, but I don't often sit in the debugger for that long, so it's an acceptable trade-off. The issue with this is, of course, that when I crash the app, or stop it in the debugger, the cluster thinks it
exists for the next 10 minutes. No one tries to talk to these nodes directly, so in theory that isn't a huge issue, but I keep
running into cases where the test I just ran got itself elected as role leader. So the role leader is now dead, but the cluster
doesn't know it yet. This seems to prevent me from joining anything new to the cluster until my 10 min are up. When I try to leave
the cluster nicely, my dead node gets stuck at the exiting state and doesn't get removed for 10 minutes. And I don't always get
notified of the removal either, forcing me to set a timeout on leaving that will cause it to give up.
There doesn't seem to be any way to say "never let me be the leader". When I run the app with no role set for the cluster, it often gets itself elected as the cluster leader, causing the same problem as when the role leader is dead but not known to be, only at a larger level.
So, I don't really see any way around this, but maybe someone has some tricks to pull this off. I want to be able to debug
my cluster member without it being thrown out of the cluster, but I also don't want the cluster to think that leader nodes
are around when they aren't, preventing me from rejoining during my next attempt.
Any ideas?
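For reference, the debug-time settings described in this question, written out as HOCON (the exact key names vary between Akka.NET versions: older releases use akka.cluster.auto-down, newer ones use auto-down-unreachable-after; the values are just the ones mentioned above):
akka.cluster {
  # tolerate long pauses under the debugger before a node is marked unreachable
  failure-detector.acceptable-heartbeat-pause = 600s
  # in the debug-only configuration, automatically down crashed members after a few seconds
  auto-down-unreachable-after = 5s
}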

In a WAS Liberty connection pool, can I validate connections on borrow?

We are currently migrating an application to run on a Liberty server (8.5.5.9). We have found that connections between the app server and the database are occasionally terminated by the firewall for being idle for an extended period of time. When this happens, on the next HTTP request the application receives one of these broken connections.
Previously, we had been using Apache Commons DBCP to manage the connection pool. One of the configuration parameters of a DBCP connection pool is "testOnBorrow", which prevents the application from being handed one of these bad connections.
Is there such a configuration parameter in a Liberty-managed datasource?
So far, we have configured our datasource like this:
<dataSource jndiName="jdbc/ora" type="javax.sql.DataSource">
<properties.oracle
user="example" password="{xor}AbCdEfGh123="
URL="jdbc:oracle:thin:#example.com:1521:mydb"
/>
<connectionManager
minPoolSize="3" maxPoolSize="10" maxIdleTime="10m"
purgePolicy="ValidateAllConnections"
/>
<jdbcDriver id="oracle-driver" libraryRef="oracle-libs"/>
</dataSource>
The purgePolicy is currently set to validate all connections if one bad one is found (e.g., overnight, when all connections have been idle for a long time). But all this does is prevent multiple bad connections from being handed to the application one after another.
One option in the connectionManager would be to set agedTimeout="20m" to automatically remove connections that are old enough to have already been terminated by the firewall. However, this would also remove connections that have been used recently (which is what keeps the firewall from breaking them).
Am I missing something obvious here?
Thanks!
In this scenario I would recommend using maxIdleTime, which you are already using, but reduce your minPoolSize to 0 (or remove it, since the default value is 0).
Per the maxIdleTime doc:
maxIdleTime: Amount of time after which an unused or idle connection can be discarded during pool maintenance, if doing so does not reduce the pool below the minimum size.
Since you have minPoolSize=3, pool maintenance won't kick in if there are, for example, only three bad connections in the pool, because the maintenance thread won't take the pool size below the minimum according to the doc. So setting minPoolSize=0 should allow maxIdleTime to clean up all of the bad connections as you would expect in this scenario.
So here is the final configuration that I would suggest for you:
<dataSource jndiName="jdbc/ora" type="javax.sql.DataSource">
<properties.oracle user="example" password="{xor}AbCdEfGh123="
URL="jdbc:oracle:thin:#example.com:1521:mydb"/>
<connectionManager maxPoolSize="10" maxIdleTime="18m"/>
<jdbcDriver id="oracle-driver" libraryRef="oracle-libs"/>
</dataSource>
The value of maxIdleTime assumes that your firewall kills connections after 20 minutes; cleanup is triggered after 18 minutes to give the cleanup thread a 2-minute window to remove the soon-to-be-bad connections.
It's an old question, but this may be useful to someone else:
You can use the "validationTimeout" property of "dataSource". According to the documentation, "when specified, pooled connections are validated before being reused from the connection pool."
This will not close the connections as soon as they are cut by the firewall, but it will prevent the application from failing because of a stale connection.
You can then combine this with purgePolicy="ValidateAllConnections" to revalidate all connections as soon as one is detected as stale.
Reference : https://openliberty.io/docs/21.0.0.1/reference/config/dataSource.html#dataSource
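As a rough sketch, building on the data source from the question (the validationTimeout value is only an example):
<dataSource jndiName="jdbc/ora" type="javax.sql.DataSource" validationTimeout="10s">
    <properties.oracle user="example" password="{xor}AbCdEfGh123="
        URL="jdbc:oracle:thin:@example.com:1521:mydb"/>
    <connectionManager maxPoolSize="10"
        purgePolicy="ValidateAllConnections"/>
    <jdbcDriver id="oracle-driver" libraryRef="oracle-libs"/>
</dataSource>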

If you have two distinct Data Source Connections in ColdFusion with the same settings do they share the same pool?

I have created 2 distinct data source connections (to MS SQL Server 2008) in the ColdFusion Administrator that have exactly the same settings except for the actual name of the connection. My question is will this create two distinct connection pools or will they share one?
They will have different pools. The pools are defined at the data source level and you have two distinct data sources as far as ColdFusion is concerned. Why would you have two different data sources with the exact same settings? I guess if you wanted to force them to use different connection pools. I can't think of a reason why though.
I found this page that documents how database connections are handled in ColdFusion. Note that the "Maintain Database Connections" setting is set for each data source.
Here is the section related to connection pooling from that page (in case it goes away):
If the "Maintain Database Connections" is set for a data source, how does ColdFusion Server maintain the connection pool?
When "Maintain Database Connections" is set for a data source, ColdFusion keeps the connection open after its first connection to the database. It does not log out of the database after this first connection. You can change this setting according to the instructions in step d above. Another setting in the ColdFusion Administrator, called "Limit cached database connection inactive time to X minutes," closes a "maintained" database connection after X inactive minutes. This setting is server wide and determines when a "maintained" connection is finally closed. You can modify this setting by going to the "Caching" tab within the ColdFusion Administrator. The interface for modifying the "Limit cached database connection inactive time to X minutes" looks like the following:
If a request is using a data source connection that is already opened, and another request to the data source comes in, a new connection is established. Since only one request can use a connection at any time, the simultaneous request will open up a new connection because no idle cached connections are available. The connection pool can increase up to the setting for simultaneous connections limit which is set for each data source. This setting, called, "Limit Connections," is in the ColdFusion Administrator. Click on one of the data source tabs and then click on one of your data sources. Click on "CF Settings" and put a check next to "Limit Connections" and enter a number in the sentence, "Enable the limit of X simultaneous connections." Please note that if you do not set this under the data source setting, ColdFusion Server will use the server wide "Simultaneous Requests" setting.
At this point, there is a pool of two database connections that ColdFusion Server maintains. Each connection remains in the pool until either the "Connection Timeout" period is reached or the inactivity time is exceeded. If neither of the first two options is implemented, the connections remain in the pool until ColdFusion is restarted.
The "Connection Timeout" setting closes the connection and eliminates it from the pool whether it has been active or inactive. If the process is active, it will not terminate the connection. You can change this setting by going to "CF Settings" for your data source in the ColdFusion Administrator. Note: only the "Cached database connection inactive time" setting will end the connection and eliminate it from the pool if it hasn't been used. You can also use the "Connection Timeout" to override the "Cached database connection inactive" setting, as it applies only to a single data source, not all data sources.
They have different pools. Pooling is implemented by ColdFusion's Java code (or was that part in the JRun code?). It doesn't use any JDBC-based pooling. CF10 could have switched to JDBC-based pooling, but I doubt it.
As a test: set the 'verify connection' SQL to WAITFOR DELAY '00:01:00' or similar (wait for one minute) on both pools. As pool access is single-threaded for each pool, including the time taken to run the verify, have two pages each access a different data source and request both. If both complete after one minute, there are two pools; if one page takes one minute and the other takes two minutes, it's one pool.
As a side note, if during this one-minute verify you yank out the network cable (causing the JDBC socket to stay open forever waiting for a response), your thread pool is now dead and you need to restart ColdFusion.
Try to create a temporary table through the two different data sources: if you get an error on the second query, they use the same pool; if both run perfectly fine, they use different pools.

Automatic hibernation of an application instance on CloudBees

I have a cloudbees enterprise instance that I use for performance and automated UI testing.
The free instance (which is limited in memory) cannot support the memory or requests per second that we need for testing.
I would like to have the instance automatically hibernated when I am not using it, but have it wake up when requests come in. I would configure a Jenkins job to wake the app up (by issuing a request) before kicking off my Sauce Labs based Selenium jobs.
My question is: how do I configure automatic hibernation? The control panel has a minimum of one instance, which I guess means that the one instance stays up.
You are right - currently automatic hibernation is only for free applications. When an application is hibernated (vs stopped), it will be automatically woken whenever someone needs to access it.
What you could do is have a job set your application to hibernated, say once a day (or at a certain time of day when you know it won't be needed). When it is needed again, you won't need to do anything: simply accessing it will cause it to be activated (woken) again, so your test script can just ensure that is the case (and ideally, after a test run, set it to hibernated again).
It really depends on how often the app is needed. If you can work out at what points it isn't needed and trigger the hibernate off that (e.g. after a test run), then that is ideal (you minimise cost).