I fixed a bug related to the way we were using BasicDataSource, and although I understand part of it, I still have some unanswered questions :)
Problem:
The application was not able to automatically reconnect to the database after a DB failure.
The application uses the org.apache.commons.dbcp.BasicDataSource class as a connection pool for JDBC connections (over TCP) to an Oracle DB.
Fix:
After some research I discovered that testOnBorrow and testOnReturn were not set on the BasicDataSource. I provided a validation query to test connections, and this fixed the problem.
The maximum number of connections in the pool was set to 1.
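Roughly, this is what the pool configuration looks like after the fix (DBCP 1.x setters; the URL and credentials below are placeholders, not our real values):
import org.apache.commons.dbcp.BasicDataSource;

BasicDataSource ds = new BasicDataSource();
ds.setDriverClassName("oracle.jdbc.OracleDriver");
ds.setUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder
ds.setUsername("app_user");                        // placeholder
ds.setPassword("app_password");                    // placeholder
ds.setMaxActive(1);                                // max number of connections, as mentioned above
ds.setValidationQuery("SELECT 1 FROM DUAL");       // query used to test connections
ds.setTestOnBorrow(true);                          // validate before handing a connection out
ds.setTestOnReturn(true);                          // validate when a connection comes back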
My Understanding:
The connection pool would hand over a connection to the application.
What I think was happening is that the application MAGICALLY returned the bad connection to the pool when the DB crashed. Since the pool does not know that it is a bad connection, it would hand the same connection back to the application the next time one was needed, which caused the application not to auto-reconnect to the DB.
Now, after the fix, whenever a bad connection is returned to the connection pool it is discarded and won't be used again.
I also know that BasicDataSource wraps the connection before giving it to the application, so that whenever the application calls con.close(), BasicDataSource knows the connection is no longer in use and takes care of either returning it to the pool or discarding it, etc.
Unanswered Question:
However, what I do not understand is what makes the application MAGICALLY return the connection to the connection pool when it's broken (note that the con.close() method is not called when the connection exits ungracefully). There is no way for BasicDataSource to know that the connection was closed, or is there? Can someone point me to the code for that?
Is my overall understanding of why the fix worked correct?
Now, I know that this is kind of an old thread, but it's high in Google search results, so I thought I might give it a quick answer. For more information on configuring the BasicDataSource, you should refer to the DBCP Configuration page: http://commons.apache.org/proper/commons-dbcp/configuration.html
To answer the "unanswered" question of "How does BasicDataSource know when a connection is abandoned and needs to be returned to the connection pool?" (paraphrased)...
org.apache.commons.dbcp.BasicDataSource is able to monitor traffic and usage on the connections it offers by using a wrapper class for the Connection. Every time you call a method on the connection or any Statements created from the connection, you are actually calling a wrapper class that implements an interface or extends a base class with those same methods (Hurray for Polymorphism!). These custom methods allow the DataSource to know whether or not a Connection is active.
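To illustrate the idea, here is a simplified sketch only, not DBCP's actual code (DBCP uses a hand-written wrapper class, DelegatingConnection, which also intercepts close() so the connection goes back to the pool instead of being physically closed). A dynamic proxy can play the same role of recording activity on every call:
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical helper, not part of DBCP: wraps a Connection so that every
// method call records a last-used timestamp before delegating to the real
// connection. A pool can use such a timestamp for its abandonment checks.
public final class TrackedConnections {

    public static Connection wrap(final Connection target, final AtomicLong lastUsedMillis) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                lastUsedMillis.set(System.currentTimeMillis()); // record activity
                return method.invoke(target, args);             // delegate to the real connection
            }
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                handler);
    }
}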
On the BasicDataSource itself, there is a property called "removeAbandoned" and another called "removeAbandonedTimeout" that are used to configure this behavior of returning abandoned connections to the connection pool.
"removeAbandoned" is a boolean that indicates whether abandoned connections should be reclaimed and returned to the pool. It defaults to "false".
"removeAbandonedTimeout" is an int that represents the number of seconds of inactivity that must pass before a connection is considered abandoned. The default value is 300 (5 minutes).
Looking at the test for abandoned connections, it appears that when all connections in the pool are "in use" and a new connection is requested, the in-use connections are checked for abandonment (each one maintains a timestamp of its last use).
See BasicDataSource#setRemoveAbandoned(boolean) and BasicDataSource#setRemoveAbandonedTimeout(int)
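For example, assuming ds is your org.apache.commons.dbcp.BasicDataSource (DBCP 1.x setters; setLogAbandoned is optional but useful for finding the code that leaks connections):
// Sketch only: turn on abandoned-connection cleanup.
ds.setRemoveAbandoned(true);        // reclaim connections that look abandoned
ds.setRemoveAbandonedTimeout(300);  // seconds of inactivity before a connection counts as abandoned
ds.setLogAbandoned(true);           // log a stack trace of the code that borrowed the connection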
Regardless of how clever your connection pool is about closing abandoned connections, you should always ensure each connection is closed in a finally block, e.g.:
Connection conn = getConnection();
try {
    // ... perform work ...
} finally {
    conn.close();
}
Or use some other means, such as Apache Commons DbUtils.
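For reference, here is a minimal sketch of the DbUtils variant (assuming Commons DbUtils is on the classpath; getConnection() and the query are placeholders). closeQuietly() swallows the SQLException that close() can throw, so the finally block stays tidy:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.apache.commons.dbutils.DbUtils;

Connection conn = null;
PreparedStatement stmt = null;
ResultSet rs = null;
try {
    conn = getConnection();                          // placeholder for your DataSource/pool lookup
    stmt = conn.prepareStatement("SELECT 1 FROM DUAL");
    rs = stmt.executeQuery();
    // ... perform work ...
} finally {
    DbUtils.closeQuietly(conn, stmt, rs);            // closes all three, ignoring SQLExceptions
}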
Related
I was wondering if it is possible to keep an RFC called via JCo open in SAP memory so I can cache data. This is the scenario I have in mind:
Suppose a simple function increments a number. The function starts with 0, so the first time I call it with import parameter 1 it should return 1.
The second time I call it, it should return 2 and so on.
Is this possible with JCO?
If I have the function object and make two successive calls, it always returns 1.
Can I do what I'm describing?
Designing an application around the stability of a certain connection is almost never a good idea (unless you're building a stability monitoring software). Build your software so that it just works, no matter how often the connection is closed and re-opened and no matter how often the session is initialized and destroyed on the server side. You may want to persist some state using the database, or you may need to (or want to) use the shared memory mechanisms provided by the system. All of this is inconsequential for the RFC handling itself.
Note, however, that you may need to ensure that a sequence of calls happen in a single context or "business transaction". See this question and my answer for an example. These contexts are short-lived and allow for what you probably intended to get in the first place - just be aware that you should not design your application so that it has to keep these contexts alive for minutes or hours.
The answer is yes. In order to make it work, you need to implement two tasks:
The ABAP code needs to store its variable in the ABAP session memory. A variable in the function group's global section will do that. Alternatively, you could use the standard ABAP technique "EXPORT TO MEMORY/IMPORT FROM MEMORY".
JCo needs to keep the user session between calls. By default, JCo resets the backend-side user session after every call, which of course destroys all data stored in that user session memory. In order to prevent it, you need to use JCoContext.begin() and JCoContext.end() to get a stateful RFC connection that keeps the user session alive on backend side.
Sample code:
JCoDestination dest = ...
JCoFunction func = ...
try {
    JCoContext.begin(dest);
    func.execute(dest); // Will return "1"
    func.execute(dest); // Will return "2"
}
catch (JCoException e) {
    // Handle network problems, ABAP exceptions, SYSTEM_FAILUREs
}
finally {
    // Make sure to release the stateful connection, otherwise you have
    // a resource leak in your program and on the backend side!
    JCoContext.end(dest);
}
I do asynchronous requests in the LoadState method of a certain Page. I use HttpClient to make a request, and I expect the splash screen to go away while I await the result.
If I am not connected to any networks, the splash screen immediately goes away and I get a blank page, because the request obviously didn't happen.
But if I am connected to a network but have connectivity issues (for example, I set a wrong IP address) it seems to start a request and just block.
My expectation was that the HttpClient would realize that it cannot send a request and either throw an exception or just return something.
I managed to solve the issue of blocking by setting a timeout of around 800 milliseconds, but now it doesn't work properly when the Internet connection is ok. Is this the best solution, should I be setting the timeout at all? What is the timeout that's appropriate which would enable me to differentiate between an indefinitely blocking call and a proper call that's just on a slower network?
I could perhaps check for Internet connectivity before each request, but that sounds like an unpredictable solution...
EDIT: Now, it's really interesting. I have tried again, and it blocks at this point:
var rd = await httpClient.SendAsync(requestMsg);
If I use Task.Run() as suggested in the comments and get a new Thread, then it's always fine.
BUT it's also fine without Task.Run() if there is no Internet access but the network status is not "Limited" (Windows reports the IPv4 connectivity as "Internet access" even though I cannot open a single website in a browser and no data is returned from the web service; it just throws System.Net.Http.HttpRequestException, which was what I was expecting in the first place). It only blocks when the network connection is "Limited".
What if, instead of setting a timeout, you checked the connection status using:
public static bool IsConnected
{
    get
    {
        return NetworkInformation.GetInternetConnectionProfile() != null;
    }
}
This way, if IsConnected is true, you make the call; otherwise, you skip it.
Are you running this in App.xaml.cs? I've found that requests made in that class can be fickle, and it may be best to move the functionality to an extended splash screen to ensure the application makes it all the way through the activation process.
http://msdn.microsoft.com/en-us/library/windows/apps/xaml/Hh868191(v=win.10).aspx
I have a bit of a problem using the JDBCSessionManager in Jetty 7. For some reason it tries to persist the SessionManager when persisting the SessionAuthentication:
16:46:02,455 WARN org.eclipse.jetty.util.log - Problem persisting changed session data id=b75j2q0lak5s1o2zuryj05h9y
java.io.NotSerializableException: org.eclipse.jetty.server.session.JDBCSessionManager
Setup code:
server.setSessionIdManager(getSessionIdManager());

final SessionManager jdbcSessionManager = new JDBCSessionManager();
jdbcSessionManager.setIdManager(server.getSessionIdManager());
context.setSessionHandler(new SessionHandler(jdbcSessionManager));
server.setHandler(context);

private SessionIdManager getSessionIdManager() {
    JDBCSessionIdManager idMan = new JDBCSessionIdManager(server);
    idMan.setDriverInfo("com.mysql.jdbc.Driver", "jdbc:mysql://localhost/monty?user=xxxx&password=Xxxx");
    idMan.setWorkerName("monty");
    return idMan;
}
Has anyone experienced something similar?
I wouldn't recommend serializing anything JDBC-related to the session. My preferred mode of operation is to acquire, use, and close any and all database resources, such as connections, statements, and result sets, in the narrowest scope possible. I use connection pools to amortize the cost of opening database connections. That's the way I think you should go, too.
Besides, you have no choice if the class doesn't implement java.io.Serializable. Perhaps the designers were trying to express my feelings in code.
I checked the javadocs for JDBCSessionManager. Neither the leaf class nor any of its superclasses implement Serializable.
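A rough sketch of the pattern I mean (the class and method names here are placeholders, not from your setup): keep only plain Serializable values in the session, and acquire and close the JDBC resources inside the request that needs them.
import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

public void handleRequest(HttpServletRequest req, HttpServletResponse resp) throws IOException {
    // Only simple Serializable values go into the session, so JDBCSessionManager
    // can persist it without hitting NotSerializableException.
    req.getSession().setAttribute("accountId", Long.valueOf(42));

    DataSource ds = lookupDataSource(); // placeholder: however you obtain your connection pool
    Connection conn = null;
    try {
        conn = ds.getConnection();
        // ... use the connection in the narrowest scope possible ...
    } catch (SQLException e) {
        throw new IOException(e);
    } finally {
        if (conn != null) {
            try { conn.close(); } catch (SQLException ignored) { }
        }
    }
}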
I've got a real lemon on my hands. I hope someone who has had the same problem or knows how to fix it can point me in the right direction.
The Setup
I'm trying to create a WCF data service that uses an ADO Entity Framework model to retrieve data from the DB. I've added the WCF service reference and all seems fine. I have two sets of data service calls. The first one retrieves a list of all "users"; this list does not include any dependent data (e.g. address, contact, etc.). The second call is made when a "user" is selected: the application requests a few more pieces of dependent information, such as address, contact details, messages, etc., for the given user ID. This also seems to work fine.
The Lemon
After a few user selection changes, i.e. after calling for more dependent data from the data service, the application stops responding.
Crash error:
The request channel timed out while waiting for a reply after 00:00:59.9989999. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.
I restart the debugging process, but the application will not make any data service calls; after about a minute or so, VS 08 displays a message box with the error:
Unable to process request from service. 'http://localhost:61768/ConsoleService.svc'. Catastrophic failure.
I've Googled the hell out of this error and related issues but found nothing of use.
Possible Solutions
I've found some leads as to the source of the problem. In the client's app.config:
maxReceivedMessageSize > Set to a higher value, e.g. 5242880.
receiveTimeout > Set to a higher value, e.g. 00:30:00.
I've tried these but all in vain. I suspect there is an underlying problem that cannot be fixed by simply changing some numbers. Any leads would be much appreciated.
I've solved it =P.
Cause
The WCF service works fine. It was the data service calls that were the culprit. Every time I made a call, I instantiated a new reference to the data service but never closed/disposed of it. So after a couple of calls, the data service reaches its maximum number of connections and halts.
Solution
Make sure to close/dispose of any data service reference properly. Best practice would be to enclose it in a using statement.
using (var dataService = new ServiceNS.ServiceClient())
{
    // Use the service here.
}
// The service will be disposed and the connection freed.
Glad to see you fixed your problem.
However, you need to be careful about using the using statement. Have a look at this article:
http://msdn.microsoft.com/en-us/library/aa355056.aspx
I have a utility class that creates & returns a db connection:
Public Shared Function GetConnection() As OracleConnection
    Dim c As New OracleConnection()
    ' ... set connection string ...
    c.Open()
    Return c
End Function
Is there any risk of concurrent calls returning the same connection? The connection string enables pooling.
Since you are returning a new connection each time, you will not have any concurrency issues. If you were using a Shared method to return multiple references to the same instance, that could be a problem, but that is not what you are doing here.
You are safe to use this method in this way as long as you are always returning a new instance of your database connection object each time. Any connection pooling will also work the same as it always would; you won't need to worry about your Shared method usage creating problems there either.
Forget the concurrent calls issue for a moment. If there is any connection pooling going on you will absolutely have reuse of the same underlying database connections, even if they don't use the same object.
This is generally a desirable thing as opening a connection to the DB can be an expensive operation.
Are you worried about closing the connection object out from under another caller? If so, as another response pointed out, I think you are safe with the code you provided.
I don't think so.
Since c is a local variable ("stack variable") and not a static one, every call has its own instance of c.
Next you create a new object (the connection) and return it.
There shouldn't be any problem with concurrency, because each call creates a new connection.
I might make one change, though: make the method private.
This will force you to put all your data access code in one class and push you toward creating a nice, separate data access layer. At the very least, make it internal so that your data access layer is confined to a single assembly (one that's separate from the rest of your code).