In our application we have several (actually many, about 30) web services. Each web service resides in its own WAR file and has its own Spring context that is initialised when the application starts.
We also have a number of annotation-driven aspect classes that we apply to the web service classes. In the beginning the pointcut expression looked like this:
#Pointcut("execution(public * my.package.service.business.*BusinessServiceImpl.*(..))")
public void methodsToBeLogged() {
}
And AOP was enabled on the services through an entry in the configuration.
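For reference, a minimal sketch of what such an entry can look like, assuming Java-based configuration (the original setup may just as well have used the XML equivalent, <aop:aspectj-autoproxy/>):

import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

// Enables detection of @Aspect beans and creation of proxies for matching beans.
@Configuration
@EnableAspectJAutoProxy
public class AopConfig {
}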
But when the number of web services grew, we began to experience OutOfMemoryErrors on our servers. After some profiling and analysis, it turned out that the memory was being consumed by the cache kept by instances of the AspectJExpressionPointcut class.
Each instance's cache was about 5 MB, and since we had 3 aspects and 30 services, this resulted in 90 instances holding 450 MB of data in total.
After examining the contents of the cache, we realised that it contained Java reflection Method instances for all classes in the WAR, even those outside the my.package.service.business package. After modifying the pointcut expression to additionally include a within clause:
#Pointcut("execution(public * my.package.service.business.*BusinessServiceImpl.*(..)) &&
within(my.package.service.business..*)")
public void methodsToBeLogged() {
}
Memory usage was back to normal, and all the AspectJExpressionPointcut instances together took less than 1 MB.
Can someone explain why that is? Why is the first pointcut expression not enough? And why is the cache of AspectJExpressionPointcut not shared?
The AspectJExpressionPointcut uses a cache (shadowMatchCache) which speeds up the decision of whether AOP should be applied to a certain method call, based on the pointcut expression. This cache can consume a lot of memory.
Additionally, before offering all methods of a specific bean to see whether there is a pointcut expression match, Spring first checks whether the bean's class could possibly match at all, by calling AspectJExpressionPointcut.matches(Class targetClass).
This method delegates to AspectJ's PointcutExpressionImpl.couldPossiblyMatch() method, which performs a fast check of whether a class could 'possibly' match a pointcut expression or will 'definitely' never match.
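To make the two matching levels concrete, here is a minimal sketch against Spring's public API; the package in the expression and the UnrelatedClass are made up for illustration:

import org.springframework.aop.aspectj.AspectJExpressionPointcut;

class UnrelatedClass {
    public void someMethod() {
    }
}

public class PointcutMatchingDemo {
    public static void main(String[] args) throws NoSuchMethodException {
        AspectJExpressionPointcut pointcut = new AspectJExpressionPointcut();
        pointcut.setExpression(
                "execution(public * com.example.business.*BusinessServiceImpl.*(..))"
                + " && within(com.example.business..*)");

        // Class-level pre-filter, delegating to couldPossiblyMatch(): with the
        // within() clause this is a definite "no", so per-method matching (and
        // its shadowMatchCache entries) never happens for this class.
        System.out.println(pointcut.matches(UnrelatedClass.class)); // false

        // Method-level matching is what populates the shadowMatchCache.
        System.out.println(pointcut.matches(
                UnrelatedClass.class.getMethod("someMethod"), UnrelatedClass.class));
    }
}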
According to the AspectJ developers, using a within pointcut results in more definite no's. They also recommend never using the kinded pointcuts (execution, call, get, set) on their own, but always combining them with within.
The shadowMatchCache cannot be shared, however, because it contains the match/no-match results per pointcut expression.
But at least you can limit what gets cached. I also think Spring could improve on this by not keeping the whole cache around once the applicationContext has started. For example, it could throw away all the no-matches, at the expense of redoing some of the matching when a new bean is dynamically added to the applicationContext after it has already started.
Another possible memory hog inside the AspectJExpressionPointcut class is the pointcutParser. This parser could possibly be shared across all AspectJExpressionPointcuts in the applicationContext. Take a look at JIRA ticket SPR-7678.
I have two different Java 8 projects that will live on different servers and which will both use Akka (specifically Akka Remoting) to talk to each other.
For instance, one app might send a Fizzbuzz message to the other app:
public class Fizzbuzz {
    private int foo;
    private String bar;
    // Getters, setters & ctor omitted for brevity
}
I've never used Akka Remoting before. I assume I need to create a third project, a library/JAR holding the shared messages (such as Fizzbuzz and others), and then pull that library into both projects as a dependency.
Is it that simple? Are there any serialization (or other Akka and/or networking) considerations that affect the design of these "shared" messages? Thanks in advance!
A shared library is the way to go for sure, except there are indeed serialization concerns:
From the Akka remoting docs:
When using remoting for actors you must ensure that the props and messages used for those actors are serializable. Failing to do so will cause the system to behave in an unintended way.
For more information please see Serialization.
Basically, you'll need to provide and configure the serialization for actor props and the messages sent (including all their nested classes, of course). If I'm not mistaken, the default settings will get you up and running without any configuration on your side, provided that everything you send over the wire is Java-serializable.
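If you stay with that default, the only requirement on the shared message classes is that they (and everything they reference) implement java.io.Serializable. A sketch of what the Fizzbuzz from the question might then look like (the serialVersionUID and accessor shapes are illustrative):

import java.io.Serializable;

public class Fizzbuzz implements Serializable {
    // An explicit serialVersionUID keeps both apps agreeing on the class version.
    private static final long serialVersionUID = 1L;

    private final int foo;
    private final String bar;

    public Fizzbuzz(int foo, String bar) {
        this.foo = foo;
        this.bar = bar;
    }

    public int getFoo() { return foo; }
    public String getBar() { return bar; }
}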
However, the default config uses default Java serialization, which is known to be quite inefficient, so you might want to switch to protobuf, kryo, or maybe even JSON. In that case, it would make sense to provide the serialization implementation and bindings as a shared library, either a dedicated one or part of the "shared models" one you mentioned in the question, depending on whether you want to reuse it elsewhere and whether you mind serialization-related transitive dependencies popping up all over the place.
Finally, if you'll allow some personal opinion, I would suggest trying protobuf first: it's a binary format (read: efficient) and is widely supported (there are bindings for other languages). Kryo works well too (I have a few closed-source akka-cluster apps with kryo serialization in production), but it has a few quirks with regard to collection/map handling.
Background
I'm in the process of reworking and refactoring a huge codebase that was written with neither testability nor maintainability in mind. There is a lot of global/static state going on. A function that needs a database connection just conjures one up using a global static method: $conn = DatabaseManager::getConnection($connName);. Or it wants to load a file, so it does so using $fileContents = file_get_contents($hardCodedFilename);.
Much of this code has no proper tests and has only ever been tested directly in production. So the first thing I intend to do is write unit tests, to ensure the functionality is correct after refactoring. Sadly, code like the examples above is barely unit-testable, because none of the external dependencies (database connections, file handles, ...) can be properly mocked.
Abstraction
To work around this, I have created very thin wrappers around, for example, the system functions, which can be used in places where non-mockable function calls were used before. (I'm giving these examples in PHP, but I assume they apply to any other OOP language as well. Also, this is a highly shortened example; in reality I am dealing with much larger classes.)
interface Time {
    /**
     * Returns the current time in seconds since the epoch.
     * @return int for example: 1380872620
     */
    public function current();
}
class SystemTime implements Time {
    public function current() {
        return time();
    }
}
These can be used in the code like so:
class TimeUser {
    /**
     * @var Time
     */
    private $time;

    /**
     * Prints out the current time.
     */
    public function tellsTime() {
        // before:
        echo time();
        // now:
        echo $this->time->current();
    }
}
Since the application only depends on the interface, I can replace it in a test with a mocked Time instance, which for example allows me to predefine the value returned by the next call to current().
Injection
So far, so basic. My actual question is how to get the proper instances into the classes that depend upon them. From my understanding of dependency injection, services are meant to be passed down by the application into the components that need them. Usually these services would be created in a main() method or at some other starting point and then strung along until they reach the components where they are needed.
This model likely works well when creating a new application from scratch, but for my situation it's less than ideal, since I want to move gradually to a better design. So I've come up with the following pattern, which automatically provides the old functionality while leaving me with the flexibility of substituting services.
class TimeUser {
    /**
     * @var Time
     */
    private $time;

    public function __construct(Time $time = null) {
        if ($time === null) {
            $time = new SystemTime();
        }
        $this->time = $time;
    }
}
A service can be passed into the constructor, allowing the service to be mocked in a test; yet during "regular" operation, the class knows how to create its own collaborators, providing default functionality identical to what was there before.
Problem
I've been told that this approach is unclean and subverts the idea of dependency injection. I understand that the true way would be to pass down dependencies, as outlined above, but I don't see anything wrong with this simpler approach. Keep in mind also that this is a huge system, where potentially hundreds of services would need to be created up front (a service locator would be an alternative, but for now I am trying to go in this other direction).
Can someone shed some light onto this issue and provide some insight into what would be a better way to achieve a refactoring in my case?
I think you've made a good first step.
Last year I was at DutchPHP, where there was a lecture about refactoring; the lecturer described 3 major steps of extracting a responsibility from a god class:
1. Extract the code to a private method (this should be a simple copy-paste, since $this stays the same)
2. Extract the code to a separate class and pull in the dependency
3. Push the dependency
I think you are somewhere between the 1st and 2nd steps. You have a backdoor for unit tests.
The next step, according to the above algorithm, is to create some static factory (the lecturer named it ApplicationFactory) which will be used instead of creating the instance inside TimeUser.
The ApplicationFactory is a kind of ServiceLocator pattern. This way you invert the dependency (in line with the SOLID principles).
If you are happy with that, you should remove passing the Time instance into the constructor and use the service locator only (without the backdoor for unit tests; you should stub the service locator instead).
If you are not, then you have to find all places where TimeUser is instantiated and inject a Time implementation:
new TimeUser(ApplicationFactory::getTime());
After some time your ApplicationFactory will become very big. Then you have to make a decision:
1. Split it into smaller factories
2. Use a dependency injection container (Symfony DI, AurynDI, or something like that)
Currently my team is doing something similar. We are extracting responsibilities into separate classes and injecting them. We have an ApplicationFactory, but we use it as a service locator at as high a level as possible, so the classes below get all their dependencies injected and don't know anything about the ApplicationFactory. Our application factory is big, and we are now preparing to replace it with Symfony DI.
You asked for a good mechanism to do this.
You've described some stages you might force the program to go through to accomplish this, but you are still apparently planning to do it by hand, apparently at a very high cost.
If you really want to get this done on a huge code base, you might consider automating the steps using a program transformation engine: http://en.wikipedia.org/wiki/Program_transformation
Such a tool can let you write explicit rules for modifying code. Done right, it can make code changes reliably. That doesn't eliminate your need for testing, but it can let you spend more time writing tests and less time hand-changing the code (erroneously).
We are receiving an SSL handshaking exception when another part of the code calls SSLContext.getInstance(). Can someone please confirm or deny the ability to have multiple concurrent SSLContexts running in the same JVM using the same provider? The method name getInstance implies a singleton.
Yes, you can have multiple SSLContext instances, configured differently if you wish.
getInstance(...) is generally part of the factory pattern, not the singleton pattern, although a number of implementations of the singleton pattern also use the factory pattern to access this singleton.
In addition, SSLContext.getInstance() doesn't exist; only getInstance(String), getInstance(String, String) and getInstance(String, Provider) do (and having an argument on the getInstance method hardly makes sense for a singleton).
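As a quick illustration (the protocol string is arbitrary and the null arguments simply fall back to provider defaults), two calls give you two independently configurable contexts:

import javax.net.ssl.SSLContext;

public class MultipleSslContexts {
    public static void main(String[] args) throws Exception {
        // Each getInstance(...) call returns a new, independent SSLContext.
        SSLContext ctxA = SSLContext.getInstance("TLSv1.2");
        SSLContext ctxB = SSLContext.getInstance("TLSv1.2");

        // Each instance can be initialised with its own key/trust managers;
        // null means "use the provider defaults".
        ctxA.init(null, null, null);
        ctxB.init(null, null, null);

        System.out.println(ctxA == ctxB); // false: not a singleton
    }
}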
(Don't confuse this with SSLContext.getDefault(), which will always return the current default instance, although this can also be changed globally with setDefault(SSLContext).)
Just in case you were talking about SSLContext.getDefault() instead: it's worth noting that the default SSLContext will only read and use the javax.net.ssl.* system properties once, the first time it is loaded. This can have consequences if you set these properties in one place in the code but not in another (or set them differently) and call SSLContext.getDefault() in a different order: the first call to SSLContext.getDefault() wins (assuming you're not complicating things with further calls to SSLContext.setDefault(...)).
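A small sketch of that ordering pitfall (the truststore paths are made up):

import javax.net.ssl.SSLContext;

public class DefaultContextOrdering {
    public static void main(String[] args) throws Exception {
        // Honoured only because it is set BEFORE the default context is created.
        System.setProperty("javax.net.ssl.trustStore", "/path/to/custom.jks");

        SSLContext first = SSLContext.getDefault(); // reads javax.net.ssl.* here

        // Changing the property afterwards has no effect on the default context.
        System.setProperty("javax.net.ssl.trustStore", "/path/to/other.jks");
        SSLContext second = SSLContext.getDefault();

        System.out.println(first == second); // true: the same cached default
    }
}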
I have been a web developer for some time now, using ASP.NET and C#. I want to try to increase my skills by following best practices.
I have a website. I want to load the settings once and just reference them wherever I need them. So I did some research, and 50% of the developers seem to be using the singleton pattern to do this, while the other 50% are anti-singleton: they all hate singletons and recommend dependency injection instead.
Why are singletons bad? What is the best practice for loading a website's settings? Should they be loaded only once and referenced where needed? How would I go about doing this with dependency injection (I am new at this)? Are there any samples someone could recommend for my scenario? And I would also like to see some unit test code for this (for my scenario).
Thanks
Brendan
Generally, I avoid singletons because they make it harder to unit test your application. Singletons are hard to mock for unit tests precisely because of their nature: you always get the same one, not one you can configure easily for a unit test. Configuration data, or strongly-typed configuration data anyway, is one exception I make, though. Typically, configuration data is relatively static, and the alternative involves writing a fair amount of code to avoid the static classes the framework provides to access the web.config.
There are a couple of different ways to use it that will still allow you to unit test your application. One way (maybe both ways, if your singleton doesn't lazily read the app.config) is to have a default app.config file in your unit test project providing the defaults required for your tests. You can use reflection to replace any specific values as needed in your unit tests. Typically, I'd write a private method that allows the private singleton instance to be deleted in test set-up if I make changes for particular tests.
Another way is to not use the singleton directly, but to create an interface for it that the singleton class implements. You can use hand injection of the interface, defaulting to the singleton instance if the supplied value is null. This allows you to create a mock instance that you can pass to the class under test, while your real code uses the singleton instance. Essentially, every class that needs it maintains a private reference to the singleton instance and uses it. I like this way a little better, but since the singleton will still be created, you may need the default app.config file anyway, unless all of the values are lazily loaded.
public class Foo
{
    private IAppConfiguration Configuration { get; set; }

    public Foo() : this(null) { }

    public Foo(IAppConfiguration config)
    {
        this.Configuration = config ?? AppConfiguration.Instance;
    }

    public void Bar()
    {
        var value = this.Configuration.SomeMaximum;
        ...
    }
}
There's a good discussion of singleton patterns, with coding examples, here: http://en.wikipedia.org/wiki/Singleton_pattern. See also http://en.wikipedia.org/wiki/Dependency_injection.
For some reason, singletons seem to divide programmers into strong pro- and anti- camps. Whatever the merits of the approach, if your colleagues are against it, it's probably best not to use one. If you're on your own, try it and see.
Design patterns can be amazing things. Unfortunately, the singleton tends to stick out like a sore thumb, and in many cases it can be considered an anti-pattern (it promotes bad practice). Bizarrely, the majority of developers know only one design pattern, and that is the singleton.
Ideally, your settings should be a member variable in a high-level location, for example the application object which owns the web pages you are spawning. The pages can then ask the app for the settings, or the application can pass the settings to pages as they are constructed.
One way to approach this problem is to flog it off as a DAL problem.
Whatever class, web page, etc. needs to use config settings should declare a dependency on an IConfigSettingsService (factory/repository/whatever-you-like-to-call-it).
private IConfigSettingsService _configSettingsService;

public WebPage(IConfigSettingsService configSettingsService)
{
    _configSettingsService = configSettingsService;
}
So your class would get settings like this:
ConfigSettings _configSettings = _configSettingsService.GetTheOnlySettings();
The ConfigSettingsService implementation would have a dependency on a DAL class. How would that DAL populate the ConfigSettings object? Who cares.
Maybe it would populate a ConfigSettings from a database or a .config XML file every time.
Maybe it would do that the first time, but then populate a static _configSettings for subsequent calls.
Maybe it would get the settings from Redis. If something indicates the settings have changed, then the DAL, or something external, can update Redis. (This approach is useful if you have more than one app using the settings.)
Whatever it does, your only dependency is a non-singleton service interface. That is very easy to mock; in your tests you can have it return a ConfigSettings with whatever you want in it.
In reality, it would more likely be a MyPageBase that has the IConfigSettingsService dependency, but it could just as easily be a web service, a Windows service, an MVC whatsit, or all of the above.
Is it possible to create a COM instance in its own dedicated host process?
I guess some background is needed.
We have an end-user client which has its central logical components inside a singleton COM object. (Not a proper singleton, but it uses global variables internally, so multiple instances would fail.) Thus there should be only one instance per exe file, which was convenient while making the client.
However, I now need to make a "client simulator" to test the server side. I therefore wish to create 20 instances of the client component.
If I could make each instance instantiate in its own exe host, then the singleton issue would be handled.
Regards
Leif
I have been struggling with this problem for a few days. I finally found a solution that works. My COM object is written using ATL, so my code snippet is geared toward that, but the technical solution should be clear. It all hinges on how the class objects are registered; the REGCLS_SINGLEUSE flag is the key. I now have a separate process for each object instance.
In the ATL module, override the RegisterClassObjects() function as follows:
HRESULT RegisterClassObjects(DWORD dwClsContext, DWORD dwFlags) throw()
{
    // 'base' is assumed to be a typedef for this module's ATL base class
    // (e.g. CAtlExeModuleT<CMyModule>). REGCLS_SINGLEUSE means each
    // CoGetClassObject connection consumes the registration, so every
    // subsequent activation spawns a fresh server process.
    return base::RegisterClassObjects(CLSCTX_LOCAL_SERVER, REGCLS_SUSPENDED | REGCLS_SINGLEUSE);
}
From MSDN regarding REGCLS_SINGLEUSE:
REGCLS_SINGLEUSE
After an application is connected to a class object with CoGetClassObject, the class object is removed from public view so that no other applications can connect to it. This value is commonly used for single document interface (SDI) applications. Specifying this value does not affect the responsibility of the object application to call CoRevokeClassObject; it must always call CoRevokeClassObject when it is finished with an object class.
My theory is that because the registration is removed from public view, a new process is created for each subsequent instantiation.
This other question mentions a description of how to use DLLHost as a surrogate process: http://support.microsoft.com/kb/198891
I've never tried this myself, and I don't know off-hand whether you can specify flags for the factories (which control whether surrogates can be reused for multiple objects), but maybe you can tweak that via DCOMCNFG or OLEVIEW.
My COM days are long gone, but as far as I remember, there's no built-in way to do that.
It might be easier to rewrite your code so it supports multiple instances than to go the one-process-per-instance route with COM, but here's what you could do:
Use thread-local storage for your global variables and write another CoClass, where each instance owns its own thread through which accesses to the class with the global variables are marshaled. This would at least allow you to avoid the performance impact of DCOM.
Write your own out-of-process exe server (similar to Windows' DllHost.exe) to host your COM instances. This requires IPC (inter-process communication), so you either have to code something yourself that marshals calls to the external process, or use DCOM (presuming your COM object implements IDispatch).