What dependency strategy suits this small app - OOP

I have a small PHP command line application that I am creating in order to learn some common design patterns and OOP techniques.
I have set up all of my relevant classes so that they do not instantiate objects internally; instead, they are given the objects they require via their constructors.
The problem now is how to orchestrate everything so that each object gets the dependencies it requires. I have read about dependency injection containers and frameworks, but this seems like overkill for a small command line app, and I am having a hard time understanding how they would fit into my application.
Currently the flow goes like this:
Program is executed by user at the command line
Bootstrap occurs, i.e. the autoloader is registered, etc.
I have a factory method that sets up the dependencies (all hard-coded inside the class) and returns an application object. There are around two dependencies for the main application, and each of those has a further two dependencies of its own (this is the tricky part, I think)
Application->run() is called.
What would be the best approach in terms of a balance between flexibility and simplicity? I don't believe the design (around the factory) is quite correct.

From the sounds of it, your application is organised fairly well: construction is separate from the application logic, and your dependencies are managed clearly.
You could of course add configurable dependencies or utilise a dependency injection framework; however, I would recommend avoiding both unless the application requires it. If you do start using a DI framework, make sure you keep everything cohesive even though you are using this magic tool, and try to move things away from the framework where possible (i.e. have modules handle their internal dependencies through factories rather than relying on the framework). Split functionality into modules when it makes sense.
I am working on improvements to a large, nebulous application that runs all of its dependencies through a DI framework: it is slow to start up, has turned into a bit of a sack-o-crap, and is not pleasant to work with. It is an application that does "everything" rather than delegating tasks to modules and libraries, and it has zero unit tests.
The main thing to note is that there are a lot of solutions to every problem, and learning different patterns is a great idea. There are many types of factory pattern; learn them all. Some are simple, some are more sophisticated, and you will get a feeling for what works in different situations.
My personal recommendation: if your application works as you intend and is organised as it sounds like it is (and the aim of the game is to improve your techniques), practice the following, if you haven't already:
Documenting (write some documentation and throw it at a dev friend to see how they deal with it)
Testing (write some unit tests and then rewrite the application a different way)
Benchmarking (run profiling on your code and try to find ways to optimize it)
Then try a different approach to factory loading (dynamic factory, builder pattern, framework) and use the benchmarks to understand what you traded off.

Your application should work like this:
$app = new Application($arg1, $arg2);
$app->run();
All other classes should be instantiated in the Application and passed as parameters to the constructors of other classes. A rule of thumb: any constructor should have fewer than three arguments. If you follow this rule, everything will be fine.
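To make the shape concrete, here is a minimal sketch of a hand-rolled composition root, written in C# to match the other examples in this thread (the structure translates directly to PHP). All class names are placeholders, and the layout mirrors the question's setup of two services with two dependencies each:

using System;

// Leaf dependencies (placeholders standing in for your real classes).
class Config { }
class Logger { public void Info(string m) => Console.WriteLine(m); }

// Two mid-level services, each taking its two dependencies via the constructor.
class Fetcher  { public Fetcher(Config c, Logger l) { } }
class Reporter { public Reporter(Config c, Logger l) { } }

// The application object depends only on the two services.
class Application
{
    private readonly Fetcher fetcher;
    private readonly Reporter reporter;
    public Application(Fetcher f, Reporter r) { fetcher = f; reporter = r; }
    public void Run() => Console.WriteLine("running");
}

// The composition root: the only place where wiring happens.
static class Program
{
    static void Main()
    {
        var config = new Config();
        var logger = new Logger();
        var app = new Application(new Fetcher(config, logger),
                                  new Reporter(config, logger));
        app.Run();
    }
}

Because all construction lives in one function, swapping an implementation later means editing exactly one place, with no container required.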

Related

Method JavaFX TreeItem getRoot() is not visible. What is the OOP/MVC reason it is not?

I needed to get the root item of a TreeView. The obvious way to get it is to use getRoot() on the TreeView, which is what I use.
I like to experiment, and was wondering if I could get the same root by climbing up the tree from a leaf item (a TreeItem), using getParent() recursively until the result is NULL.
That works as well, and, in my custom TreeItem, I added a public method getRoot() to play around with it. I thus found out that this method already exists in the parent TreeItem, but is not exposed.
My question: why would it not be exposed? Is it bad practice regarding OOP / MVC architecture?
The reason for the design is summed up by kleopatra's comment:
Why would it not be exposed? I would pose it the other way round: why should it be? It's convenience API at best, easy for clients to implement, not really needed - adding such things to a framework/toolkit tends to explode the API/implementation surface to maintain.
JavaFX is filled with decisions like this on purpose. A lot of the reasoning is based on experience (good and bad) with AWT/Swing. Just some examples:
For specifying execution on the UI thread, there is a runLater API, but no invokeAndWait API like Swing, even though it would be easy for the framework to provide such an API and it has been requested.
Providing an invokeAndWait API means that naive (and experienced :-) developers could use it incorrectly to accidentally deadlock threads.
Lots of classes are final and not extensible.
Sometimes developers want to extend classes but can't because they are final. This means they can't override a lot of the built-in, tested functionality of the framework and accidentally break it that way. Instead they can usually use aggregation over inheritance to do what they need (see the sketch after this list). The framework forces them to do so in order to protect itself and them.
Color objects are immutable.
Immutable objects in general make stuff easier to maintain.
Native look and feels aren't part of the framework.
You can still create them if you want, and there are 3rd party libraries that do that, but it doesn't need to be in the core framework.
The application programming interface is single-threaded, not multi-threaded.
Because the developers of the framework realized that multi-threaded UI frameworks are a failed dream.
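To illustrate the aggregation-over-inheritance point above with a small hypothetical sketch (in C# here; JavaFX itself is Java, but the idea is identical): rather than subclassing a final class, you wrap it and delegate.

// A stand-in for a framework class that is deliberately final/sealed.
public sealed class Color
{
    public double Opacity { get; }
    public Color(double opacity) => Opacity = opacity;
}

// Aggregation instead of inheritance: hold the instance and delegate to it.
public class HighlightSwatch
{
    private readonly Color color;    // aggregated, not inherited
    public HighlightSwatch(Color color) => this.color = color;

    // Expose only the behaviour you need; the framework class stays intact.
    public bool IsTranslucent => color.Opacity < 1.0;
}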
The philosophy was to code to make the 80% use case easier and the 20% use case (usually) possible using additional user or 3rd-party code, while making it difficult for user code to accidentally (or intentionally) break the framework. You just stumbled upon one instance of an application of this philosophy.
There are a whole host of catch-phrases that you could use to describe the reason for this design approach. None of them are OOP or MVC specific. The underlying principles have been around far longer than software engineering, they are just approaches towards work and engineering in general. Here are some links if interested:
You ain't gonna need it (YAGNI)
Minimum viable product (MVP)
Worse is better
Muntzing
Feature creep prevention
Keep it simple, stupid (KISS)
Occam's razor

.NET Core MVC clean architecture without repository pattern

I'm trying to create an MVC application in .NET Core 2.1 using the eShopOnWeb example application. I've read that with Entity Framework Core there's no great benefit in adding a repository layer, and that you should just use the EF DbContext directly. How would I do this in a clean architecture scenario? In the example application the DbContext is in the infrastructure layer and the business services logic is all in the application core. I thought about moving either of these, but then won't that undo the separation that clean architecture is trying to achieve? https://github.com/dotnet-architecture/eShopOnWeb and https://www.thereformedprogrammer.net/is-the-repository-pattern-useful-with-entity-framework-core/
I think where a lot of developers get hung up is in thinking that you need your own layers. In onion architecture, you'll generally have a "data" layer, typically referred to as DAL. When you're using an ORM like EF, that is your data layer. In other words, rather than having a separate class library you create to work with the database, EF is that library, and therefore, you'd use it just like you'd use your own DAL library, if you had one.
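As a sketch of what that looks like in practice, here is a hypothetical application-core service that takes the EF Core DbContext directly instead of going through a repository wrapper (the CatalogContext/CatalogItem names loosely echo eShopOnWeb and are simplified here):

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class CatalogItem
{
    public int Id { get; set; }
    public int CatalogBrandId { get; set; }
}

public class CatalogContext : DbContext
{
    public CatalogContext(DbContextOptions<CatalogContext> options) : base(options) { }
    public DbSet<CatalogItem> CatalogItems => Set<CatalogItem>();
}

public class CatalogService
{
    private readonly CatalogContext db;

    public CatalogService(CatalogContext db) => this.db = db;

    // EF itself is the data layer: the service queries the context directly.
    public Task<List<CatalogItem>> GetItemsByBrandAsync(int brandId) =>
        db.CatalogItems
          .Where(i => i.CatalogBrandId == brandId)
          .AsNoTracking()            // read-only query, no change tracking needed
          .ToListAsync();
}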
Try not to get so hung up on layers and "clean" architecture. In truth, the cleanest architecture is a single project. It's only when things start to get unwieldy that it makes sense to break out "layers". In other words, build the simplest unit of functionality you can. If there's a bunch of code involved, you find yourself repeating code, you have too many dependencies, etc., then start to break stuff out as part of the refactoring process. Eventually you may end up with all the fancy layers and 100 different class libraries, or whatever, but it's folly to try to start there. If your app doesn't actually need something, it's stupid to add it. Plain and simple.
For what it's worth, this is probably one of the most unsung benefits of TDD, or test-driven development. You write a test to ensure that one particular thing you want happens, then you write the code to satisfy that test. Importantly, you only write the code to satisfy that test. That means, naturally, you start simple and move to complexity. The refactor cycle of the old red-green-refactor TDD approach is where you clean up the code, abstracting things where necessary, moving logic out into reusable libraries, and such. Even if you don't do the whole test-first approach to coding, it's still highly beneficial to look at development this way. You know what you need to build, so build the simplest thing that technically satisfies that requirement. Then refactor. Build the next requirement, and refactor again. Let your application naturally grow into what it actually needs to be, rather than trying to impose some sort of architecture, pattern, or process on it from the get-go.

How to unit test non-public logic

In some cases unit testing can be really difficult. Normally people say to only test your public API, but in some cases this is just not possible. If your public API depends on files or databases, you can't unit test properly. So what do you do?
Because it's my first time TDD-ing, I'm trying to find "my style" of unit testing, since it seems there is no single way to do it. I found two approaches to this problem, neither of which is flawless. On the one hand, you could friend your assemblies and test the features that are internal. On the other hand, you could implement interfaces (only for the purpose of unit testing) and create fake objects within your unit tests. This approach looks quite nice at first, but becomes uglier as you try to transport data using these fakes.
Is there any "good" solution to this problem? Which of those is less flawed? Or is there even a third approach?
I made a couple of false starts in TDD, grappling with this exact same problem. For me the breakthrough came when I realized what my mentor meant when he said: "We don't want to test the framework." (In our case that was the .NET Framework.)
In your case it sounds as if you have some business logic that interfaces with files and databases. What I would do is abstract the file and database logic into the thinnest layers possible. You can then use mocks (or fakes or stubs) to simulate the file and database layers. This will allow you to test scenarios like: if my database returns this kind of information, does my business logic handle it correctly? Likewise, for file access you can test the code that figures out which file in which path to open, and you can test that your logic can pull apart the contents of any given file correctly and use it correctly.
If for example your file access layer consists of a single function that takes a path name and a file name and returns the contents of the file in a long string then you don't really need to test it because essentially you are making a single call to the framework/OS and there is not a lot that can go wrong there.
At the moment I am working on a system that wraps our database as a bunch of functions that return lists of POCOs. Easy to understand for the business layer and easy to simulate via mocks.
Working this way takes some getting used to, but it is absolutely byoo-ti-full once it clicks in your mind.
Finally, from your question I guess that you are working with legacy code and trying to do TDD for a new component. This is quite a bit harder than doing TDD on a completely new development. If it is at all possible, try to do your first TDD attempts on new (or well isolated) systems. Once you have learnt the mechanics it would be a lot easier to introduce partially TDD'd bits to legacy systems.
If your public API depends on files or databases you can't unit test properly. So what do you do?
There is an abstraction level that can be used.
IFileSystem/ IFileStorage (for files)
IRepository/ IDataStorage (for databases)
Since this level is very thin, its integration tests will be easy to write and maintain. All other code will be unit-test friendly, because it is easy to mock interaction with the filesystem and database.
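As a sketch of how thin that layer can be (the IFileStorage name matches the list above; DiskFileStorage and ImportService are illustrative):

using System.IO;

// A deliberately thin file abstraction: a single pass-through call.
public interface IFileStorage
{
    string ReadAllText(string path);
}

// The real implementation is a one-liner; a couple of integration tests cover it.
public class DiskFileStorage : IFileStorage
{
    public string ReadAllText(string path) => File.ReadAllText(path);
}

// Business logic depends only on the interface, so a unit test can hand it a
// hand-written fake (or a Moq/NSubstitute mock) that returns canned contents.
public class ImportService
{
    private readonly IFileStorage files;

    public ImportService(IFileStorage files) => this.files = files;

    public int CountLines(string path) =>
        files.ReadAllText(path).Split('\n').Length;
}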
On the one hand, you could try to friend your assemblies and test the features that are internal.
People face this problem when their classes violate the single responsibility principle (SRP) and dependency injection (DI) is not used.
There is a good rule that classes should be tested via their public methods/properties only. If internal methods are used by other classes, then it is acceptable to test them. Private or protected methods should not be made internal just for the sake of testing.
On the other hand, you could implement interfaces (only for the purpose of unit testing) and create fake objects within your unit tests.
Yes, interfaces are easy to mock because of the limitations of mocking frameworks.
If you can create an instance (fake/stub) of a type directly, then your dependency does not need to implement an interface.
Sometimes people use interfaces for their domain entities, but I do not support that approach.
To simplify working with fakes there are two patterns used:
Object Mother
Test Data Builder
When I started writing unit tests I used the Object Mother pattern. Now I am using Test Data Builders.
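For example, a minimal Test Data Builder might look like this sketch (the Customer type is hypothetical): each With-method tweaks one field and returns the builder, so a test states only what it actually cares about:

public class Customer
{
    public string Name { get; }
    public string Country { get; }
    public Customer(string name, string country) { Name = name; Country = country; }
}

public class CustomerBuilder
{
    private string name = "Default Name";    // sensible defaults everywhere
    private string country = "NL";

    public CustomerBuilder WithName(string n) { name = n; return this; }
    public CustomerBuilder WithCountry(string c) { country = c; return this; }

    public Customer Build() => new Customer(name, country);
}

// In a test: var customer = new CustomerBuilder().WithCountry("DE").Build();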
There are a lot of good ideas that can help you in the book Working Effectively with Legacy Code by Michael Feathers.
Don't let the hard stuff get in your way... If something is inherently hard to test due to database or file integration, just ignore it for the moment. Most likely you can refactor that hard-to-test stuff into easier-to-test stuff using mocks, dependency injection, etc. Until then, test the easy stuff and build up a good unit test suite... when you do refactor the hard-to-test stuff, you will have much higher confidence that it's not breaking anything else... And refactoring to make something more easily testable IS a good reason to refactor...

How to understand the big picture in a loosely coupled application?

We have been developing code using loose coupling and dependency injection.
A lot of "service" style classes have a constructor and one method that implements an interface. Each individual class is very easy to understand in isolation.
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
It's not easy to jump to collaborators using Eclipse because you have to go via the interfaces. If the interface is Runnable, that is no help in finding which class is actually plugged in. Really it's necessary to go back to the DI container definition and try to figure things out from there.
Here's a line of code from a dependency-injected service class:
// myExpiryCutoffDateService was injected,
Date cutoff = myExpiryCutoffDateService.get();
Coupling here is as loose as can be. The expiry date could be implemented in literally any manner.
Here's what it might look like in a more coupled application.
ExpiryDateService expiryDateService = new ExpiryDateService();
Date cutoff = expiryDateService.getCutoffDate( databaseConnection, paymentInstrument );
From the tightly coupled version, I can infer that the cutoff date is somehow determined from the payment instrument using a database connection.
I'm finding code of the first style harder to understand than code of the second style.
You might argue that when reading this class, I don't need to know how the cutoff date is figured out. That's true, but if I'm narrowing in on a bug or working out where an enhancement needs to slot in, that is useful information to know.
Is anyone else experiencing this problem? What solutions have you found? Is this just something to adjust to? Are there any tools to allow visualisation of the way classes are wired together? Should I make the classes bigger or more coupled?
(I have deliberately left this question container-agnostic, as I'm interested in answers for any.)
While I don't know how to answer this question in a single paragraph, I attempted to answer it in a blog post instead: http://blog.ploeh.dk/2012/02/02/LooseCouplingAndTheBigPicture.aspx
To summarize, I find that the most important points are:
Understanding a loosely coupled code base requires a different mindset. While it's harder to 'jump to collaborators' it should also be more or less irrelevant.
Loose coupling is all about understanding a part without understanding the whole. You should rarely need to understand it all at the same time.
When zeroing in on a bug, you should rely on stack traces rather than the static structure of the code in order to learn about collaborators.
It's the responsibility of the developers writing the code to make sure that it's easy to understand - it's not the responsibility of the developer reading the code.
Some tools are aware of DI frameworks and know how to resolve dependencies, allowing you to navigate your code in a natural way. But when that isn't available, you just have to use whatever features your IDE provides as best you can.
I use Visual Studio and a custom-made framework, so the problem you describe is my life. In Visual Studio, SHIFT+F12 is my friend. It shows all references to the symbol under the cursor. After a while you get used to the necessarily non-linear navigation through your code, and it becomes second-nature to think in terms of "which class implements this interface" and "where is the injection/configuration site so I can see which class is being used to satisfy this interface dependency".
There are also extensions available for VS which provide UI enhancements to help with this, such as Productivity Power Tools. For instance, you can hover over an interface, an info box will pop up, and you can click "Implemented By" to see all the classes in your solution implementing that interface. You can double-click to jump to the definition of any of those classes. (I still usually just use SHIFT+F12 anyway.)
I just had an internal discussion about this, and ended up writing this piece, which I think is too good not to share. I'm copying it here (almost) unedited, but even though it's part of a bigger internal discussion, I think most of it can stand alone.
The discussion is about introduction of a custom interface called IPurchaseReceiptService, and whether or not it should be replaced with use of IObserver<T>.
Well, I can't say that I have strong data points about any of this - it's just some theories that I'm pursuing... However, my theory about cognitive overhead at the moment goes something like this: consider your special IPurchaseReceiptService:
public interface IPurchaseReceiptService
{
    void SendReceipt(string transactionId, string userGuid);
}
If we keep it as the Header Interface it currently is, it only has that single SendReceipt method. That's cool.
What's not so cool is that you had to come up with a name for the interface, and another name for the method. There's a bit of overlap between the two: the word Receipt appears twice. IME, sometimes that overlap can be even more pronounced.
Furthermore, the name of the interface is IPurchaseReceiptService, which isn't particularly helpful either. The Service suffix is essentially the new Manager, and is, IMO, a design smell.
Additionally, not only did you have to name the interface and the method, but you also have to name the variable when you use it:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IPurchaseReceiptService purchaseReceiptService,
    EvoCipher cipher
)
At this point, you've essentially said the same thing thrice. This is, according to my theory, cognitive overhead, and a smell that the design could and should be simpler.
Now, contrast this to use of a well-known interface like IObserver<T>:
public EvoNotifyController(
    ICreditCardService creditCardService,
    IObserver<TransactionInfo> purchaseReceiptService,
    EvoCipher cipher
)
This enables you to get rid of the bureaucracy and reduce the design to the heart of the matter. You still have intention-revealing naming - you only shift the design from a Type Name Role Hint to an Argument Name Role Hint.
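For illustration, an implementation of that role might look like the following sketch (the shape of TransactionInfo is assumed from the SendReceipt signature above):

using System;

public class TransactionInfo
{
    public string TransactionId { get; set; }
    public string UserGuid { get; set; }
}

// The receipt sender now fills the well-known IObserver<T> role.
public class PurchaseReceiptSender : IObserver<TransactionInfo>
{
    public void OnNext(TransactionInfo value)
    {
        // ...send the receipt for value.TransactionId to the user...
    }

    public void OnError(Exception error)
    {
        // log; decide whether a failed receipt should propagate
    }

    public void OnCompleted() { }
}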
When it comes to the discussion about 'disconnectedness', I'm under no illusion that use of IObserver<T> will magically make this problem go away, but I have another theory about this.
My theory is that the reason many programmers find programming to interfaces so difficult is exactly because they are used to Visual Studio's Go to definition feature (incidentally, this is yet another example of how tooling rots the mind). These programmers are perpetually in a state of mind where they need to know what's 'on the other side of an interface'. Why is this? Could it be because the abstraction is poor?
This ties back to the RAP (Reused Abstractions Principle), because if you confirm programmers' belief that there's a single, particular implementation behind every interface, it's no wonder they think that interfaces are only in the way.
However, if you apply the RAP, I hope that slowly, programmers will learn that behind a particular interface, there may be any implementation of that interface, and their client code must be able to handle any implementation of that interface without changing the correctness of the system. If this theory holds, we've just introduced the Liskov Substitution Principle into a code base without scaring anyone with high-brow concepts they don't understand :)
However, because of the looseness of the coupling, looking at a class tells you nothing about the classes around it or where it fits in the larger picture.
This is not accurate. For each class, you know exactly what kinds of objects the class depends on in order to provide its functionality at runtime.
You know them because you know what objects are expected to be injected.
What you don't know is the actual concrete class that will be injected at runtime, which will implement the interface or base class that you know your class(es) depend on.
So if you want to see which class is actually injected, you just have to look at the configuration file for that class to see the concrete classes that are injected.
You could also use facilities provided by your IDE.
Since you refer to Eclipse: Spring has a plugin for it, which also has a visual tab displaying the beans you configure. Have you checked that? Isn't that what you are looking for?
Also check out the same discussion in the Spring Forum.
UPDATE:
Reading your question again, I don't think that this is a real question.
I mean this in the following manner.
Like all things, loose coupling is not a panacea; it has its own disadvantages.
Most tend to focus on the benefits, but like any solution it has drawbacks.
What you do in your question is describe one of its main disadvantages: it is indeed not easy to see the big picture, since everything is configurable and pluggable.
There are other drawbacks one could complain about as well, e.g. that it is slower than tightly coupled applications, and that would also be true.
In any case, to reiterate: what you describe in your question is not a problem you have stumbled upon and for which you can find a standard solution (or any, for that matter).
It is one of the drawbacks of loose coupling, and you have to decide whether this cost is higher than what you actually gain from it, as in any design-decision trade-off.
It is like asking:
Hey, I am using this pattern named Singleton. It works great, but I can't create new objects! How can I get around this problem, guys?
Well, you can't; but if you need to, perhaps Singleton is not for you...
One thing that helped me is placing multiple closely related classes in the same file. I know this goes against the general advice (of having 1 class per file) and I generally agree with this, but in my application architecture it works very well. Below I will try to explain in which case this is.
The architecture of my business layer is designed around the concept of business commands. Command classes (simple DTO with only data and no behavior) are defined and for each command there is a 'command handler' that contains the business logic to execute this command. Each command handler implements the generic ICommandHandler<TCommand> interface, where TCommand is the actual business command.
Consumers take a dependency on ICommandHandler<TCommand>, create new command instances, and use the injected handler to execute those commands. It looks like this:
public class Consumer
{
    private ICommandHandler<CustomerMovedCommand> handler;

    public Consumer(ICommandHandler<CustomerMovedCommand> handler)
    {
        this.handler = handler;
    }

    public void MoveCustomer(int customerId, Address address)
    {
        var command = new CustomerMovedCommand();
        command.CustomerId = customerId;
        command.NewAddress = address;
        this.handler.Handle(command);
    }
}
Now consumers only depend on a specific ICommandHandler<TCommand> and have no notion of the actual implementation (as it should be). However, although the Consumer should know nothing about the implementation, during development I (as a developer) am very much interested in the actual business logic that is executed, simply because development is done in vertical slices; meaning that I'm often working on both the UI and business logic of a simple feature. This means I'm often switching between business logic and UI logic.
So what I did was put the command (in this example the CustomerMovedCommand) and the implementation of ICommandHandler<CustomerMovedCommand> in the same file, with the command first. Because the command itself is concrete (since it's a DTO, there is no reason to abstract it), jumping to the class is easy (F12 in Visual Studio). By placing the handler next to the command, jumping to the command also means jumping to the business logic.
Of course this only works when it is okay for the command and handler to be living in the same assembly. When your commands need to be deployed separately (for instance when reusing them in a client/server scenario), this will not work.
Of course this is just 45% of my business layer. Another big piece (say another 45%) is the queries, and they are designed similarly, using a query class and a query handler. These two classes are also placed in the same file, which again allows me to navigate quickly to the business logic.
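A sketch of what that query side can look like (the generic shapes here are assumptions based on the description, not necessarily the exact code):

using System.Collections.Generic;

public interface IQuery<TResult> { }

public interface IQueryHandler<TQuery, TResult> where TQuery : IQuery<TResult>
{
    TResult Handle(TQuery query);
}

public class Order { }

// Query (a plain DTO) first, handler directly below it in the same file, so
// F12 on the query type lands right next to the business logic.
public class GetUnshippedOrdersQuery : IQuery<List<Order>>
{
    public int PageSize { get; set; }
}

public class GetUnshippedOrdersQueryHandler
    : IQueryHandler<GetUnshippedOrdersQuery, List<Order>>
{
    public List<Order> Handle(GetUnshippedOrdersQuery query)
    {
        // ...the actual business logic for fetching unshipped orders...
        return new List<Order>();
    }
}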
Because the commands and queries are about 90% of my business layer, I can in most cases move very quickly from presentation layer to business layer and even navigate easily within the business layer.
I must say these are the only two cases where I place multiple classes in the same file, but it makes navigation a lot easier.
If you want to learn more about how I designed this, I've written two articles about this:
Meanwhile... on the command side of my architecture
Meanwhile... on the query side of my architecture
In my opinion, loosely coupled code can help you a lot, but I agree with you about its readability.
The real problem is that the names of methods should also convey valuable information.
That is the Intention-Revealing Interface principle, as stated by Domain-Driven Design (http://domaindrivendesign.org/node/113).
You could rename the get method:
// intention revealing name
Date cutoff = myExpiryCutoffDateService.calculateFromPayment();
I suggest you read thoroughly about DDD principles; your code could become much more readable and thus more manageable.
I have found The Brain to be useful in development as a node-mapping tool. If you write some scripts to parse your source into XML that The Brain accepts, you can browse your system easily.
The secret sauce is to put GUIDs in your code comments on each element you want to track; the nodes in The Brain can then be clicked to take you to that GUID in your IDE.
Depending on how many developers are working on your projects, and whether you want to reuse some parts in different projects, loose coupling can help you a lot. If your team is big and the project needs to span several years, having loose coupling can help, as work can be assigned to different groups of developers more easily. I use Spring/Java with lots of DI, and Eclipse offers some graphs to display dependencies. Using F3 to open the class under the cursor helps a lot. As stated in previous posts, knowing the shortcuts for your tool will help you.
One other thing to consider is creating custom classes or wrappers, as they are more easily tracked than the common classes you already have (like Date).
If you use several modules or layers of application, it can be a challenge to understand what exactly the project flow is, so you might need to create/use some custom tool to see how everything is related. I have created such a tool for myself, and it has helped me to understand the project structure more easily.
Documentation!
Yes, you have named the major drawback of loosely coupled code. And while you have probably already realized that in the end it will pay off, it's true that it will always take longer to find "where" to make your modifications, and you might have to open a few files before finding "the right spot"...
But that's where something really important comes in: documentation. It's weird that no answer explicitly mentioned it; it's a MAJOR requirement in all big-sized development.
API documentation
An API doc with a good search feature, in which each file and (almost) each method has a clear description.
"Big picture" documentation
I think it's good to have a wiki that explains the big picture. Bob has made a proxy system? How does it work? Does it handle authentication? What kind of component will use it? Not a whole tutorial, just a place where you can read for 5 minutes and figure out which components are involved and how they are linked together.
I do agree with all the points of Mark Seemann's answer, but when you get into a project for the first time(s), even if you understand the principles behind decoupling well, you'll either need a lot of guessing or some sort of help to figure out where to implement a specific feature you want to develop.
... Again: API doc and a little developer wiki.
I am astounded that nobody has written about the testability (in terms of unit testing, of course) of loosely coupled code and the non-testability (in the same terms) of tightly coupled designs! It is a no-brainer which design you should choose. Today, with all the mock and coverage frameworks, it is obvious - well, at least for me.
Unless you do not unit test your code, or you think you do but in fact you don't...
Testing in isolation can barely be achieved with tight coupling.
You think you have to navigate through all the dependencies from your IDE? Forget about it! It is the same situation as with compilation and runtime. Hardly any bug can be found during compilation; you cannot be sure whether something works unless you test it, which means executing it. Want to know what is behind the interface? Put a breakpoint and run the goddamn application.
Amen.
...updated after the comment...
Not sure if it is going to serve you, but in Eclipse there is something called the Type Hierarchy view. It shows you all the implementations of an interface within your project (not sure about the whole workspace). You can just navigate to the interface and press F4. It will then show you all the concrete and abstract classes implementing the interface.

Do you use AOP (Aspect Oriented Programming) in production software?

AOP is an interesting programming paradigm in my opinion. However, there haven't been discussions about it yet here on stackoverflow (at least I couldn't find them). What do you think about it in general? Do you use AOP in your projects? Or do you think it's rather a niche technology that won't be around for a long time or won't make it into the mainstream (like OOP did, at least in theory ;))?
If you do use AOP then please let us know which tools you use as well. Thanks!
Python supports AOP by letting you dynamically modify its classes at runtime (which in Python is typically called monkeypatching rather than AOP). Here are some of my AOP use cases:
I have a website in which every page is generated by a Python function. I'd like to take a class and make all of the webpages generated by that class password-protected. AOP comes to the rescue; before each function is called, I do the appropriate session checking and redirect if necessary.
I'd like to do some logging and profiling on a bunch of functions in my program during its actual usage. AOP lets me calculate timing and print data to log files without actually modifying any of these functions.
I have a module or class full of non-thread-safe functions and I find myself using it in some multi-threaded code. Some AOP adds locking around these function calls without having to go into the library and change anything.
This kind of thing doesn't come up very often, but whenever it does, monkeypatching is VERY useful. Python also has decorators which implement the Decorator design pattern (http://en.wikipedia.org/wiki/Decorator_pattern) to accomplish similar things.
Note that dynamically modifying classes can also let you work around bugs or add features to a third-party library without actually having to modify that library. I almost never need to do this, but the few times it's come up it's been incredibly useful.
Yes.
Orthogonal concerns, like security, are best done with AOP-style interception. Whether that is done automatically (through something like a dependency injection container) or manually is unimportant to the end goal.
One example: the "before/after" attributes in xUnit.net (an open source project I run) are a form of AOP-style method interception. You decorate your test methods with these attributes, and just before and after that test method runs, your code is called. It can be used for things like setting up a database and rolling back the results, changing the security context in which the test runs, etc.
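As a sketch of that mechanism (this uses the xUnit 2.x Xunit.Sdk API; the database behaviour in the comments is hypothetical):

using System.Reflection;
using Xunit.Sdk;

// Before runs just before the decorated test method, After runs just after it.
public class ResetDatabaseAttribute : BeforeAfterTestAttribute
{
    public override void Before(MethodInfo methodUnderTest)
    {
        // e.g. begin a transaction or snapshot the test database
    }

    public override void After(MethodInfo methodUnderTest)
    {
        // e.g. roll the transaction back so the test leaves no trace
    }
}

// Usage: decorate a test method with [Fact] and [ResetDatabase].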
Another example: the filter attributes in ASP.NET MVC also act like specialized AOP-style method interceptors. One, for instance, allows you to say how unhandled errors should be treated, if they happen in your action method.
Many dependency injection containers, including Castle Windsor and Unity, support this behavior either "in the box" or through the use of extensions.
I don't understand how one can handle cross-cutting concerns like logging, security, transaction management, exception-handling in a clean fashion without using AOP.
Anyone using the Spring framework (probably about 50% of Java enterprise developers) is using AOP whether they know it or not.
At Terracotta we use AOP and bytecode instrumentation pretty extensively to integrate with and instrument third-party software. For example, our Spring integration is accomplished in large part by using AspectWerkz. In a nutshell, we need to intercept calls to Spring beans and bean factories at various points in order to cluster them.
So AOP can be useful for integrating with third party code that can't otherwise be modified. However, we've found there is a huge pitfall - if possible, only use the third party public API in your join points, otherwise you risk having your code broken by a change to some private method in the next minor release, and it becomes a maintenance nightmare.
AOP and transaction demarcation are a match made in heaven. We use Spring AOP @Transactional annotations; it makes for easier and more intuitive transaction demarcation than I've ever seen anywhere else.
We used AspectJ in one of my big projects for quite some time. The project was made up of several web services, each with several functions, which together were the front end for a complicated document processing/querying system. Somewhere around 75k lines of code. We used aspects for two relatively minor pieces of functionality.
First was tracing application flow. We created an aspect that ran before and after each function call to print out "entered 'function'" and "exited 'function'". With the function selector thing (pointcut maybe? I don't remember the right name) we were able to use this as a debugging tool, selecting only functions that we wanted to trace at a given time. This was a really nice use for aspects in our project.
The second thing we did was application specific metrics. We put aspects around our web service methods to capture timing, object information, etc. and dump the results in a database. This was nice because we could capture this information, but still keep all of that capture code separate from the "real" code that did the work.
I've read about some nice solutions that aspects can bring to the table, but I'm still not convinced that they can really do anything that you couldn't do (maybe better) with "normal" technology. For example, I couldn't think of any major feature or functionality that any of our projects needed that couldn't be done just as easily without aspects - where I've found aspects useful are the kind of minor things that I've mentioned.
I use AOP heavily in my C# applications. I'm not a huge fan of having to use attributes, so I used Castle DynamicProxy and Boo to apply aspects at runtime without polluting my code.
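For reference, the Castle DynamicProxy side of that looks roughly like this sketch (IOrderService and the wiring are illustrative):

using System;
using Castle.DynamicProxy;

// The aspect lives in an interceptor, so intercepted classes need no attributes.
public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("entering " + invocation.Method.Name);
        invocation.Proceed();                  // call the real method
        Console.WriteLine("exiting " + invocation.Method.Name);
    }
}

// Wiring, typically done once in the composition root or container:
// var proxy = new ProxyGenerator().CreateInterfaceProxyWithTarget<IOrderService>(
//     realService, new LoggingInterceptor());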
We use AOP in our session facade to provide a consistent framework for our customers to customize our application. This allows us to expose a single point of customization without having to add manual hook support in for each method.
Additionally, AOP provides a single point of configuration for additional transaction setup and teardown, and the usual logging things. All told, much more maintainable than doing all of this by hand.
The main application I work on includes a script host. AOP allows the host to examine the properties of a script before deciding whether or not to load the script into the Application Domain. Since some of the scripts are quite cumbersome, this makes for much faster loading at run-time.
We also use and plan to use a significant number of attributes for things like compiler control, flow control and in-IDE debugging, which do not need to be part of the final distributed application.
We use PostSharp for our AOP solution. We have caching, error-handling, and database-retry aspects that we currently use, and we are in the process of making our security checks an aspect.
Works great for us. Developers really do like the separation of concerns. The Architects really like having the platform level logic consolidated in one location.
The PostSharp library is a post-compiler that does the injection of the code. It has a library of pre-defined intercepts that are brain-dead easy to implement. It feels like wiring in event handlers.
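A sketch of what such an aspect can look like, using PostSharp's OnMethodBoundaryAspect (the logging body is a stand-in for a real error-handling or retry policy):

using System;
using PostSharp.Aspects;

[Serializable]
public class LogExceptionAspect : OnMethodBoundaryAspect
{
    public override void OnException(MethodExecutionArgs args)
    {
        Console.WriteLine(args.Method.Name + " failed: " + args.Exception.Message);
        // a real aspect might wrap, swallow, or retry here
    }
}

// Usage: put [LogExceptionAspect] on any method; the post-compiler weaves the
// OnException call into the compiled method body.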
Yes, we do use AOP in application programming. I prefer AspectJ for integrating AOP into my Spring applications. Have a look at this article for a broader perspective:
http://codemodeweb.blogspot.in/2018/03/spring-aop-and-aspectj-framework.html