In my response to Anders Hejlsberg spreading FUD about aspect-oriented programming, I mentioned the common misconception that Aspect-Oriented Programming (AOP) and Dependency Injection (DI) are competing concepts. In this post, I will further clarify the differences and similarities between AOP and DI.
The timing seems perfect, because Dino Esposito just published the article Aspect-Oriented Programming, Interception and Unity 2.0 in the December 2010 issue of MSDN Magazine. This is a great article and I highly recommend that anyone involved in .NET development read it. Like most dependency-injection frameworks, and several general-purpose frameworks (WCF, ASP.NET MVC), Unity offers an AOP-like feature: interceptors.
What is Dependency Injection?
Dependency Injection is a design pattern that strives to reduce explicit dependencies between components.
One of the benefits of reducing dependencies (moving from a tightly coupled architecture to a loosely coupled one) is that components become less aware of each other, which achieves a better separation of concerns by implementing the “program to an interface, not to an implementation” design principle.
A common misinterpretation of this design principle among C# and Java developers is that clients should consume interfaces (the thing you declare with the interface keyword) instead of classes. I prefer the following wording: “program to a contract, not to an implementation”. The contract of a class is composed of its public members, augmented by their documentation (plus their formal preconditions, post-conditions and invariants, if applicable). To be even more precise: “program to the documentation, not to the implementation”. So if your aim is just a better separation of concerns, you don’t really need interfaces; you just need the public, internal, protected and private keywords. It’s important to note that in object-oriented design, interfaces are not a silver bullet. Not all framework designers follow the same philosophy: the Java class library relies more heavily on interfaces than the .NET class library does, probably because the .NET designers had a negative experience with the all-interface approach of COM.
Programming to an interface (instead of a class) is useful when your component has to talk to many implementations of the interface. There are innumerable cases where this is useful (after all, we all want our architecture to be extensible), but let’s face it: in as many cases, when our component needs to talk to another, we know perfectly well which implementation we have in mind, and there will never be a second implementation running side-by-side. So why use an interface here?
There are two cases when you want to program to an interface even if you know you’ll always have a single implementation:
- You depend on a component that has not been delivered yet and need to start programming – and, more importantly, testing your code – right now. You need to simulate (mock) this component until it is delivered, only for the purpose of testing.
- You depend on a component that has a persistent state (such as a database), real-world effect (such as a credit card processing system) or simply depend on a slow or unreliable resource (web service). You need to substitute these components with an implementation of your own, only for the purpose of testing.
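Both testing scenarios above boil down to the same move: program to a contract, and substitute the implementation in tests. Here is a minimal Java sketch of the second scenario; all names (`PaymentProcessor`, `OrderService`, and so on) are hypothetical and chosen only for illustration.

```java
// Illustrative sketch: the payment component is consumed through an
// interface so that tests can substitute a fake for the real processor.
interface PaymentProcessor {
    boolean charge(String cardNumber, int amountCents);
}

// The real implementation would talk to an external system with
// real-world effects; we never want to hit it from a unit test.
class LivePaymentProcessor implements PaymentProcessor {
    public boolean charge(String cardNumber, int amountCents) {
        throw new UnsupportedOperationException("would call the real gateway");
    }
}

// Test double: records calls and always succeeds.
class FakePaymentProcessor implements PaymentProcessor {
    int calls = 0;
    public boolean charge(String cardNumber, int amountCents) {
        calls++;
        return true;
    }
}

// The client programs to the contract; the implementation is injected.
class OrderService {
    private final PaymentProcessor processor;
    OrderService(PaymentProcessor processor) { this.processor = processor; }
    boolean checkout(int amountCents) {
        return processor.charge("4111-1111-1111-1111", amountCents);
    }
}

public class TestingDemo {
    public static void main(String[] args) {
        FakePaymentProcessor fake = new FakePaymentProcessor();
        OrderService service = new OrderService(fake); // constructor injection
        System.out.println(service.checkout(1999)); // true
        System.out.println(fake.calls);             // 1
    }
}
```

Note that `OrderService` never names a concrete processor; which implementation it talks to is decided entirely by whoever constructs it.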
In these situations (which all happen to be related to testing), it’s good that components are not wired together in source code – even if we know how they will be wired together in the final setup. So how do we wire these components together? This is where Dependency Injection comes into play.
With the DI design pattern, you instruct a “container” how the objects should be wired together, and you ask the container to do the wiring. Most of the time, you don’t create new objects using the class constructor, but through a factory method of the container. The container figures out which implementation you need to be wired to and returns it to you.
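To make the idea concrete, here is a deliberately tiny container sketch in Java. It only shows the core mechanism (register a factory per contract, resolve through the container instead of calling `new`); real containers such as Unity or Spring add lifetime management, constructor injection, configuration files, and much more. All names here are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// A minimal "container": clients ask it for an implementation of a
// contract instead of constructing one themselves.
class MiniContainer {
    private final Map<Class<?>, Supplier<?>> registrations = new HashMap<>();

    // Record how to build an implementation of the given contract.
    <T> void register(Class<T> contract, Supplier<T> factory) {
        registrations.put(contract, factory);
    }

    // The factory method clients use instead of "new".
    @SuppressWarnings("unchecked")
    <T> T resolve(Class<T> contract) {
        return (T) registrations.get(contract).get();
    }
}

interface Greeter { String greet(String name); }

public class ContainerDemo {
    public static void main(String[] args) {
        MiniContainer container = new MiniContainer();
        // The wiring decision lives here, not in the client code.
        container.register(Greeter.class, () -> name -> "Hello, " + name);

        Greeter greeter = container.resolve(Greeter.class);
        System.out.println(greeter.greet("world")); // Hello, world
    }
}
```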
Dependency injection addresses the problem of wiring loosely-coupled components together. Aspect-oriented programming (AOP) solves a very different issue: taking a reality where some features (such as transaction handling or auditing) affect virtually all business requirements, and expressing this complex reality in a compact way, with minimal duplication of code (DRY principle). When we think of auditing, we want to look in a single source file. When we see a piece of code related to customer management, we don’t want to see auditing code. As you know, this is impossible to achieve with plain object-oriented programming, and that’s why aspect-oriented programming was invented.
Aspect-oriented programming allows you to specify an implementation pattern (using source code, not natural language) into a so-called aspect, and to apply the aspect to base code, typically to business or UI code.
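A real AOP framework weaves the aspect into the base code without the base code mentioning it. The following plain-Java sketch cannot do the weaving, but it shows the pattern an auditing aspect would capture: the cross-cutting behavior is written once, in one place, instead of being copy-pasted into every business method. The names are hypothetical.

```java
import java.util.function.Supplier;

// The auditing pattern, expressed once. An AOP framework would apply
// this around methods automatically; here we apply it by hand.
class Audit {
    static <T> T around(String operation, Supplier<T> body) {
        System.out.println("AUDIT begin: " + operation);
        try {
            return body.get();
        } finally {
            System.out.println("AUDIT end:   " + operation);
        }
    }
}

class CustomerService {
    String renameCustomer(String newName) {
        // With a real aspect weaver, this wrapper call would not appear
        // in the business code at all.
        return Audit.around("renameCustomer",
                () -> "customer renamed to " + newName);
    }
}

public class AspectSketch {
    public static void main(String[] args) {
        System.out.println(new CustomerService().renameCustomer("Alice"));
    }
}
```

The whole point of AOP is to remove even that one remaining `Audit.around(...)` line from the business code, so that auditing lives in a single source file and customer management code shows no trace of it.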
Similarities and Differences between DI and AOP
There are some similarities in the objectives of DI and AOP:
- Both achieve a loosely coupled architecture.
- Both achieve a better separation of concerns.
- Both offload some concerns from the base code.
However, DI and AOP differ significantly in the situations where each is useful:
- DI is good when you have a dependency on a component but don’t know, or don’t want to hard-code, which implementation you will be wired to.
- AOP is good when you need to apply a common behavior to a large number of elements of code. The target code is not necessarily dependent on this behavior to be applied.
How did dependency injection ever become associated with aspect-oriented programming? Simply because the DI pattern makes it easy to add behaviors to components. Here’s how: when you ask the container for a dependency, you expect to receive an implementation of an interface. You can probably guess which implementation you will receive, but since you play the game, you just program to the interface. If you ask the container to add a behavior (for instance tracing) to the component, you will not receive the component itself; instead, you will receive a dynamic proxy. The dynamic proxy stands between your code and the component, and applies the behaviors you added to the component.
Because DI frameworks have the ability to put behaviors between the client and the implementation of a component, DI has been identified as one of the technologies able to deliver AOP.
There are two technologies that can deliver dynamic proxies in .NET: dynamic code generation (typically using System.Reflection.Emit) and remoting proxies (see RealProxy).
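The same mechanism exists on the Java side as `java.lang.reflect.Proxy`, which generates, at run time, a class implementing the requested interfaces and routes every call through an `InvocationHandler`. Here is a minimal tracing proxy sketch (the `Repository` names are invented for the example):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

interface Repository { String find(int id); }

class InMemoryRepository implements Repository {
    public String find(int id) { return "record-" + id; }
}

public class ProxyDemo {
    // Wraps any implementation of a contract in a run-time generated
    // proxy that traces every call before delegating to the target.
    @SuppressWarnings("unchecked")
    static <T> T withTracing(Class<T> contract, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("TRACE enter: " + method.getName());
            Object result = method.invoke(target, args);
            System.out.println("TRACE exit:  " + method.getName());
            return result;
        };
        return (T) Proxy.newProxyInstance(
                contract.getClassLoader(), new Class<?>[] { contract }, handler);
    }

    public static void main(String[] args) {
        Repository repo = withTracing(Repository.class, new InMemoryRepository());
        System.out.println(repo.find(42)); // record-42, with trace lines around it
    }
}
```

This is exactly what a DI container does for you when you register an interception behavior: the client still sees only the interface, but the object behind it is a generated proxy, not the component itself.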
Please read Dino Esposito’s article in MSDN Magazine for details about dynamic proxies.
Proxy-Based AOP: The Good
What I love about the proxy-based approach is how AOP blends into the DI concept. DI lets you configure how the container should build components (injecting dependencies, setting properties, calling initialization methods), and it also lets you inject aspects at the interfaces of components. You can configure aspects just like any other dependency, in an XML file or in C# code, as if the issues of building and assembling components were unified.
The ability to add aspects just by changing a configuration file may prove very useful in production. Suppose there’s some issue somewhere and you want to trace all accesses to a specific component. You can do that without recompiling the application. This may be a decisive advantage in formal environments, where it can take months to go from a bug report to the deployment of a fix.
Proxy-Based AOP: The Bad
Proxy-based AOP, as implemented by DI frameworks, has serious limitations:
- It only works on the surface of the component, as exposed through an interface. It does not apply to methods that are internal to the component, whether public or private, static or instance. Remember that a component is a part of an application and generally includes several classes.
- It only works when the client code obtained its reference to the component from the container. Aspects will not be applied if the component invokes its own methods, or if the component is invoked without going through the container. This can be seriously misleading!
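The self-invocation pitfall in the second point deserves a demonstration, because it surprises almost everyone the first time. In the sketch below (illustrative names, using Java's `java.lang.reflect.Proxy` in place of a DI framework's interceptor), an external call to `outer()` is intercepted, but the internal call it makes to `inner()` bypasses the proxy entirely:

```java
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

interface Service {
    String outer();
    String inner();
}

class ServiceImpl implements Service {
    public String outer() {
        // Self-invocation: this call goes through "this", not through
        // the container-provided proxy, so no aspect will ever see it.
        return "outer -> " + inner();
    }
    public String inner() { return "inner"; }
}

public class SelfInvocationDemo {
    // Wraps a Service in a dynamic proxy that counts intercepted calls.
    static Service counted(Service target, AtomicInteger counter) {
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(),
                new Class<?>[] { Service.class },
                (proxy, method, args) -> {
                    counter.incrementAndGet();
                    return method.invoke(target, args);
                });
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger();
        Service service = counted(new ServiceImpl(), counter);
        service.outer();
        // Only the external call to outer() was intercepted; the internal
        // call to inner() went straight to the target object.
        System.out.println(counter.get()); // prints 1, not 2
    }
}
```

A build-time weaver does not have this blind spot, because the aspect is compiled into the target method itself rather than bolted onto a wrapper.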
- Since aspects are applied at runtime, there is no possibility for build-time verification. Build-time AOP frameworks such as PostSharp will emit compilation errors if the aspect is used incorrectly, and will even provide the aspect developer with different ways to verify, at build time, that the aspect has been used in a meaningful way (for instance, don’t add a caching aspect to a method that returns a stream, or to a method that already has a security aspect).
- It is difficult to figure out (a) which aspects are applied to the piece of code you’re working on and (b) to which pieces of code the aspect you’re working on has been applied. Full frameworks such as PostSharp or AspectJ can display this information inside the IDE, substantially improving the understandability of source code.
- Run-time AOP has higher overhead than build-time frameworks. This may be of minor importance if you only apply aspects to coarse-grained components, but the mere generation of dynamic proxies has a significant cost if you have hundreds of classes.
- Dynamic proxies are always a side feature of dependency injection frameworks and do not implement the full AOP vision, even where it would be technically feasible. They are mostly limited to method interception, which hardly compares to what AspectJ and PostSharp have to offer.
Proxy-Based AOP: The Ugly
There’s always a risk when a practice becomes mainstream: it starts being used just for the sake of itself, because it makes you feel safe. This is when a practice becomes a dogma.
It clearly happened to test-driven development, which suddenly pushed the community to write tests at meaningless levels of granularity. The same is happening with dependency injection, which tends to be used to isolate classes from each other. Remember that dependency injection was initially proposed to isolate components, i.e., parts of an application of rather coarse granularity. I am concerned that we are ‘forced’ (by social pressure) to write more complex applications just to meet testing ‘requirements’ that do not result in higher quality. Remember that unit testing is an instrument, not an objective. The ultimate objective is the quality of your application during its whole lifetime.
Don’t get me wrong, I am not suggesting that you should not write unit tests. I am concerned that religiously following a practice may actually degrade the quality of your code.
The same issue exists with proxy-based AOP. If DI frameworks made you excited about the power of AOP, you may be tempted to restructure your code just to use more of this proxy-based machinery. Don’t. Adding aspects to the surface of components using proxies is simple and elegant. But if you are making your code DI-friendly just because you need aspects, you are probably doing something wrong. The principal mantra of AOP is that you should not have to change your base code. If you are changing it, you should probably look at a real AOP framework such as PostSharp or AspectJ.
Aspect-Oriented Programming and Dependency Injection are very different concepts, but there are limited cases where they fit well together. In those situations, using the AOP facility of a DI framework makes perfect sense. For the other cases you have two options: abuse dynamic proxies (a non-solution), or use a dedicated AOP framework.
This is not just me, not just PostSharp, and not just .NET. The same dilemma hangs over Java programmers, who have to choose between the proxy-based Spring AOP and the build-time AspectJ (read the summary of Ramnivas Laddad’s talk on this topic).
AOP and DI are not competing technologies. You can use both in the same project. What’s important is to understand what each technology was designed for, and to focus on the quality of your code.