Archive

Source code used in these blog posts is available on GitHub

At my last consulting gig, we were required to use an SOA framework to interact with enterprise-level data. The framework was SOA almost entirely in name: it was completely home-grown, something of a misguided "pet project" of the chief of IT, insecure, built on shaky technologies, and created to solve a problem that was either non-existent or solvable by much simpler means. My team was very frustrated by it, but as a consultant you sometimes have to pick your battles: go for the easy wins first, build trust, and then gradually tackle the larger problems. Depending on the organization, that can be a slow process that takes years.

In the meantime, we were stuck with a framework that didn't play well with multithreaded apps (like, say, a web server), would often crash, and sometimes just wouldn't work at all. We had to build a working app on top of this mess, and since we were trying to build trust in the organization, we needed to ensure a certain level of reliability in the app we delivered. What we put together was an elaborate system of transactions, retries, error handling, and logging. It worked pretty well and usually hid the unreliable nastiness from the end user. However, to use this system, changes had to be made to every method that touched the SOA framework. Before long, I got tired of adding this code to every method, forgetting to add it only to see a bug get filed, having to remind others, and violating single responsibility over and over again. I wondered if there was a better way, and that's when I first became interested in AOP, and PostSharp specifically.

Enough ranting about bygone projects though: you can use PostSharp to improve transaction management, regardless of whether you're building an app on a brittle home-grown SOA or not. Here's an example of an OnMethodBoundaryAspect that will give you the basics of a typical transaction (i.e. start, commit, rollback):

[Serializable]
public class TransactionScopeAttribute : OnMethodBoundaryAspect
{
	[NonSerialized] private ICharityService _charityService;
	private string _methodName;
	private string _className;

	public override void CompileTimeInitialize(MethodBase method, AspectInfo aspectInfo)
	{
		_className = method.DeclaringType.Name;
		_methodName = method.Name;
	}

	public override void RuntimeInitialize(System.Reflection.MethodBase method)
	{
		// in practice, the begin/rollback/commit might be in a more general service
		// but for convenience in this demo, they reside in CharityService alongside
		// the normal repository methods
		_charityService = new CharityService();
	}

	public override void OnEntry(MethodExecutionArgs args)
	{
		_charityService.BeginTransaction(); 
	}

	public override void OnException(MethodExecutionArgs args)
	{
		_charityService.RollbackTransaction();
		var logMsg = string.Format("exception in {0}.{1}", _className, _methodName);
		// do some logging
	}

	public override void OnSuccess(MethodExecutionArgs args)
	{
		_charityService.CommitTransaction();
	}
}

Really simple--there's nothing here that we haven't seen in my other blog posts. Note that it's usually a good idea to rethrow the exception once you've handled the rollback, and PostSharp does that by default (the default FlowBehavior in OnException is to rethrow). Simply tag every method that needs to run inside a transaction with [TransactionScope], and you've saved yourself from writing the tiresome try/catch begin/commit/rollback code over and over, with one simple aspect. We've also gained a clean separation of concerns, and a central place to put any logging code or special exception handling.
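
To make the usage concrete, here's roughly what a tagged method looks like (the repository and entity below are placeholders, not the sample app's exact types):

public class Charity
{
	public string Name { get; set; }
}

public class CharityRepository
{
	// the aspect supplies begin/commit/rollback; the method body is just persistence
	[TransactionScope]
	public void AddCharity(Charity charity)
	{
		// save the charity using whatever data access the app actually uses
	}
}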

One thing that the above code doesn't address is "nested" transactions. In my example code, the Begin/Commit/Rollback transaction methods are just placeholders that don't actually do anything. In a production app, what you use to manage the transaction depends greatly on the underlying data access. If you are using ADO.NET connections (SqlConnection, ODBC, Oracle, etc.), then you might want to use System.Transactions.TransactionScope (not to be confused with the name of this aspect) to make nesting easy, along with other things like distributed transactions. If you are using some other persistence technology, you may have to use a different API. Either way, remember to take nested transactions into account (i.e. two methods that both have the [TransactionScope] attribute applied, where one calls the other).
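
If you do go the System.Transactions route, a boundary aspect built on it might look something like this. This isn't the sample app's code--just a minimal sketch showing how TransactionScopeOption.Required makes nesting a non-issue, with the scope stashed in MethodExecutionTag so the exit advice can find it:

using System;
using System.Transactions;
using PostSharp.Aspects;

[Serializable]
public class AmbientTransactionAttribute : OnMethodBoundaryAspect
{
	public override void OnEntry(MethodExecutionArgs args)
	{
		// Required joins an ambient transaction if one already exists,
		// so nested [AmbientTransaction] methods "just work"
		args.MethodExecutionTag = new TransactionScope(TransactionScopeOption.Required);
	}

	public override void OnSuccess(MethodExecutionArgs args)
	{
		((TransactionScope)args.MethodExecutionTag).Complete();
	}

	public override void OnExit(MethodExecutionArgs args)
	{
		// OnExit runs after OnSuccess or OnException;
		// disposing a scope that was never completed rolls it back
		((TransactionScope)args.MethodExecutionTag).Dispose();
	}
}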

Now that we have a basic transaction aspect, let's go back to my original example: the service I was using would fail or succeed inconsistently. What I need to add is a retry loop: "if at first you don't succeed, try, try again." To do this, I can't use OnMethodBoundaryAspect anymore; I need MethodInterceptionAspect instead. Of course, I don't want to keep retrying forever, so I'll limit the number of retries with a hardcoded int value (or perhaps you'd want it in an App.Config or Web.Config file so you can change it without recompiling). I also know that a retry is only worth doing if a specific type of exception is thrown, say a DataException. On any other type of exception, it's pointless to retry (maybe the service crashed...again), so just log that exception and forget the retries (in a production app, you might want to automatically contact some on-call IT staff).

[Serializable]
public class TransactionScopeAttribute : MethodInterceptionAspect
{
	[NonSerialized] private ICharityService _charityService;
	[NonSerialized] private ILogService _logService;

	private int _maxRetries;
	private string _methodName;
	private string _className;

	public override void CompileTimeInitialize(MethodBase method, AspectInfo aspectInfo)
	{
		_methodName = method.Name;
		_className = method.DeclaringType.Name;
	}

	public override void RuntimeInitialize(System.Reflection.MethodBase method)
	{
		_charityService = new CharityService();
		_logService = new LogService();
		_maxRetries = 4;            // you could load this from XML instead
	}

	public override void OnInvoke(MethodInterceptionArgs args)
	{
		var retries = 1;
		while (retries <= _maxRetries)
		{
			try
			{
				_charityService.BeginTransaction();
				args.Proceed();
				_charityService.CommitTransaction();
				break;
			}
			catch (DataException)
			{
				_charityService.RollbackTransaction();
				// strictly less-than, so the final failed attempt falls through to the throw below
				if (retries < _maxRetries)
				{
					_logService.AddLogMessage(string.Format(
						"[{3}] Retry #{2} in {0}.{1}",
						_className, _methodName,
						retries, DateTime.Now));
					retries++;
				}
				else
				{
					_logService.AddLogMessage(string.Format(
						"[{2}] Max retries exceeded in {0}.{1}",
						_className, _methodName, DateTime.Now));
					_logService.AddLogMessage("-------------------");
					throw;
				}
			}
			catch (Exception ex)
			{
				_charityService.RollbackTransaction();
				_logService.AddLogMessage(string.Format(
					"[{3}] {2} in {0}.{1}",
					_className, _methodName,
					ex.GetType().Name, DateTime.Now));
				_logService.AddLogMessage("-------------------");
				throw;
			}
		}
		_logService.AddLogMessage("-------------------");
	}
}

Yikes, this one looks a little scary! But don't worry, let's break it down into smaller, bite-size pieces. CompileTimeInitialize and RuntimeInitialize shouldn't look much different: they're just storing the class and method names, initializing the services we'll need, and initializing the maximum number of retries allowed.

For OnInvoke, let's break it down into the four possible scenarios that it handles:

  1. The method runs without exception and everything works. This is the happy path, and it's the block of code inside the try { }.
  2. The method fails with a DataException, and we haven't reached the retry limit yet. This is the code in the first branch of the if/else inside the first catch { }.
  3. The method fails, and now we've exhausted the maximum number of retries. This is the code in the second branch of the if/else inside the first catch { }.
  4. The method fails with some other exception where a retry won't help. This is the block of code in the second catch { }.

In my sample app, I've made a CharityService whose methods throw a DataException roughly half the time, succeed roughly half the time, and throw a non-DataException exception about 1 time in 50. When you run the sample application, you'll be able to see in the log window how many retries are taking place (if any), along with the begins, rollbacks, and commits. The user only sees a MessageBox error when the retry limit is exceeded or when a non-DataException exception is thrown.

Screenshot of sample app

By using this aspect, you can give your application an improved level of robustness. The aspect code is getting large, but manageable. If it needs to get much more complex, it might be time to refactor: consider putting each case into its own class, and just use the OnInvoke method to establish a general flow.
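
If it does grow, one possible shape for that refactor (the interface below is hypothetical, not something from the sample code) is to pull each outcome into its own handler class and let OnInvoke just orchestrate:

// hypothetical: one implementation per outcome, each owning its own logging/escalation
public interface IRetryOutcomeHandler
{
	void HandleRetry(string className, string methodName, int attempt);
	void HandleRetriesExceeded(string className, string methodName);
	void HandleUnexpectedException(string className, string methodName, Exception ex);
}

OnInvoke then shrinks down to the begin/proceed/commit flow plus a call into whichever handler applies.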

The real-world example I gave at the beginning of this post is (hopefully) a rare case, but even the best of architectures can fail from time-to-time. You don't have to dread handling those situations when you are using PostSharp to implement SOLID principles in your application.

Matthew D. Groves is a software development engineer with Telligent, and blogs at mgroves.com.

Source code used in these blog posts is available on GitHub

One of the best uses of AOP is to take cross-cutting concerns that repeat themselves over and over in your system, and move them into their own class. This is a version of the Single Responsibility Principle (SRP). Authentication and authorization are important parts of many applications, but too often the code to check if a user is authorized is spread all over the app, making logic changes difficult and regression common. A class should have one and only one reason to change, so let's get the "auth" stuff into its own class.

The main functionality of authentication isn't something that's normally spread throughout an entire application. In a web app, for instance, login and authentication are typically handled on a single login page (if they're handled at the web application level at all--they could be handled at the server level), and login information is stored in some sort of token with an expiration on it, so that the user is automatically logged out after a certain period of time. The only cross-cutting concern, then, is that each page requiring authentication needs to verify that the user is still logged in. You could certainly use PostSharp to do this, but in my opinion it isn't a particularly strong use case for PostSharp.

Authorization, on the other hand, is a great place to use PostSharp. Too often, the logic about which user role is allowed to perform which activity is messily scattered all over the application, and PostSharp can be used to organize it, as well as provide reusable object-oriented components. Additionally, sometimes role-based security is too broad, and more fine-grained control is needed--for instance, restricting editing of data unless you are the user who first created it. Let's look at a sample application that's very similar to one I worked on as a consultant, and see how PostSharp can help.

An application is needed for users to fill out government forms. This would probably be a website, but I'll be using a WinForms app just to keep things simple for now. Each user can submit government forms (just a single textbox in my example). Administrators can delete the forms, but normal users can only submit (add) new forms and look at their existing forms.

I won't list all the code here, but here's a service class that provides basic functionality for the above requirements. This class is using a static collection as persistence, but of course a database, webservice, etc, would be used in a real app:

public class GovtFormService : IGovtFormService
{
	private static readonly IList<GovtForm> _govtFormsDatabase = new List<GovtForm>();

	public GovtFormService()
	{
		// build up some initial entries of the static list
	}

	public void SubmitForm(GovtForm form)
	{
		_govtFormsDatabase.Add(form);
	}

	public IEnumerable<GovtForm> GetAllForms()
	{
		return _govtFormsDatabase;
	}

	public GovtForm GetFormById(Guid guid)
	{
		return _govtFormsDatabase.FirstOrDefault(form => form.FormId == guid);
	}
}

Wire that service up to the Windows form, hook the form's buttons up to each of those methods, and you have a basic application. However, the requirement was that users should only be able to view the details of their own forms, and GetFormById currently doesn't do any checking at all. We could put some if-statements in there, but let's instead create an aspect that we can use anywhere:

[Serializable]
public class AuthorizeReturnValueAttribute : OnMethodBoundaryAspect
{
	[NonSerialized] private IAuth Auth;

	public override void RuntimeInitialize(System.Reflection.MethodBase method)
	{
		Auth = new AuthService();
	}

	public override void OnSuccess(MethodExecutionArgs args)
	{
		var singleForm = args.ReturnValue as GovtForm;
		if (singleForm != null)
		{
			if(!Auth.CurrentUserHasPermission(singleForm, Permission.Read))
			{
				MessageBox.Show(
				 "You are not authorized to view the details of that form",
				 "Authorization Denied!");
				args.ReturnValue = null;
			}
			return;
		}
	}
}

This is one way to approach it: check to see if the method being intercepted is returning a GovtForm, and if it is, make sure it's a form that belongs to the current user. Note that the IAuth field is marked as NonSerialized, and that it's initialized in the RuntimeInitialize override. Instead of hardcoding the dependency, you could use an IoC container as a service locator (see the previous post about dependency inversion).
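
For example, RuntimeInitialize could resolve the dependency from the container. The "Ioc" class below is a stand-in for whatever gateway your container exposes, not a real library API:

public override void RuntimeInitialize(System.Reflection.MethodBase method)
{
	// hypothetical: Ioc.Resolve represents your container's service-locator entry point
	Auth = Ioc.Resolve<IAuth>();
}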

Add a bit more code to the aspect, and we can handle methods that return a whole collection of GovtForms by "filtering" out the ones that the current user doesn't have access to.

var formCollection = args.ReturnValue as IEnumerable<GovtForm>;
if (formCollection != null)
{
	args.ReturnValue = formCollection
			.Where(f => Auth.CurrentUserHasPermission(f, Permission.Read));
	return;
}

Of course, there's nothing special about GovtForm, other than the fact that it has a UserName property. You could add an "ISecurable" interface to each business object class that you want to secure this way, and then CurrentUserHasPermission would take an ISecurable argument rather than a GovtForm specifically. Finally, put the [AuthorizeReturnValue] attribute on any service/repository method that returns a single business object or a collection of business objects, and away you go: only the records the user is allowed to see will be returned, and you didn't need to make any major coding changes to the UI or to the services.
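
Here's a rough sketch of that generalization (only FormId and UserName appear in this post; any other GovtForm properties are elided):

// any business object that exposes an owner can be secured the same way
public interface ISecurable
{
	string UserName { get; }
}

public class GovtForm : ISecurable
{
	public Guid FormId { get; set; }
	public string UserName { get; set; }
	// ...the rest of the form's properties...
}

public interface IAuth
{
	// no longer tied to GovtForm specifically
	bool CurrentUserHasPermission(ISecurable record, Permission permission);
}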

Now that you have a couple of handy aspects under your belt, it's time to look at something a little more complex in the overall scheme of things. Suppose you have a Caching aspect AND an Authorization aspect on a method. Caching should probably come after authorization, otherwise unauthorized cached data might be returned. So how do you enforce that Authorization is applied first, followed by the Caching aspect? Here's how you do it:

1. Apply a ProvideAspectRoleAttribute to one or more of your aspects. The role can be a custom string, or one of the values from the StandardRoles enumeration that comes with PostSharp.

2. Apply an AspectRoleDependencyAttribute to one or more of your aspects to specify the type of dependency, and the role that it is dependent upon.

To fulfill the requirement that Authorization is applied first, then Caching, apply attributes to your aspects like so:

[Serializable]
[AspectRoleDependency(AspectDependencyAction.Order,
	AspectDependencyPosition.Before, StandardRoles.Caching)]
public class AuthorizeReturnValueAttribute : OnMethodBoundaryAspect
{
}

[Serializable]
[ProvideAspectRole(StandardRoles.Caching)]
public class CachingAttribute : OnMethodBoundaryAspect
{
	public override void OnEntry(MethodExecutionArgs args)
	{
		// do caching stuff
	}

	public override void OnSuccess(MethodExecutionArgs args)
	{
		// do caching stuff
	}
}

We're telling PostSharp that the Caching aspect belongs to the "Caching" role, and we're also telling PostSharp that the Authorization aspect should be applied "before" the Caching role. Here's what the result looks like in Reflector when I apply both aspects to the "GetAllForms" method:

public IEnumerable<GovtForm> GetAllForms()
{
	MethodExecutionArgs CS$0$2__aspectArgs = new MethodExecutionArgs(null, null);
	<>z__Aspects.a1.OnEntry(CS$0$2__aspectArgs);
	IEnumerable<GovtForm> CS$1$1__returnValue = _govtFormsDatabase;
	<>z__Aspects.a1.OnSuccess(CS$0$2__aspectArgs);
	CS$0$2__aspectArgs.ReturnValue = CS$1$1__returnValue;
	<>z__Aspects.a0.OnSuccess(CS$0$2__aspectArgs);
	return (IEnumerable<GovtForm>) CS$0$2__aspectArgs.ReturnValue;
}

Just for demonstration, if I flip the AspectDependencyPosition so that Authorization comes after Caching instead, here's what it looks like in Reflector:

public IEnumerable<GovtForm> GetAllForms()
{
	<>z__Aspects.a1.OnEntry(null);
	MethodExecutionArgs CS$0$2__aspectArgs = new MethodExecutionArgs(null, null);
	IEnumerable<GovtForm> CS$1$1__returnValue = _govtFormsDatabase;
	CS$0$2__aspectArgs.ReturnValue = CS$1$1__returnValue;
	<>z__Aspects.a0.OnSuccess(CS$0$2__aspectArgs);
	CS$1$1__returnValue = (IEnumerable<GovtForm>) CS$0$2__aspectArgs.ReturnValue;
	<>z__Aspects.a1.OnSuccess(null);
	return CS$1$1__returnValue;
}

Notice that the a1 (caching) and a0 (authorization) calls have swapped places.

You have a lot of flexibility here to define dependencies if you need to: there are five dependency actions you can use (Commute, Conflict, Order, Require, and None), and besides StandardRoles, you can create an unlimited number of roles named by strings. Hopefully you won't need to use this feature much, but it's good to know it's there if you do. The great thing is that this type of aspect composition lets you relax, knowing that even the newest member of the team won't accidentally expose unauthorized data, no matter what order he lists the aspect attributes in the code.
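
As a quick illustration of the custom-string flavor (the two aspect names below are made up for this example, not from the sample code), one aspect can declare that it provides a "Transaction" role, and another can require it, turning a forgotten pairing into a build error instead of a runtime surprise:

[Serializable]
[ProvideAspectRole("Transaction")]
public class TransactionAttribute : OnMethodBoundaryAspect
{
	// begin/commit/rollback advice, as in the earlier post
}

[Serializable]
[AspectRoleDependency(AspectDependencyAction.Require, "Transaction")]
public class AuditAttribute : OnMethodBoundaryAspect
{
	// auditing advice that only makes sense inside a transaction
}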

Matthew D. Groves is a software development engineer at Telligent, and blogs at mgroves.com.

Source code used in these blog posts is available on GitHub

Sometimes there's just no way to speed up an operation. Maybe it depends on a service on some external web server, maybe it's a very processor-intensive operation, or maybe it's fast by itself but a bunch of concurrent requests would suck up all your resources. There are lots of reasons to use caching. PostSharp itself doesn't provide a caching framework (again, PostSharp isn't reinventing the wheel, it's just making existing tools easier to use), but it does provide you with a way to (surprise) reduce boilerplate code, stop repeating yourself, and separate concerns into their own classes.

Suppose I run a car dealership, and I need to see how much my cars are worth. I'll use an application that looks up the blue book value given a make, model, and year. However, the blue book value (for the purposes of this demonstration) changes often, so I decide to look it up via a web service. But the web service is slow, and I have a lot of cars on the lot and being traded in all the time. Since I can't make the web service any faster, let's cache the data it returns, so I can reduce the number of web service calls.

Since a main feature of PostSharp is intercepting a method before it is actually invoked, it should be clear that we'd start by writing a MethodInterceptionAspect:

[Serializable]
public class CacheAttribute : MethodInterceptionAspect
{
	[NonSerialized]
	private static readonly ICache _cache;
	private string _methodName;

	static CacheAttribute()
	{
		if(!PostSharpEnvironment.IsPostSharpRunning)
		{
			// one minute cache
			_cache = new StaticMemoryCache(new TimeSpan(0, 1, 0));
			// use an IoC container/service locator here in practice
		}
	}

	public override void CompileTimeInitialize(MethodBase method, AspectInfo aspectInfo)
	{
		_methodName = method.Name;
	}

	public override void OnInvoke(MethodInterceptionArgs args)
	{
		var key = BuildCacheKey(args.Arguments);
		if (_cache[key] != null)
		{
			args.ReturnValue = _cache[key];
		}
		else
		{
			var returnVal = args.Invoke(args.Arguments);
			args.ReturnValue = returnVal;
			_cache[key] = returnVal;
		}
	}

	private string BuildCacheKey(Arguments arguments)
	{
		var sb = new StringBuilder();
		sb.Append(_methodName);
		foreach (var argument in arguments.ToArray())
		{
			sb.Append(argument == null ? "_" : argument.ToString());
		}
		return sb.ToString();
	}
}

I store the method name at compile time, and I initialize the cache service at runtime. As a cache key, I'm just appending the method name with all the arguments (see the BuildCacheKey method), so it will be unique for each method and each set of parameters. In OnInvoke, I check to see if there's already a value in the cache, and use that value if there is. Otherwise, I just Invoke the method as normal (you could also use Proceed if you prefer) and cache the value for next time.

In my sample CarDealership app, I have a service method called GetCarValue that is meant to simulate a call to a web service for car value information. It has a random component, so it returns a different value every time it's actually invoked--which, in this example, only happens on a cache miss.

[Cache]
public decimal GetCarValue(int year, CarMakeAndModel carType)
{
	// simulate web service time
	Thread.Sleep(_msToSleep);

	int yearsOld = Math.Abs(DateTime.Now.Year - year);
	int randomAmount = (new Random()).Next(0, 1000);
	// baselineValue and yearDiscount are fields defined elsewhere in the sample's service class
	int calculatedValue = baselineValue - (yearDiscount*yearsOld) + randomAmount;
	return calculatedValue;
}

A few notes about this aspect:

  • I could have also used OnMethodBoundaryAspect instead of MethodInterceptionAspect--either approach is fine. In this case, MethodInterceptionAspect was the simpler way to meet the requirements, so that's why I chose it (a rough sketch of the OnMethodBoundaryAspect alternative appears just after this list).
  • Note that the ICache dependency is being loaded in the static constructor. There's no sense loading this dependency when PostSharp is running, so that's why that check is in place. You could also load that dependency in RuntimeInitialize.
  • This aspect doesn't take into account possible 'out' or 'ref' parameters that might need to be cached. It's possible to do, of course, but generally I believe that 'out' and 'ref' parameters shouldn't be used much at all (if ever), and if you believe the same, then it's a waste of time to write out/ref caching into your aspect. If you really want to do it, leave a comment below, but chances are that I'll put the screws to you about using out/ref in the first place :)
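
For the first bullet, here's a rough sketch of what the OnMethodBoundaryAspect version could look like. It assumes the same _cache field, _methodName field, and BuildCacheKey helper as the interception version above, and it uses FlowBehavior.Return to skip the method body entirely on a cache hit:

[Serializable]
public class CacheBoundaryAttribute : OnMethodBoundaryAspect
{
	// _cache, _methodName, CompileTimeInitialize, and BuildCacheKey omitted; same as the aspect above

	public override void OnEntry(MethodExecutionArgs args)
	{
		var key = BuildCacheKey(args.Arguments);
		if (_cache[key] != null)
		{
			args.ReturnValue = _cache[key];
			args.FlowBehavior = FlowBehavior.Return;   // cache hit: don't run the method body
		}
		else
		{
			args.MethodExecutionTag = key;             // remember the key for OnSuccess
		}
	}

	public override void OnSuccess(MethodExecutionArgs args)
	{
		var key = args.MethodExecutionTag as string;
		if (key != null)
		{
			_cache[key] = args.ReturnValue;
		}
	}
}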

Build-Time Validation

There are times when caching is not a good idea. For instance, if a method returns a Stream, IEnumerable, IQueryable, etc, these values should usually not be cached. The way to enforce this is to override the CompileTimeValidate method like so:

public override bool CompileTimeValidate(MethodBase method)
{
	var methodInfo = method as MethodInfo;
	if(methodInfo != null)
	{
		var returnType = methodInfo.ReturnType;
		if(IsDisallowedCacheReturnType(returnType))
		{
			Message.Write(SeverityType.Error, "998",
			  "Methods with return type {0} cannot be cached in {1}.{2}",
			  returnType.Name, method.DeclaringType.Name, method.Name);
			return false;
		}
	}
	return true;
}

private static readonly IList<Type> DisallowedTypes = new List<Type>
			  {
				  typeof (Stream),
				  typeof (IEnumerable),
				  typeof (IQueryable)
			  };
private static bool IsDisallowedCacheReturnType(Type returnType)
{
	return DisallowedTypes.Any(t => t.IsAssignableFrom(returnType));
}

So if any developer tries to put caching on a method that definitely shouldn't be cached, it will result in a build error. And note that by using IsAssignableFrom, you also cover cases like FileStream, IEnumerable<T>, etc.
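
A quick sanity check of that behavior (this snippet assumes using System, System.Collections, System.Collections.Generic, and System.IO):

// IsAssignableFrom asks: "could a variable of the disallowed type hold this return type?"
Console.WriteLine(typeof(IEnumerable).IsAssignableFrom(typeof(List<int>)));   // True
Console.WriteLine(typeof(Stream).IsAssignableFrom(typeof(FileStream)));       // True
Console.WriteLine(typeof(IEnumerable).IsAssignableFrom(typeof(decimal)));     // False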

Multithreading

Okay, so we've got super-easy caching that we can add to any method that needs it. However, did you spot a potential problem with this cache aspect? In a multi-threaded app (such as a web site), a cache is great because after the "first" user pays the cache "toll", every subsequent user reaps the cache's speed benefits. But what if two users request the same information concurrently? With the cache aspect above, it's possible that both users pay the cache toll and neither reaps the benefits. For a car dealership app this isn't very likely, but suppose you have a web application with potentially hundreds or thousands (or more!) of users requesting the same data at the same time. If all of their requests arrive in the window between the cache-hit check and the moment the freshly fetched value gets stored, hundreds of users could pay the "toll" needlessly.

A simple solution to this concurrency problem would just be to "lock" the cache every time it's used. However, locking can be an expensive, slow process. It would be better if we check if there's a cache miss first, and then lock. But in between the checking and locking, there's still an opportunity for another thread to sneak in there and put a value in cache. So, once inside the lock, you should double-check to make sure the cache is still not returning anything. This optimization pattern is called "double-checked locking". Here's the updated aspect using double-checked locking:

[Serializable]
public class CacheAttribute : MethodInterceptionAspect
{
	// _cache, _methodName, CompileTimeInitialize, and BuildCacheKey are unchanged
	// from the earlier version of this aspect and are omitted here for brevity
	[NonSerialized] private object syncRoot;

	public override void RuntimeInitialize(MethodBase method)
	{
		syncRoot = new object();
	}

	public override void OnInvoke(MethodInterceptionArgs args)
	{
		var key = BuildCacheKey(args.Arguments);
		if (_cache[key] != null)
		{
			args.ReturnValue = _cache[key];
		}
		else
		{
			lock (syncRoot)
			{
				if (_cache[key] == null)
				{
					var returnVal = args.Invoke(args.Arguments);
					args.ReturnValue = returnVal;
					_cache[key] = returnVal;
				}
				else
				{
					args.ReturnValue = _cache[key];
				}
			}
		}
	}
}

It's a little repetitive, but in this case, the repetition is a good trade-off for performance in a highly-concurrent environment. Instead of locking the cache, I'm locking a private object that's specific to the method that the aspect is being applied to. All of this keeps the locking to a minimum, while maximizing the usage of the cache. Note that there's nothing magical about the name "syncRoot"; it's simply a common convention, and you can use whatever name you'd like.

So, confused yet? Concurrency problems and race conditions are confusing, but in many apps they are a reality. Armed with this PostSharp aspect, you don't have to worry about junior developers, new team members, or the guy with 30 years of COBOL experience using C# for the first time introducing hard-to-debug race conditions when they use the cache. In fact, all they really need to know is how to decorate methods with the Cache aspect when caching is appropriate. They don't need to know what cache technology is being used. They don't need to worry about making the method thread-safe. They can focus on making a method do one thing, making it do one thing only, and making it do one thing well.

Matthew D. Groves is a software development engineer with Telligent, and blogs at mgroves.com.