
This is one of those unpopular but necessary decisions that every software publisher has to make now and then. It’s generally accepted that cutting expensive but little-used features is healthy because it frees resources that can be spent on more popular features. This is what we are doing with the documentation and support of PostSharp SDK.

What is PostSharp SDK?

PostSharp is made of two principal components: PostSharp.dll and PostSharp.Sdk.dll.

PostSharp.dll is the high-level public API. It mostly contains interface definitions, abstract classes and custom attributes from which developers can derive their own aspects. This library is designed to be used by developers of business applications. It offers transformation primitives (such as intercepting a method, wrapping a method, or introducing an interface) that developers can add to their code. PostSharp.dll leverages real aspect-oriented programming, a disciplined extension of object-oriented programming. Just like a normal compiler, the high-level API enforces syntactic rules, and you will get a reasonable error message if you violate them.
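
For illustration, a typical aspect built on the high-level API looks roughly like this (a minimal sketch; the aspect name and trace messages are placeholders):

    // Sketch of a simple PostSharp.dll aspect: PostSharp weaves OnEntry/OnExit
    // into every method the attribute is applied to.
    [Serializable]
    public class TraceAspect : OnMethodBoundaryAspect
    {
        public override void OnEntry(MethodExecutionArgs args)
        {
            Trace.WriteLine("Entering " + args.Method.Name);
        }

        public override void OnExit(MethodExecutionArgs args)
        {
            Trace.WriteLine("Leaving " + args.Method.Name);
        }
    }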

PostSharp.Sdk.dll is the opposite of PostSharp.dll. Its public API allows you to modify .NET assemblies at a very low level. You can do everything. The API does not enforce any programming discipline or syntactic rules. You can create invalid assemblies without even getting a warning. It requires you to learn MSIL and to understand most of the 565 pages of the ECMA-335 specification. PostSharp.Sdk.dll is made of several components. The lowest-level ones (code model, image reader, image writer) are also found in Mono Cecil and Microsoft CCI, although PostSharp SDK has its own implementation. PostSharp SDK also includes several middle-level components such as the project loader, the aspect infrastructure (which would allow you to use PostSharp SDK to develop an aspect weaver with a different syntax than PostSharp.dll), the aspect weaver (the implementation of PostSharp.dll), the custom attribute multicasting component, and much more.

PostSharp SDK is more complex than you think

Contrary to PostSharp itself, PostSharp SDK has a very steep, but misleading, learning curve. You quite quickly get the illusion that you “got” it, but the devil is in the details. The 80-20 rule does not apply to PostSharp SDK: what applies is 95-5: 95% of the time is spent addressing 5% of the cases. Think of MSIL programming as hiking in high mountains without a map. You always have the illusion that the top is near, but whenever you climb what appeared to be the top, you discover another, higher one. If you want an idea of the effort you’ll need to reach the goal, you need a map; ECMA-335 and the PostSharp SDK class reference will give you a fair overview of the complexity of the task.

So why do we have PostSharp SDK? First, for our own needs. From our own point of view, PostSharp SDK is the most important component of PostSharp. Second, because this API is useful to a tiny minority of ISVs with very specific needs (for instance, a strong focus on speed). They can afford to maintain MSIL skills because the effort is leveraged across thousands of customers. Third, because PostSharp SDK can be used to work around missing features of PostSharp.dll. But this is where things can go wrong.

PostSharp SDK is undocumented and unsupported

There has been some criticism that PostSharp SDK is undocumented. This is not accurate: the class reference is quite complete and contains more than what’s obvious from the method signatures. Many actually claim that PostSharp SDK has the best documentation of all MSIL rewriting tools.

What is true is that there is no conceptual documentation. Let me be clear: the lack of conceptual documentation is a feature, not a defect. The SDK will not be better documented. Like any company, we have to allocate limited resources to a potentially unlimited number of features. It does not make economic sense to spend time documenting a very complex API that is used by a dozen customers.

The same holds for support. We cannot provide support for a highly complex and incompletely documented API. We cannot guide you through baby steps. PostSharp SDK is not supported. You use it at your own risk.

Nostra Culpa

On the support forum, you could often read answers that sounded like “this is not possible with PostSharp.dll but can be done with PostSharp.Sdk.dll”, followed by a disclaimer that PostSharp SDK is hard and you should maybe not try. This led some customers to ask for more information about these specific cases, which I published on the blog. Mistake! These blog posts have been interpreted as an advertisement for, and an invitation to use, PostSharp SDK. I apologize for that. I will not advertise PostSharp SDK again. It will remain unsupported and undocumented. The harmful blog posts have been withdrawn.

PostSharp SDK still available

That said, PostSharp SDK is still available for use in the Professional Edition. We are only making it clearer that this feature is not officially supported and must be used at your own risk. I believe that it’s not a good idea to code MSIL instructions directly unless you can leverage the effort across thousands of customers (as we do), but you’re free to try. It’s just that you’re on your own.

I’m aware this decision will be unpopular, but I’m convinced it’s a necessary one if we are to continue providing good support to the community.

Happy PostSharping!

-gael

I’m excited to announce the first release candidate of PostSharp 2.1, available for download from our website and from the NuGet official repository.

PostSharp 2.1 is a minor upgrade of PostSharp 2.0; the upgrade is free for everybody. The objective of this version is to fix several gray spots in the previous release.

This release candidate ought to be of very high quality and free of known bugs, but it needs to be tested by the community before it can be labeled stable. As required by the RC quality label, the online documentation has been updated to reflect the latest API.

PostSharp 2.1 has full backward binary compatibility with PostSharp 2.0.

What’s new in RC 1?

This release candidate contains the following additions to the previous CTP:

  • The design of architecture validation (PostSharp.Constraints) has been finalized.
  • Warnings can be disabled locally (for a specific element of code) using the IgnoreWarning custom attribute. See the online documentation for details, and the sketch after this list.
  • The PostSharp project property page in Visual Studio now allows you to specify which warnings should be globally ignored or escalated into errors.
  • Compatibility with Code Contracts 4.0
  • As an experimental feature, warnings and errors now come with file/line information. The feature must be enabled manually from Visual Studio options. We’re eager to hear feedback about this feature from customers with larger projects.
  • 17 bug fixes
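
For illustration, here is roughly how such a warning could be silenced on a single method (a minimal sketch, assuming IgnoreWarning takes the warning identifier as its constructor argument; the class name and warning id below are placeholders, so check the online documentation for the exact signature):

    // Hypothetical example: mute one warning on a single method instead of
    // disabling it for the whole project. "LA0501" is a made-up warning id.
    public class OrderService
    {
        [IgnoreWarning("LA0501")]
        public void PlaceOrder()
        {
            // code that would otherwise trigger the warning
        }
    }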

What’s new in PostSharp 2.1?

If you missed the previous announcements, here’s a list of new features in PostSharp 2.1 compared to version 2.0:

  • Improved build-time performance: up to 5x faster. Read more.
  • Architecture validation: build-time validation of design rules. Read more.
  • Extended reflection API: programmatically navigate code references. Read more and more.
  • NuGet packaging and improved no-setup deployment experience. Read more.
  • Support for obfuscators: we now support Dotfuscator. Read more.
  • Support for Silverlight 5.
  • Compatibility with Code Contracts 4.
  • Improved messaging API.
  • Tab page in Visual Studio project properties.
  • Streamlined licensing experience.
  • License server.
  • Streamlined “getting started” experience.

Upgrade your Projects

To upgrade your projects from PostSharp 2.0 to PostSharp 2.1, you can use the conversion utility included in the PostSharp HQ application. Just open the app, click “convert”, and select the folder containing your projects. References to libraries and MSBuild imports will be fixed automatically.

PostSharp 2.1 Roadmap

A release candidate means that we are confident in the code quality and that all mandatory quality work, including documentation, has been done. PostSharp 2.1 is now the default version on our download page. We’ll wait a couple of weeks to allow the community to give this version a try, then publish the RTM or another RC, depending on the feedback.

Note that the license agreement allows for production use.

It’s now time to download PostSharp 2.1 and upgrade your projects!

Happy PostSharping!

-gael

The full source code for this blog post is available for download.

One of the most interesting features of PostSharp that sets it apart from other AOP tools is its ability to apply aspects at compile time. As I've explored in previous blog posts, this gives you the ability to do compile-time checking and initialization, instead of costly and error-prone runtime validation. For instance, one could use CompileTimeValidate to enforce that a given aspect can only be used on MVC Controller methods.

    [Serializable]
    public class ConstrainedAspect : OnMethodBoundaryAspect
    {
        // Build-time check: refuse to apply the aspect anywhere but on Controller types.
        public override bool CompileTimeValidate(System.Reflection.MethodBase method)
        {
            if (!method.DeclaringType.IsSubclassOf(typeof(Controller)))
            {
                Message.Write(MessageLocation.Of(method),
                              SeverityType.Error,
                              "987",
                              "Aspect can only be used on Controllers. " +
                              "You applied it on type {0}",
                              method.DeclaringType.Name);
                return false;
            }
            return true;
        }

        // Runtime behavior: stamp the controller's ViewData on method entry.
        public override void OnEntry(MethodExecutionArgs args)
        {
            var controller = (Controller) args.Instance;
            controller.ViewData["aspect"] =
                "Constrained Aspect was here at " + DateTime.Now;
        }
    }
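
Applying the aspect is then just a matter of decorating a controller action (a minimal usage sketch; HomeController and Index are placeholder names, not part of the downloadable sample):

    // The build succeeds here because HomeController derives from Controller;
    // on any other class, CompileTimeValidate would raise error 987 at compile time.
    public class HomeController : Controller
    {
        [ConstrainedAspect]
        public ActionResult Index()
        {
            return View();
        }
    }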

This nifty feature has led some PostSharp users to create "aspects" that only contain compile-time validation. An architect could put code in such an aspect to help validate and enforce the architectural design: an "architectural unit test", if you will. With PostSharp 2.1, these "constraints" become a first-class feature.

There are two types of constraints available in PostSharp 2.1: scalar constraints and referential constraints. The separation is partially a semantic one, as both types of constraints are just ways of enforcing rules at compile time that the C# compiler itself doesn't give you. It's also a technical separation, as referential constraints are checked on all assemblies that reference the code element. (Note that you'll need to turn on architectural validation in the "PostSharp" tab of your project properties, and that this feature is available only in the Professional Edition.)

Scalar Constraints

A scalar constraint is a simple constraint that is meant to affect a single piece of code in isolation. It is most like using a CompileTimeValidate method in an aspect, except without the aspect part. For instance, if you are a user of NHibernate, you know that your entity classes must have virtual properties. However, if you're like me, you might add a new property and forget to make it virtual. Then you compile your project, run it, go through a test case, and get a runtime error. Wasted time! Here's a scalar constraint that you can apply to your entities to make sure you don't forget.

    [Serializable]
    [MulticastAttributeUsage(MulticastTargets.Class)]
    public class NHEntityAttribute : MulticastAttribute, IScalarConstraint
    {
        public void ValidateCode(object target)
        {
            var targetType = (Type)target;
            var properties = targetType.GetProperties(
                BindingFlags.Public | BindingFlags.Instance);
            // Report every public instance property whose getter is not virtual.
            var nonVirtualProperties = properties.Where(p => !p.GetGetMethod().IsVirtual);
            foreach (var propertyInfo in nonVirtualProperties)
            {
                Message.Write(MessageLocation.Of(targetType),
                              SeverityType.Error,
                              "998",
                              "Property {0} in Entity class {1} is not virtual",
                              propertyInfo.Name, targetType.FullName);
            }
        }

        public bool ValidateConstraint(object target)
        {
            return true;
        }
    }

Note that ValidateConstraint exists to validate the application of the constraint itself (a validation of a validation!). In my example above, I'm performing no validation at all and just returning true, but you could certainly check that the constraint is not applied to a static class, for instance. If the ValidateConstraint method returns false, the constraint is considered invalid and will not be applied.
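
Such a check could look roughly like this (a minimal sketch; the "995" message id is a placeholder). In reflection terms, a static class shows up as both abstract and sealed:

    // Hypothetical ValidateConstraint body: refuse to apply the constraint
    // to static classes (IsAbstract && IsSealed). Message id "995" is made up.
    public bool ValidateConstraint(object target)
    {
        var targetType = (Type)target;
        if (targetType.IsAbstract && targetType.IsSealed)
        {
            Message.Write(MessageLocation.Of(targetType),
                          SeverityType.Warning,
                          "995",
                          "NHEntity cannot be applied to static class {0}.",
                          targetType.FullName);
            return false; // the constraint is not applied to this type
        }
        return true;
    }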

If you have all your entities in a single namespace, it's very easy to apply this constraint to all your entities (even ones that you haven't written yet) by multicasting that attribute. (For more info on multicasting, check out Dustin's excellent blog posts on multicasting: part 1 and part 2).

    [assembly: NHEntity(AttributeTargetTypes = "YourNamespace.Models.Entities")]

When you forget the 'virtual' after you add a new property, you'll see something like this when you compile:

[Screenshot: NHEntity compiler error]

You could use constraints like this for similar situations, such as DataMembers in a WCF DataContract or OperationContracts in a ServiceContract. You can avoid a lot of frustration and wasted time.
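
For example, the WCF case could be handled with a scalar constraint along the same lines as NHEntity (a minimal sketch; the attribute name and the "996" message id are placeholders):

    // Hypothetical constraint: warn when a [DataContract] class exposes public
    // properties that are not marked [DataMember]. The "996" id is made up.
    [Serializable]
    [MulticastAttributeUsage(MulticastTargets.Class)]
    public class DataContractMembersAttribute : MulticastAttribute, IScalarConstraint
    {
        public void ValidateCode(object target)
        {
            var targetType = (Type)target;
            if (!targetType.IsDefined(typeof(DataContractAttribute), false))
                return;

            var unmarked = targetType
                .GetProperties(BindingFlags.Public | BindingFlags.Instance)
                .Where(p => !p.IsDefined(typeof(DataMemberAttribute), false));

            foreach (var property in unmarked)
            {
                Message.Write(MessageLocation.Of(targetType),
                              SeverityType.Warning,
                              "996",
                              "Property {0} of DataContract {1} is not marked [DataMember]",
                              property.Name, targetType.FullName);
            }
        }

        public bool ValidateConstraint(object target)
        {
            return true;
        }
    }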

Referential Constraints

Referential constraints are meant to enforce architectural design across assemblies, references, and relationships. This feature can be very useful, especially if you are writing an API. PostSharp actually ships with three out-of-the-box constraints for common scenarios: ComponentInternal, InternalImplement, and Internal.

ComponentInternal raises a compiler error if the code it's applied to is used from a namespace other than the one it resides in. For instance:

	// NamespaceA
	namespace PostsharpArchitecturalConstraints.API.NamespaceA
	{
		[ComponentInternal(Severity = SeverityType.Error)]
		internal class ApiA
		{
			public string GetFriendsName()
			{
				return "Mr. Friendly";
			}
		}
	}
	
	// NamespaceB
	using PostsharpArchitecturalConstraints.API.NamespaceA;
	namespace PostsharpArchitecturalConstraints.API.NamespaceB
	{
		public class ApiB
		{
			public string GetFriendsName()
			{
				var a = new ApiA();
				return a.GetFriendsName();
			}
		}
	}

[Screenshot: ComponentInternal compiler error]

If you want to specify exceptions (a specific namespace from which you do want the internal class to be usable), you can do so in ComponentInternal's constructor; by default, it only allows code within its own namespace (and child namespaces) to call it.
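
For instance, white-listing NamespaceB might look roughly like this (a sketch only; I'm assuming here that the constructor accepts the friend namespaces as strings, so check the class reference for the exact overloads):

	// Hypothetical: allow NamespaceB (and only NamespaceB) to use ApiA,
	// assuming ComponentInternal's constructor takes friend namespaces as strings.
	[ComponentInternal("PostsharpArchitecturalConstraints.API.NamespaceB",
		Severity = SeverityType.Error)]
	internal class ApiA
	{
		public string GetFriendsName()
		{
			return "Mr. Friendly";
		}
	}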

InternalImplement is for use on interfaces, and limits implementations of the interface to its own assembly. This means the interface can stay public, for instance, but nothing outside the assembly can implement it.

	// in PostsharpArchitecturalConstraints.API assembly
	namespace PostsharpArchitecturalConstraints.API.Interface
	{
		[InternalImplement(Severity = SeverityType.Error)]
		public interface IPublicInterface
		{
			void DoOperation();
			string GetValue();
		}
	}

	// in PostsharpArchitecturalConstraints assembly
	using PostsharpArchitecturalConstraints.API.Interface;

	namespace PostsharpArchitecturalConstraints.Models.Services
	{
		public class MyPublicInterfaceImpl : IPublicInterface
		{
			private string _value;

			public void DoOperation()
			{
				_value = "operation complete";
			}

			public string GetValue()
			{
				return _value;
			}
		}
	}

[Screenshot: InternalImplement compiler error]

Why would you want to do this? If you are designing an API, you may want users to be able to reference an interface and use your provided implementations of it, while keeping the option to change the interface somewhere down the line. If you changed the interface, you could break any implementations users had already written. By using InternalImplement, users retain flexibility in how they consume your API without the risk of their code breaking when they upgrade to your new version.

And finally, Internal. If you put Internal on a public item, it will remain public, but it cannot be used from another assembly.

	// in one assembly
	[Internal]
	public class PublicAndInternal
	{
		public string GetValue()
		{
			return "this can only be called in its own assembly";
		}
	}

	// in another assembly
	public class TryingToUseInternal
	{
		public string Execute()
		{
			var publicInternal = new PublicAndInternal();
			return publicInternal.GetValue();
		}
	}

[Screenshot: Internal compiler error]

Of course, the door is wide open for you to write your own referential constraints. Here's one I wrote called Unsealable. It prevents any class derived from the target class from being sealed.

    [MulticastAttributeUsage(MulticastTargets.Class)]
    public class Unsealable : ReferentialConstraint
    {
        public override void ValidateCode(object target,
            System.Runtime.InteropServices._Assembly assembly)
        {
            var targetType = (Type) target;
            var sealedSubClasses = ReflectionSearch.GetDerivedTypes(targetType)
                                        .Where(t => t.DerivedType.IsSealed)
                                        .Select(t => t.DerivedType)
                                        .ToList();
            sealedSubClasses.ForEach(sealedSubClass => Message.Write(
                MessageLocation.Of(sealedSubClass),
                SeverityType.Error,
                "997",
                "Error on {0}: subclasses of {1} cannot be sealed.",
                sealedSubClass.FullName, targetType.FullName));
        }
    }

This example also makes use of the handy ReflectionSearch utility that comes with PostSharp, which makes certain reflection tasks (like finding derived types in the example above) cleaner, easier, and LINQ-ready. Here's an example of applying that constraint:

	[Unsealable]
	public class MyUnsealableClass
	{
		protected string _value;

		public MyUnsealableClass()
		{
			_value = "I'm unsealable!";
		}

		public string GetValue()
		{
			return _value;
		}
	}

	public sealed class TryingToSeal : MyUnsealableClass
	{
		public TryingToSeal()
		{
			_value = "I'm sealed!";
		}
	}

[Screenshot: Unsealable compiler error]

Constraints can be very valuable for larger teams, or for building an API that will be consumed by a larger audience (even an internal one). Don't go too crazy, though. Only build constraints that will help you, your team, and your API consumers save time and make fewer mistakes. By enforcing constraints on how code is to be used, you can protect yourself and your users from costly breaking changes down the road. It may be a politically sensitive issue, so good communication is still important, but it's very much like defensive programming, just at the architectural design level.