Archive

Structured exception handling and defensive programming are the two pillars of robust software.

Both pillars fail, however, when it comes to handling internal faults – those that originate in software defects rather than in any external factors.

In this webinar, Zoran Horvat demonstrates advanced defensive coding techniques that can bring the quality of your code to an entirely new level.

Watch the webinar and learn:

  • When throwing an exception is the right thing to do
  • Why exceptions and defensive coding cannot be applied to recover from defects
  • How to handle situations when internal software defect is causing the fault
  • How to treat fault detection as an orthogonal concern to normal operation

Advanced Defensive Programming Techniques on Vimeo.

Download slides.

Download code samples.

Q&A

Q: Doesn't the argument against the rewriter for Code Contracts also apply to PostSharp?

A: Actually, no – the IL rewriting done by Code Contracts means that the code you see while debugging is not the code you execute. PostSharp respects the whole development chain, so even though it rewrites the IL, the debugging experience remains untouched. You can debug not only your production code, but also the aspect code, including even your custom build-time logic. And the PostSharp Visual Studio extension helps you keep track of all the enhancements PostSharp makes in your code.

Q: Why not use the standard Microsoft Code Contracts library instead?

A: Code Contracts are obtrusive, and my experience shows that development teams are not ready to take the risk. The teams I have led never even considered using the Code Contracts library, and I didn’t want to push the matter either.
If I had to pick a single reason for not advocating the Code Contracts library, it would be the fact that it modifies the IL code in a non-transparent manner. Even the original documentation states that IL rewriting may be a reason for teams to avoid Code Contracts.
In my opinion, Code Contracts could have been designed as a regular library, which would have removed the veil of mystery surrounding the code-rewriting process. I suspect the primary concern was performance. But then, looking at the spectacular performance gains of ASP.NET Core, for example, we could question that reasoning on the grounds that performance could have been improved by other means. We will probably never know.

Q: How does this relate to Code Contracts?

A: If you tried Code Contracts on this same example, you would find a clear correlation between the syntax I have shown and the Code Contracts syntax.
That is not a coincidence, because my example was strongly influenced by the Code Contracts library.
There are two reasons why I opted for custom contracts instead of a library. For one thing, I wanted to show you the bare bones, because that is more convincing; the core of the contracts implementation is, as you could see, no longer than ten lines of code. The second reason is that in previous demonstrations part of the audience complained that I was pursuing a lost cause, predicting a slow and painful death for Code Contracts. I am very fond of that library; it is smart and complete, but the current level of support truly doesn’t fuel optimism.
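The webinar's exact helper is not reproduced here; the following is only a minimal sketch of what such a ten-line core could look like (the Contract class and the Requires/Ensures names are assumptions for illustration):

```csharp
using System;

// Minimal Design-by-Contract helper in the spirit of the webinar's demo.
// Names and exception type are illustrative, not the webinar's exact code.
public static class Contract
{
    // Precondition: a failure indicates a bug in the caller.
    public static void Requires(bool condition, string message = "Precondition failed")
    {
        if (!condition)
            throw new InvalidOperationException(message);
    }

    // Postcondition: a failure indicates a bug in the called code.
    public static void Ensures(bool condition, string message = "Postcondition failed")
    {
        if (!condition)
            throw new InvalidOperationException(message);
    }
}
```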

Q: How does this presentation apply to PostSharp?

A: PostSharp supports code contracts as an aspect. You can add attributes to method arguments to describe preconditions that must be satisfied (e.g. that an argument must be non-null). It also supports custom extensions, so you can widen the use of the aspects the tool provides.
The theory I have demonstrated is more general than what any tool (except Code Contracts) currently supports, and that holds for PostSharp as well. I insisted on the unconstrained version of the theory so that you can see the direction in which things are going. Tools are getting better every year anyway.
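For illustration only, an attribute-based precondition in the style PostSharp provides might look roughly like this (assuming the PostSharp.Patterns.Contracts attributes; the Customer and repository names are made up):

```csharp
using PostSharp.Patterns.Contracts;

public class Customer
{
    public Customer(string name) { Name = name; }
    public string Name { get; }
}

public class CustomerRepository
{
    // [Required] rejects null (and empty) string arguments at run time;
    // PostSharp weaves the check into the compiled method.
    public Customer FindByName([Required] string name)
    {
        return new Customer(name); // lookup logic omitted
    }
}
```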

Q: How can we extend code so that client code knows a method does not return null, or does not accept null as an argument, for instance?

A: Expectations made by the method are called preconditions; promises made by the method are called postconditions. Failing to meet a precondition indicates a bug in the caller; failing to meet a postcondition indicates a bug in the called code. So much for terminology.
Preconditions and postconditions are part of the documentation. You may wish to write unit tests that document them in an executable way, but there is currently no way to rely on, say, the language or Visual Studio for that.
There was an attempt to design a Visual Studio extension which would read Code Contracts and expose them as part of IntelliSense. That extension was unstable, and I had to remove it from my copy of Visual Studio after trying it for some time. There are other similar attempts, but none that has made much of an impression.
There were requests to Microsoft to include contracts in the Roslyn compiler, but the team rejected that. It remains to wait for someone to develop an extension that incorporates contract validation into the build, I suppose.
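As a sketch of the "executable documentation" idea mentioned above, a unit test can pin a precondition down (assuming xUnit and a hypothetical Parser class that guards its argument):

```csharp
using System;
using Xunit;

public class Parser
{
    public void Parse(string text)
    {
        if (text == null) throw new ArgumentNullException(nameof(text)); // precondition
        // ... parsing logic omitted ...
    }
}

public class ParserTests
{
    [Fact]
    public void Parse_rejects_null_input()
    {
        // Violating the precondition must fail loudly; the exact exception
        // type depends on how the contract is implemented.
        Assert.ThrowsAny<Exception>(() => new Parser().Parse(null));
    }
}
```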

Q: Are these given precondition contracts all we need, or do you use (many) different ones not shown here?

A: As in the previous question, we also have postconditions – promises made by the method. For example, a method should never return null, so it asserts that the return value is non-null before exiting. Such a postcondition tells callers that they don’t have to guard against null before accessing the result of our method – a useful hint indeed.
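As an illustration, here is a postcondition written out by hand in a hypothetical class (in the webinar's style this would go through the contract helper rather than plain if/throw statements):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class StudentDirectory
{
    private readonly List<string> names = new List<string>();

    // Postcondition: callers can rely on a non-null result and skip their own null checks.
    public IReadOnlyList<string> FindAll(string prefix)
    {
        if (prefix == null)                       // precondition - bug in the caller
            throw new ArgumentNullException(nameof(prefix));

        IReadOnlyList<string> result = names.Where(n => n.StartsWith(prefix)).ToList();

        if (result == null)                       // postcondition - bug in this method
            throw new InvalidOperationException("FindAll must not return null");

        return result;
    }
}
```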
But generally, even if you decide not to use any tools, the preconditions I have shown in this demonstration are pretty much everything you might need. Like any great idea, Design by Contract is very simple.
There is, however, one subtlety. Preconditions are inherited, so derived classes satisfy the Liskov Substitution Principle out of the box. It is forbidden to add more preconditions in a derived class. That sounds easy when you have base and derived classes. The real problem is that all of this applies to interface inheritance as well, and that is what makes Design by Contract messy in .NET.
The Code Contracts library came with a powerful (and complicated) solution to the contract inheritance problem. You may refer to its documentation for details.

Q: Where can I get more instructions on using contracts?

A: The authoritative source of information on Design by Contract is Bertrand Meyer’s book Object-Oriented Software Construction. That is where Mr. Meyer introduced and explained DbC theory in such terms that there is virtually nothing more to be added.

Q: Your idea of recovery is not technically sound... How can this "separate" concern work in a multi-threaded environment?

A: Multithreading is another concern. It is not advisable to enforce thread safety on every given piece of code. It is better to implement thread safety as a separate class which wraps a non-thread-safe component. That way, the code is much simpler and usually less error prone.

When things are put that way, it turns out that the contracts remain in the component which doesn’t deal with threads, and therefore the approach is 100% sound.
Take the TPL as an example – threading was incorporated into tasks and task schedulers, leaving the logic intact. If you look at the way the Code Contracts library already handles async/await constructs, you will see that there is nothing missing in their solution.

Q: How practical is it to remove preconditions from production code? What should your method return in production if someone violates its preconditions?

A: Sometimes people decide to remove contract checking from production code purely for performance reasons. Note that contract clauses can also be viewed as part of a larger testing picture.
You can consider rigorous proofs as one level (which you run once – on paper), automated tests as another level (which you run sometimes), and contract checks as the third safety valve (which you run every time the code executes). Now, it’s obvious that you can drop contract checks if you are pretty sure that automated tests have done a good job, or if you even have a formal proof that the contract check can never fail.
In about a month a new course of mine, Writing Highly Maintainable Unit Tests, will be published at Pluralsight. In that course I explain in great depth the relation between formal proofs, unit tests and code contracts. For those of you with a Pluralsight subscription, it might be useful to watch that course once it goes live.

Q: If we have checks/contracts in debug or staging, why would we not want them in production? E.g. checking whether the student exists would have to be checked in production as well, right?

A: This relates to the previous question, where I argued that code contracts are only one level of safety, though the most rigid one. If your code is tested well, you don’t even have to put explicit contract checks into it. However, I don’t think any production code is tested well enough to truly remove contract checks, because a contract violation would then pass unnoticed and could possibly damage data.
Hence the conclusion: it pays to consider removing some of the costly contract checks. For example, remove a quantifier with O(N) complexity from methods with sub-O(N) complexity – methods that run in O(log N) or O(1) time. Leaving such checks there would make the code observably slower. You probably don’t have to remove an O(N) contract check from a method which already has O(N) complexity or higher – say O(N log N) or O(N^2) – because the check then reduces performance by at most a constant factor, which we can survive.
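A hedged sketch of what such a costly quantifier check could look like, and how it can be confined to Debug builds (the class and the specific check are illustrative, not code from the webinar):

```csharp
using System.Diagnostics;
using System.Linq;

public class Portfolio
{
    private readonly decimal[] positions;

    public Portfolio(decimal[] positions)
    {
        this.positions = positions;
    }

    // O(1) lookup; the O(N) quantifier below would dominate its cost,
    // so it is confined to Debug builds.
    public decimal PositionAt(int index)
    {
        AssertAllPositionsAreNonNegative();
        return positions[index];
    }

    [Conditional("DEBUG")] // call sites are removed entirely from Release builds
    private void AssertAllPositionsAreNonNegative()
    {
        Debug.Assert(positions.All(p => p >= 0), "Negative position detected");
    }
}
```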

Q: Wouldn't Debug.Assert be removed from the Release build even without the #if DEBUG compiler directive?

A: That is true in my example; I included the conditional compilation just to show the concept. But don’t forget that there is also a Release-mode flavor of assertions in .NET (Trace.Assert). Some people opt to assert in Release builds as well. That makes sense in very sensitive code, where erroneous execution might cause unacceptable consequences.
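To make the Debug/Release distinction concrete, a minimal sketch using the standard System.Diagnostics assertions (the Guards class is made up):

```csharp
using System.Diagnostics;

public static class Guards
{
    public static void CheckInvariant(bool condition, string message)
    {
        // Compiled away in Release builds: Debug.Assert is marked
        // [Conditional("DEBUG")] in the framework.
        Debug.Assert(condition, message);

        // Survives Release builds as long as the TRACE symbol is defined,
        // which is the default for Release in Visual Studio projects.
        Trace.Assert(condition, message);
    }
}
```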

Q: Is there any situation in which non-private custom exceptions should be used?

A: Exceptions remain a valid solution in all exceptional situations. If you are writing a library, you would probably opt to throw a custom exception back to the caller rather than burden the caller with having to use a particular code contracts library (including your own).
However, inside your library you could still be fine deciding to institute contracts as the design strategy. That will at least save you from your own bugs, unrelated to any other code that might call your functions.
Other legitimate cases are already covered by the .NET Framework and other libraries and frameworks. Consider network exceptions as an example. Network communication is a typical example of a process full of exceptional cases, and that is where exceptions can be of great value.
Note, however, that there is one funny side of exceptions which can give us a hint about when they are a valid option. Although they represent exceptional events, we always expect exceptions. We expect network exceptions when working with remote services. We expect database exceptions, like concurrent edits, deadlocks or timeouts. We expect IO exceptions when working with files. If you have a situation in which you actually do not expect anything exceptional to happen, then you will probably be better off avoiding exceptions altogether.

Q: Is it correct to say that catching or throwing exceptions in between the execution of a function is an anti-pattern?

A: Using exceptions to procure information or to alter control flow is definitely an anti-pattern. Just consider this case: you have caught a custom exception and you are ready to do what it says. Now – how do you know that the exception was thrown by the entity you expected? Exceptions can jump over activation frames. You are certainly not going to inspect the call stack from the exception object to discover which function threw it.
This analysis can go much deeper. Maybe it is enough to think of catching specific exceptions as breaking the encapsulation of the called code and knowing too much about its implementation. That is rigid and fragile, and it breaks principles of OO programming - which is not a problem in itself, but then you won’t have any of the benefits of OO, like polymorphic execution.
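As a small, generic illustration (not code from the webinar) of the difference between exception-driven control flow and reporting absence through the return type:

```csharp
using System.Collections.Generic;

public class Catalog
{
    private readonly Dictionary<int, string> titles = new Dictionary<int, string>();

    // Anti-pattern: the caller must catch an exception to learn a perfectly
    // ordinary fact - that the key is absent.
    public string TitleOrThrow(int id) => titles[id]; // throws KeyNotFoundException

    // Better: absence is part of the return value; no control flow by exception.
    public bool TryGetTitle(int id, out string title) => titles.TryGetValue(id, out title);
}
```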

Q: Since preconditions have been rephrased as declarative code, can we use attributes to define contracts? Any frameworks for doing this?

A: PostSharp defines contracts through attributes. Keep in mind, however, that attribute arguments must be compile-time constants, and therefore you cannot pass just any expression as an attribute-based contract.
It is possible to cope with that (in theory): you could use an attribute to point to another (static) method which contains more elaborate contract verification code.

Q: Briefly, what was the purpose of the private ContractException?

A: The primary fear with throwing exceptions when a precondition or postcondition is violated is that somebody might catch that exception and forget to rethrow it. In that way, the operation might continue to run, even though we have just found sufficient reason to terminate it.
That is why, in Debug mode, many developers opt to assert, i.e. to kill the entire process, rather than throw an exception. In production, however, killing the process is not the most user-friendly behavior we could come up with, and for that reason we may decide to throw exceptions instead.
But we don’t want to throw an exception that can be explicitly listed in a catch block; we don’t want anyone to catch that exception for any reason. To catch a private exception, you must catch its base System.Exception instead, and it is reasonable to expect that nobody will catch System.Exception deep inside the code. That is the justification for the private exception class.
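A minimal sketch of that idea, as a variant of the hypothetical helper shown earlier (illustrative, not the webinar's exact code):

```csharp
using System;

public static class Contract
{
    // Private exception type: no catch block outside this class can name it,
    // so the only way to catch it is to catch System.Exception itself.
    private class ContractException : Exception
    {
        public ContractException(string message) : base(message) { }
    }

    public static void Requires(bool condition, string message = "Precondition violated")
    {
        if (!condition)
            throw new ContractException(message);
    }
}
```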

Q: If you want to access a resource such as a database and you split the access into two functions – one to check whether there is a value, and another to fetch the value – this will use an expensive resource twice, so maybe returning a nullable value is OK here?

A: Database access is not subject to Design by Contract. To our code the database is just another external system, and we deal with it with all the precautions we would take in any other such case. You have to catch exceptions coming from the database because you have no idea what might go wrong there (deadlocks, timeouts, or even plain failures such as a corrupt index).

Then the question of how to deal with a nonexistent value where one was expected becomes easier. You just query the database as it is, then turn concrete data into concrete objects, and those objects are subject to Design by Contract. If an object requires a value and you got nothing from the database, don’t call the method – take a recovery course instead. Otherwise, if you try to invoke the method with a null reference just because the database returned no rows, the called object reserves the right to explode.
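A rough sketch of that flow, with hypothetical names:

```csharp
using System;

public class Student
{
    public Student(string name)
    {
        if (name == null) throw new ArgumentNullException(nameof(name)); // precondition
        Name = name;
    }

    public string Name { get; }
}

public class EnrollmentService
{
    // The database row may legitimately be missing; that is decided here,
    // at the boundary, before any contracted object is involved.
    public void Enroll(string studentNameOrNull)
    {
        if (studentNameOrNull == null)
        {
            // Recovery course: report, retry, use a default - whatever the use case needs.
            return;
        }

        var student = new Student(studentNameOrNull); // safe to construct now
        Console.WriteLine($"Enrolling {student.Name}"); // domain logic would go here
    }
}
```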

 

About the speaker, Zoran Horvat


After fifteen years in development and after leading a dozen development teams, Zoran has turned to training, publishing and recording video courses for Pluralsight, eager to share practical experience with fellow programmers. Zoran is currently working as CEO and principal consultant at Coding Helmet Consultancy, a startup he has established to streamline in-house training and production of video courses. Zoran's blog.

 

Modern business applications rely heavily on rich domain classes, which in turn rely heavily on polymorphic execution, code reuse and similar concepts.

How can we extend rich domain classes to support complex requirements?

In this webinar, Zoran Horvat will show why an object composition approach is favored over class inheritance when it comes to code reuse and polymorphism.

Watch the webinar to learn:

  • How class inheritance can lead to combinatorial explosion of classes
  • What the limitations of object composition are
  • What design patterns help consume composed objects
  • Techniques for creating rich features on composed objects

Applying Object Composition to Build Rich Domain Models on Vimeo.

Download slides.

Download code samples.

Q&A

Q: How can we ensure that the visitor always matches the concrete object and that there is no implicit conversion?

A: The Visitor is designed to match concrete objects. If there is a need to support out-of-band objects, like objects from one hierarchy that can be converted to objects from a different hierarchy, then the compiler will have a hard time deciding which method of the visitor to invoke.

The major selling point of the Visitor pattern is that it makes it obvious to the compiler which method to invoke in every possible situation. The variation – which concrete implementation of a method to invoke – has been turned into an object, the visitor. That makes it possible to achieve enormous flexibility at run time while staying within the bounds of strong typing the whole time. Not many designs can do that.
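For reference, the double-dispatch core of the pattern fits in a few lines. This is a generic sketch loosely modeled on the animal example; the concrete class names are assumptions, not the webinar's code:

```csharp
public interface IAnimalVisitor
{
    void Visit(Dog dog);
    void Visit(Bird bird);
}

public abstract class Animal
{
    // First dispatch: the virtual call picks the concrete animal.
    public abstract void Accept(IAnimalVisitor visitor);
}

public class Dog : Animal
{
    // Second dispatch: overload resolution picks Visit(Dog) at compile time.
    public override void Accept(IAnimalVisitor visitor) => visitor.Visit(this);
}

public class Bird : Animal
{
    public override void Accept(IAnimalVisitor visitor) => visitor.Visit(this);
}
```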

Q: What happened to simplicity? It seems like you are making an alternative to LINQ-to-Objects to avoid having to deal with a solution that you call ugly. It seems over-complicated.

A: There are no elegant solutions in complex domains, and certainly no simple ones. There are only different places to hold the complexity. The goal of the game is to make the client code simple.

In the Visitor pattern, concrete visitors encapsulate the complexity. The call-chaining approach only saves the caller from having to know the visitor protocol.

One alternative was mentioned by a viewer: make classes with no common base class implement common interfaces. This is a viable solution unless the distinct classes come with different method dependencies and different calling protocols. At that point, the common-interface idea is off the table.

Another alternative is the Chain of Responsibility pattern, which I applied to this same problem in the Tactical Design Patterns in .NET: Managing Responsibilities course on Pluralsight. It works nicely, but it fails to differentiate subtle cases, such as what happens when an object contains more than one component of the requested type.

The worst options are a bare hierarchy of classes and public composition (in which the object exposes its content). A class hierarchy promotes code duplication, and public composition prevents future maintenance of the composed class.

Q: Would it be possible for the OfClassification<T> method to return an IEnumerable<T> instead of IEnumerable<Animal>?

A: Classification objects do not reference their containing Animal – that was the design decision. It might be better to expect OfClassification<T> to return IEnumerable<Tuple<Animal, T>>, which gives the consumer the entire information.

Another question is how this method should behave if one animal object contains multiple classification objects of the same type – will it return multiple records then? That is one of the difficulties in the approach I have shown in the webinar.
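A hedged sketch of the tuple-returning variant discussed above; the Animal members and the way classifications are stored are assumptions, not the webinar's code:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Animal
{
    private readonly List<object> classifications = new List<object>();

    public Animal(params object[] classifications)
    {
        this.classifications.AddRange(classifications);
    }

    // The animal answers the query itself, so its composition stays private.
    public IEnumerable<Tuple<Animal, T>> OfClassification<T>() where T : class
    {
        return classifications.OfType<T>().Select(c => Tuple.Create(this, c));
    }
}

public static class AnimalQueries
{
    // Pairs each animal with every matching classification component it holds;
    // an animal with two T components yields two tuples, which makes the
    // ambiguity discussed above explicit in the result.
    public static IEnumerable<Tuple<Animal, T>> OfClassification<T>(
        this IEnumerable<Animal> animals) where T : class
    {
        return animals.SelectMany(a => a.OfClassification<T>());
    }
}
```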

Q: Why not use dynamic to implement double dispatch instead of the visitor pattern?

A: Because you don’t know whether the dynamic object will even have the desired feature (like the Start() method, which only exists on the Run class). The fundamental feature of the double-dispatch mechanism is that it lets two concrete objects meet each other.

Q: How to apply the Visitor pattern and composition altogether with dependency injection?

A: A visitor can be injected using common DI techniques; it boils down to selecting a concrete visitor, which is exactly what DI frameworks are good at. There are subtle issues, though they are not hard to resolve. For example, I requested target objects in all visitor constructors; that would have to be redesigned to fit DI, but as I said, it’s not too hard.

Composition does not fit the DI pattern, since there is no single object that implements a certain type; it is rather an object constructed in a certain way. We could argue that composed objects are closer to the Builder pattern, which codifies the process of building a complex object. In that respect, an IoC container (which implements DI) resembles an Abstract Factory, which operates at a lower level of complexity than the Builder.

Q: How do you persist encapsulated domain entities to database without breaking encapsulation?

A: The persistence problem is not specific to the Visitor pattern or to object composition – it exists with class hierarchies as well.
There are two levels at which we can attack it. One is to use an ORM which can access private data members of a class; Entity Framework and Hibernate can both access private members. You can use that approach to some extent.

In really complex domains, the differences between the OO model and the relational model are significant. When you see that persistence is affecting the domain model in adverse ways, it is probably time to separate the domain model from the persistence model. You can then apply a mapping framework to convert the proper OO model into a flat persistence model and vice versa.

Q: Can we say this pattern stays within the boundaries of the SOLID principles?

A: Depending on the application, the Visitor pattern may be seen as reinforcing the SOLID principles or as working against them. Here are some guidelines:

S – Single responsibility – Each concrete visitor deals with only one feature. The Visitor pattern enforces SRP better than the original hierarchy of classes, which were each doing more than one thing.

O – Open for extension, closed for modification – If you have a fixed hierarchy, you will never have to modify the abstract visitor. That is the primary niche of the Visitor pattern: it is natively applicable to fixed hierarchies – like classes of animal species, or types of bank accounts, etc. It also supports easy extension, because extending the system boils down to implementing a new concrete visitor.

L – Liskov substitution principle – Methods of the original concrete classes are now moved to concrete visitors. LSP is violated if a derived class adds method preconditions. Therefore, if the original classes added new preconditions, the concrete visitors will have to add them too. Consequently, the Visitor pattern violates LSP to exactly the same extent as the original classes did.

I – Interface segregation – The original class can implement one interface per concrete operation. With the Visitor pattern, each concrete visitor represents one such interface. Segregation then boils down to selecting a visitor object, which is even more flexible than compile-time interface segregation. This means that concrete visitors satisfy ISP at least to the same extent as the original classes did.

D – Dependency inversion – Since each concrete visitor is a separate class, and all visitors share the same base type, it is possible to inject them as polymorphic dependencies. That is the premise of the inversion of control principle. The same was true of the original classes, which had the same property. Therefore, dependency inversion is supported both by the class hierarchy and by the corresponding hierarchy of visitors.

The conclusion is that the only true difficulty with the Visitor pattern arises when the hierarchy of classes is not stable. If we have to add new classes to the hierarchy, we will have trouble implementing and maintaining the hierarchy of visitors.

Q: How will the application perform with this approach when we need to create several methods?

A: From the source code perspective, nothing changes: the solution contains exactly the same number of methods, only moved to different classes.

From the run-time performance point of view, there is only a negligible penalty: where one virtual method used to be called, two virtual methods are now called. That is not sufficient reason to worry about performance loss due to applying the Visitor pattern.

About the speaker, Zoran Horvat


After fifteen years in development and after leading a dozen development teams, Zoran has turned to training, publishing and recording video courses for Pluralsight, eager to share practical experience with fellow programmers. Zoran is currently working as CEO and principal consultant at Coding Helmet Consultancy, a startup he has established to streamline in-house training and production of video courses. Zoran's blog.

 

Starting with the premise that "Performance is a Feature", Matt Warren will show you how to measure, what to measure and how to get the best performance from your .NET code.

We will look at real-world examples from the Roslyn code-base and StackOverflow (the product), including how the .NET Garbage Collector needs to be tamed!

Watch the webinar to learn:

  • Why we should care about performance
  • Pitfalls to avoid when measuring performance
  • How the .NET Garbage Collector can hurt performance
  • Real-world performance lessons from Open-source code

Performance is a Feature on Vimeo.

You can find the slide deck here: http://www.slideshare.net/sharpcrafters/adsa-69947835

Q&A

Q: Does the LINQ performance problem arise only in Roslyn, or is it also present on older .NET platforms?

A: With regard to LINQ, there are no major differences in the Roslyn compiler compared to the older one, so LINQ performance issues can exist across all .NET compiler versions. The example was included because it was a change made to the Roslyn code base itself, not to the code Roslyn produces when compiling LINQ.

Q: What is the best GC setting for Windows Server 2012 R2 hosting 250 ASP.NET MVC apps on IIS? I am talking about gcConcurrent and gcServer in the aspnet.config file.

A: Generally, server mode is best when you are running on a server, but you actually don’t need to do anything, because it’s the default mode for ASP.NET apps. From https://msdn.microsoft.com/en-us/library/ee787088(v=vs.110).aspx:
"You can also specify server garbage collection with unmanaged hosting interfaces. Note that ASP.NET and SQL Server enable server garbage collection automatically if your application is hosted inside one of these environments."
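For reference, the corresponding configuration elements in a runtime config file look like this (a sketch; as noted above, ASP.NET hosts enable server GC by default):

```xml
<configuration>
  <runtime>
    <!-- Server GC: one heap and GC thread per core, better throughput. -->
    <gcServer enabled="true" />
    <!-- Background (concurrent) GC: shorter pauses during full collections. -->
    <gcConcurrent enabled="true" />
  </runtime>
</configuration>
```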

Q: Do we get hints on how to measure, or which tools are recommended for measuring performance issues?

A: I really like PerfView. It takes a while to find your way around it, but it’s worth it. This post by Ben Watson will help you get started: http://www.philosophicalgeek.com/2012/07/16/how-to-debug-gc-issues-using-perfview/, plus these tutorials: https://channel9.msdn.com/Series/PerfView-Tutorial. I’ve also used the JetBrains and RedGate profiling tools, and they are all very good.

Q: What other techniques can be applied in an embedded .NET, single-use application to avoid unpredictable GC hesitation, considering the embedded device is not memory constrained?

A: Cut down your unnecessary allocations: take a look with a tool like PerfView (or any other .NET profiler) to see what’s being allocated and whether you can remove it. This post by Ben Watson will help you get started with PerfView: http://www.philosophicalgeek.com/2012/07/16/how-to-debug-gc-issues-using-perfview/. PerfView will also tell you how long the GC pauses are, so you can confirm whether they are really impacting your application or not.

Q: How is benchmarking different from finding the difference of timespan between start and end of a function?

A: BenchmarkDotNet does a few things to make its timings as accurate as they can be (a minimal usage sketch follows the list):

  1. It uses Stopwatch rather than TimeSpan, as it’s more accurate and has less overhead.
  2. It calls the [Benchmark] method once, outside the timer, so that the one-time cost of JITting the method is not included in the timings.
  3. It calls the [Benchmark] method several times, in a loop. Even Stopwatch has limited granularity, which has to be accounted for when the method takes only a few nanoseconds to execute.
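A minimal example of what a BenchmarkDotNet benchmark looks like in practice; the compared methods are arbitrary placeholders:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class StringConcatVsBuilder
{
    [Benchmark]
    public string Concat()
    {
        var s = string.Empty;
        for (int i = 0; i < 100; i++) s += "x";
        return s;
    }

    [Benchmark]
    public string Builder()
    {
        var sb = new System.Text.StringBuilder();
        for (int i = 0; i < 100; i++) sb.Append("x");
        return sb.ToString();
    }
}

public class Program
{
    // BenchmarkDotNet handles warm-up, looping and statistics itself.
    public static void Main() => BenchmarkRunner.Run<StringConcatVsBuilder>();
}
```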

Q: This is not a question, but an answer to the question about examples of unit tests not showing performance issues. We need to load data from Azure / SQL Server in production, but in unit tests we have a mock service that responds immediately.

A: Thanks, that’s another great example – a mock service is going to perform much more quickly than a real service!

Q: What measure could point me in the right direction to know that I could benefit from object pooling?

A: See Tip 3 and Tip 4 in this blog post by Ben Watson http://www.philosophicalgeek.com/2012/06/04/4-essential-tips-for-high-performance-garbage-collection-on-servers/ for a really great discussion on ‘object pooling'.
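As one concrete flavor of pooling, the framework's ArrayPool<T> from System.Buffers reuses buffers instead of allocating new ones each time; a minimal sketch:

```csharp
using System;
using System.Buffers;

public static class PooledCopy
{
    public static void Process(byte[] input)
    {
        // Rent a buffer at least as large as requested instead of allocating one.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(input.Length);
        try
        {
            Array.Copy(input, buffer, input.Length);
            // ... work with buffer[0 .. input.Length) ...
        }
        finally
        {
            // Returning the buffer is what makes pooling pay off.
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```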

Q: How does benchmarking work on asynchronous code?

A: Currently we don’t do anything special in BenchmarkDotNet to help you benchmark asynchronous code. It’s actually a really hard thing to do accurately and so we are waiting till after the 1.0 release before we tackle it, sorry!

 

About the speaker, Matt Warren


Matt is a C# dev who loves nothing more than finding and fixing performance issues. He's worked with Azure, ASP.NET MVC and WinForms on projects such as a web-site for storing government weather data, medical monitoring devices and an inspection system that ensured kegs of beer didn't leak! He’s an Open Source contributor to BenchmarkDotNet and RavenDB. Matt currently works on the C# production profiler at ca and blogs at www.mattwarren.org.