Do you know how to write very fast C# code? Here's a sobering fact: many schools and universities only teach how to write valid C# code, and not how to write fast and efficient code.

Did you know that adding strings together inefficiently can slow down your code by a factor of more than two hundred? And 'swallowing' exceptions will make your code run a thousand times slower than normal.

Slow C# code is a big problem. Slow code on the web will not scale to thousands of users. Slow code will make your Unity game unplayable. Slow code will have your mobile apps gathering dust in the app store.

In this session, our guest speaker Mark Farragher will show you many common performance bottlenecks and how to fix them. We’ll introduce each problem, write a small test program to measure the baseline performance, and then learn how you can radically speed up the code.

Watch the webinar and learn:

  • The low-hanging fruit: basic optimizations
  • How to read compiled MSIL code
  • The struct versus class debate
  • Optimize for the garbage collector
  • Writing directly into memory with unsafe pointers
  • Use dynamic delegates to dramatically speed up reflection


How to Write Very Fast C# Code on Vimeo.

For source code of the examples, please email Mark at mark@mdfarragher.com

Video Content

  1. Throwing exceptions (5:09)
  2. Fast String Handling (8:30)
  3. Fast arrays (14:26)
  4. Fast Loops (19:25)
  5. Fast Structs (23:00)
  6. Fast Memory Copy (28:09)
  7. Instantiation (31:51)
  8. Property Access (38:06)
  9. Q&A (45:32)

Webinar Transcript

Hello, everyone. I'm happy to be here. Let's get started. So I would like to talk about how to write very fast C# code. I'm going to show you a couple of tips and tricks to speed up your C# code. We're going to do lots of benchmarks to find out what kind of code is slow in C#, what kind of code is fast, and where the pitfalls are. Before we get started, there's a couple of offers that I'd like to bring to your attention. First of all, I have 10 courses on Udemy about C# programming. If you're interested, I've got a coupon called POSTSHARP15 that'll get you a 90% discount on any of my Udemy courses.

I've also copied all my courses to Teachable, to my own Teachable environment, and there I have a subscription model where you can get access to any course for $9 per month, and future courses are included. So if you're in the subscription, any new course that I produce, you'll automatically get access to, and I produce a course roughly once a month, or once every two months if it's a slow month. So you can expect new content every month. Last but not least, the source code that I'm using in this webinar, I will send it to you if you send me an email. So just email me at the email address at the bottom here, and I will reply with a zip file of the solution, and you can play around with this actual code and do some benchmark testing of your own.

Okay. Let's get started. So these are the topics I want to cover in this webinar. I'm going to show you the overhead of throwing an exception. I'm going to show you how to manipulate strings, so how to handle strings in C#. I'm going to take a look at arrays, at the different types of arrays in C# and how they match up in terms of performance. I'll show you the difference between a For and a Foreach loop and what that does to your performance. I'll show you structs versus classes. I mean, you probably know that structs are slightly faster than classes, but how big is the difference? And is it worth the trouble of refactoring your code? I'll show you how to copy a block of memory using different techniques, and I'm saving the best for last. At the end, I'm going to show you how to instantiate classes and how to do basic reflection in an extremely fast manner. I'm going to show you a piece of code that emits custom CIL instructions. So we're basically compiling C# on the fly, creating a custom assembly to do super fast reflection.

So let's get started. I'm using Visual Studio community edition to run this code on OSX. I've written a console program to do the performance measurements, and I'm using a base class called PerformanceTest right here to run the different performance tests. I have three methods here, MeasureTestA, MeasureTestB, and MeasureTestC. So these are virtual methods, and in any derived class, I will put the test code in here. The format is always that the first one, Test A, is the baseline test. So this would be unmodified slow code, and then we have two methods, B and C, available to try out different kinds of optimizations. And then for the actual performance test down here, the performance test is fairly simple. Basically what I do is I go through these Test A, Test B, and Test C methods, and I repeat them a number of times. So you can see there's a constant here, default repetitions with a value of 10. I basically repeat the tests 10 times to average out the effects of the garbage collector because an ill-timed collection of the garbage collector can really slow down one of the tests. So if we just run it 10 times, then we average that effect.

To measure performance, you're supposed to use the Stopwatch class in C#. So when you're doing benchmarking, please don't use DateTime; always use Stopwatch, because Stopwatch is a specialized class for measuring time intervals extremely accurately. So you can see I always start by restarting a stopwatch, doing the test, stopping the stopwatch, and then I have the elapsed milliseconds right here, and I'm adding that to a variable, and then in the end, you can see it right here, I'm returning the total value divided by the number of repetitions. So we're doing 10 repetitions. So we take the total elapsed time for every method, and dividing by the number of repetitions gives you the average execution time.
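For readers following along without the downloadable solution, here is a minimal sketch of what such a harness could look like; the names follow the transcript, but the body is a reconstruction, not the exact webinar code:

```csharp
using System;
using System.Diagnostics;

public abstract class PerformanceTest
{
    protected const int DefaultRepetitions = 10;

    // Test A is the baseline; B and C hold candidate optimizations.
    public virtual void MeasureTestA() { }
    public virtual void MeasureTestB() { }
    public virtual void MeasureTestC() { }

    // Runs one test repeatedly and returns the average elapsed milliseconds,
    // so a single ill-timed garbage collection can't skew the result.
    protected long Measure(Action test)
    {
        var stopwatch = new Stopwatch();
        long totalMilliseconds = 0;
        for (int i = 0; i < DefaultRepetitions; i++)
        {
            stopwatch.Restart();
            test();
            stopwatch.Stop();
            totalMilliseconds += stopwatch.ElapsedMilliseconds;
        }
        return totalMilliseconds / DefaultRepetitions;
    }
}
```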

Throwing exceptions

So let's get started. I have the program running right here. So it's a console program with eight tests pre-configured, and I can simply select a test and run it. So you can see the first one is exceptions. Let me go back to the code and show you the code. So the exception test is right here. So you can see it's just a class that's derived from PerformanceTest. So I get to implement these MeasureTestA and MeasureTestB methods. What I'm doing, check this out here in the constructor, what I'm doing is I'm filling a list of 1000 strings. So I've got a list of strings, 1000 strings, and every element in the list is a number, and the number has five digits. So I pick the digits from this list at random, and you can see I have an X here at the end. So what this code does is it mixes digits together to create random numbers, and 1 in 11 digits is going to be an X, which will make any number invalid.

So 9% of my list element population is going to be invalid and won't parse. So what does my test do? See right here: Test A does a simple int.Parse and catches any format exception, and Test B does int.TryParse, and that's basically the only difference. So let me run the code, and we can see what happens. So I'm running Test 0, 100 iterations, and I include the baseline. So here we go. So now, it's doing the test, repeating it 10 times to average out any effects, and it'll show us the results in a little graph when it's done. There you go. So you can see that the slowdown of an exception in int.Parse is massive. We're looking at an execution time of 1.094 seconds for the Parse function, while the TryParse takes 9 milliseconds. So this is a massive difference in performance, and keep in mind, only 9% of the numbers were invalid. So if you have a much higher failure rate in your data, a much higher number of invalid values, exceptions are really going to slow down your code.

So the takeaway here is don't swallow exceptions in your code. Don't catch an exception and then do nothing with it. If you're parsing a lot of data, make sure that you validate the data before you try to parse it, and don't do it the other way around where you first parse the data, then catch the exception, and in the catch block, you recover. I mean, you can do that, but it will really slow down the execution of your code. Exceptions are super slow. They take roughly one microsecond to execute, which is incredibly slow, and that's because they're intended for debugging purposes. They capture the stack trace. They capture the context of the executing thread, and they prepare all this debug information. So they're not supposed to be thrown in mission critical loops in your code. So first takeaway: don't swallow exceptions. Avoid exception throwing as much as you can in mission critical code.
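In sketch form, the two strategies look like this (a minimal reconstruction, not the exact test code):

```csharp
using System;
using System.Collections.Generic;

// 'numbers' stands in for the list of five-digit strings described above,
// where some contain an invalid 'X'.
var numbers = new List<string> { "12345", "1X345", "99012" };
int total = 0;

// Test A: parse and swallow the exception - very expensive on invalid input
foreach (string s in numbers)
{
    try { total += int.Parse(s); }
    catch (FormatException) { /* swallowed */ }
}

// Test B: TryParse reports failure through its return value - no exception thrown
foreach (string s in numbers)
{
    if (int.TryParse(s, out int value))
        total += value;
}
```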

Fast String Handling 

Okay, so the next thing we're going to look at is string handling. So this is the classic example of C# performance optimization, and yet, I'm surprised by the number of people who don't realize this crucial difference. This is the code that does the test. The baseline test simply builds a string. So we start with an empty string right here, and then, in a simple loop, I add one character to the string, and I do that 50,000 times. So in the end, I have a string of 50,000 characters, and that's 100 kilobytes on the heap. And that's it. The B test does the exact same thing, but it uses a StringBuilder. So instead of adding strings together, it uses the Append method to append the character. The third test also builds up the string incrementally, but it uses pointers.

So you can see here that I start with a character array of 50,000 characters. Then, I fix that character array on the heap, and I ask for a pointer to the block of memory, and then I declare my own pointer and initialize it with this initial value. So the pointer will initially point to the first element in the character array, and then in a simple loop, I use the pointer to directly write this character into heap memory. So this is a very interesting test because it will show us how much faster StringBuilder is compared to a regular string, and if it's worth your while to use pointers instead of using the StringBuilder to speed up the code even more.
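In outline, the three tests look like this (a sketch, not the exact webinar code; the pointer version needs the project compiled with the unsafe option):

```csharp
using System.Text;

const int Count = 50000;

// Test A: naive concatenation - every += allocates a brand-new string on the heap
string s = "";
for (int i = 0; i < Count; i++)
    s += "x";

// Test B: StringBuilder appends into a reusable buffer
var sb = new StringBuilder();
for (int i = 0; i < Count; i++)
    sb.Append('x');

// Test C: write characters straight into a pinned char array
char[] buffer = new char[Count];
unsafe
{
    fixed (char* start = buffer)
    {
        char* p = start;
        for (int i = 0; i < Count; i++)
            *p++ = 'x';
    }
}
```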

So let's do the test and see what happens. So I'm running Test 1, the string test, 50,000 iterations with the baseline. Here we go. There you have it. So 553 milliseconds for the string, and the other two tests, they actually did run, but they are so fast that we can't see them at this resolution. So you're getting a hint already that the difference in performance is massive here. So let me run the test again. Now, I'm going to go to one million iterations, and I have to disable the baseline test, because otherwise we'd have to wait forever, and check this out. The StringBuilder takes two milliseconds, and the pointer operation is still so fast that we can't see it. So I'll do this again, and now, I'm going to do 100 million iterations, again without the baseline test. Just wait for it, and here are the results: 452 milliseconds for the StringBuilder and 169 milliseconds for the pointer. So there is this massive performance difference between using strings and using the StringBuilder. If you want ultimate performance, you can use direct pointer operations on the heap, and that'll get you roughly a three-fold performance boost over the StringBuilder.

So if you're wondering, why is the string so incredibly slow? It's because of this. Let me show it to you in a picture. When you append characters to a string, strings are immutable in .NET. So that means that any operation that modifies the string will create an entirely new string on the heap. So in the first loop iteration, we have an empty string, and then I add one character, so I get a string of a single character on the heap. Then, I add another character, and now, I have two strings on the heap, the original and the modified version. Then, I add a third character, and now, I have three strings on the heap, the original, the modified version, and again, the modified version, and so on and so on. So if I add 50,000 characters to a string, I end up with 50,000 disposed strings on the heap where each string is one character longer than the string before it. So it's a huge amount of data, and I'm flooding the heap with data, and I'm constantly doing this memory copy operation where the string is being copied to a new version and to a new version and to a new version. 

So once we hit the higher loop iterations, we're copying this block of roughly 100 kilobytes on the heap over and over: 100K, then 100K plus one character, then 100K plus two characters, and so on and so on. So it's super inefficient. If you use a StringBuilder, it works the way you would expect. You have this buffer in memory that can hold any number of characters. You can declare a StringBuilder and specify the size, and then you simply write characters into specific locations in that buffer of memory. So naturally, the StringBuilder is much, much faster. Now, the fun part is that, behind the scenes, the StringBuilder is actually using the pointer code from Test C.

So internally, the StringBuilder fixes a block of memory and then writes directly to the memory using character pointers, and the only reason why we see a difference between these two blocks of code is that in Test B, we have the overhead of the Append method, and that slows down the code a bit. So going back to the results: when you're modifying strings, always use a StringBuilder because it's way faster. I mean, 50,000 iterations for the string versus 100 million iterations for the StringBuilder in comparable time, that's pretty obvious. But if you want ultimate performance, use pointers directly. It'll give you another three-fold improvement over the StringBuilder.

Fast Arrays 

Okay, moving on to arrays. Let me show you the array test. I have a very simple piece of code. I declare a three-dimensional array. So I have an array with three dimensions. Then, I have three nested For loops to fill the array, and in the innermost loop, all I do is increment every array element by one, so super simple. So that's Test A. Test B uses a one-dimensional array, and it's flattened. So I have a one-dimensional array that has the same size as the three-dimensional array, and I use a simple formula to calculate the index into this one-dimensional array using the I, J, and K variables, and then I do the same operation. Finally, in Test C, I have a one-dimensional array. I still have the I, J, and K loops, but now, you can see, I simply have an index variable that starts at zero and is incremented by one. So instead of using the formula with the multiplication and the addition, I just have a simple variable that steps through the entire array and initializes it.
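In sketch form (the dimension size N is an assumption; the index formula matches the one described above):

```csharp
const int N = 100; // assumed dimension size

// Test A: three-dimensional array - every access goes through a method call internally
int[,,] a = new int[N, N, N];
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        for (int k = 0; k < N; k++)
            a[i, j, k]++;

// Test B: flattened one-dimensional array with a computed index
int[] b = new int[N * N * N];
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        for (int k = 0; k < N; k++)
            b[i * N * N + j * N + k]++;

// Test C: flattened array with a single incrementing index
int[] c = new int[N * N * N];
int index = 0;
for (int i = 0; i < N; i++)
    for (int j = 0; j < N; j++)
        for (int k = 0; k < N; k++)
            c[index++]++;
```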

So let's take a look at the performance of these arrays. So I'm running Test 2, 300 iterations with a baseline. Here we go. And here are the results. The three-dimensional array is slowest. The flattened, one-dimensional array is faster, and the array with incremental access is the fastest. Now, you might be wondering right now what's going on. Why is the three-dimensional array slower than a one-dimensional array with the exact same logic? I mean, if you think about it, the .NET framework needs to do the exact same calculation to find the memory location of this three-fold array index. So either I'm doing the formula, or the framework is doing the formula, but it's the same mathematical expression. So why do we see this difference? To explain that, I'm going to have to show you the intermediate language code of this compiled program, which I have right here.

So let's go to the array test, sorry, arrays test, plural. So here it is. Here's the array test class. This is the constructor. Let me just scroll down, and this is MeasureTestA. So the code to access the array element is right here. As you can see, this instruction loads a local variable onto the stack. So zero, one, and two are the I, J, and K variables, and then, this call does the array indexing, and you can see, it's actually a method call. It's an instance call to the array class, and within that class, it calls a method called Address, which expects three parameters. So behind the scenes, the .NET framework implements a three-dimensional array as a class, and any interaction with that class goes through methods. But now, let's look at the one-dimensional array. So that's right here, and the operations are here. So this is the code to index into a one-dimensional array, and the thing I want you to notice is this instruction here called load element address.

Load element address indexes into a one-dimensional array and returns the address of that element. So to work with one-dimensional arrays, the .NET runtime has a specialized CIL instruction. So load element address is optimized. It's specialized to work with one-dimensional arrays. So there's no method call. I don't have to go into a method and run some .NET framework code to get at the array element. It can all be done with CIL instructions. The only method calls in this block of code are these two, and they are only needed because my array dimensions are stored in a property. If I had used an array with a constant dimension, a constant size, this call wouldn't be there; it would simply be a load instruction to load a constant value, and then this entire block of code wouldn't have any method calls whatsoever. So the takeaway I want you to remember is that the intermediate language has optimized instructions for dealing with one-dimensional arrays in .NET. So code that uses a one-dimensional array will always be faster than code that uses two, three, four, five, or six-dimensional arrays, because of the difference in implementation in the .NET runtime.

Fast Loops

Okay, back to the program. So the next test is a comparison of For and Foreach. So let me show you the code. The test is here. Here we go. So I create a list with one million elements. It's a list of integers, and I fill the list with random numbers, so super simple. One million integers, and every list element is just a random number. Then, Test A uses a Foreach loop to loop through the list, and Test B uses a normal For loop with an integer index variable, and that's it. So let's run those tests and see what happens. And here are the results: 273 milliseconds for the Foreach loop, and 112 for the For loop. So the For loop is roughly twice as fast as the Foreach loop. To show you why that is happening, let me go back to the intermediate code. We can take a look at the compiled code and see how the loops have been implemented. So here's the test class. Let me scroll down. So that's the constructor, and here is MeasureTestA, and you can see that the Foreach loop uses an enumerator, a generic enumerator, to loop through the list.

So the first thing you have to do is call the GetEnumerator method on the list, and that gets you the enumerator. The enumerator itself has a Current property to access the value of the current element you're looking at, and there's a MoveNext method that will move the enumerator to the next element in the list. You can see MoveNext is a Boolean method. It returns bool, and this branch instruction basically jumps back to offset 12. So it loops this bit of code as long as the enumerator returns a value, and as soon as MoveNext returns false, we reach this point, and this leave instruction will exit the method. So you see that the Foreach loop is implemented by using an enumerator class and then repeatedly calling MoveNext and accessing the Current property to get the data. So that's a lot more overhead than a simple For loop, which is implemented here. It's this bit of code.

You can see that the For loop, it doesn't really use any classes at all. The only place where a class is being used, well, a method call is being used, is here when I access the elements in the list, but the loop itself is just this piece of code. So a For loop is going to be implemented with only a few CIL instructions, and it doesn't require any specialized classes. So that's why a For loop is much faster than a Foreach loop. Now, keep in mind, when you are looping through an array, there's no difference between the two because the C# compiler is very smart. If you use Foreach on an array, the compiler will generate a normal For loop behind the scenes. So you won't see any difference in performance, but for the more complex collection classes, you can see that there is a difference. It's about a factor of two.
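To make that lowering concrete, here is roughly what the compiler generates for the two loops; this is a sketch, assuming list is a List&lt;int&gt;, not the decompiled webinar code:

```csharp
// What the compiler roughly generates for: foreach (int value in list) { ... }
List<int>.Enumerator e = list.GetEnumerator();
try
{
    while (e.MoveNext())        // one method call per iteration
    {
        int value = e.Current;  // one property call per iteration
    }
}
finally
{
    ((IDisposable)e).Dispose();
}

// The plain For loop needs no enumerator at all:
for (int i = 0; i < list.Count; i++)
{
    int value = list[i];        // only the indexer call remains
}
```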

Fast Structs 

Okay, so moving on, structs versus classes. Let me show you the code, right here. So what I've done is I have declared a simple class that contains an X and a Y field, two integer fields, and a constructor to initialize those two fields. I've done the same thing as a struct. So this is basically the exact same thing, but now, it's a ... Whoops. Here, it's actually a bug in my code. It's not a struct or a class. Sorry about that. Let me quickly fix that. Let me see if this works. This is the demo effect. There's always something that goes wrong. Exit and restart. So I've defined a class with an X and a Y field, and I've defined a struct with an X and a Y field, and then the only thing my code does is fill a list with either classes or structs. So the C test fills a list with structs. The B test fills a list with classes, and the A test also fills a list with classes, but look at this: the class has a finalizer. So when this class gets disposed, the finalizer will be called by the garbage collector.
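The three types under test look roughly like this (the type names are assumptions; the fields and finalizer follow the transcript):

```csharp
public class PointClass
{
    public int X, Y;
    public PointClass(int x, int y) { X = x; Y = y; }
}

public class PointClassWithFinalizer
{
    public int X, Y;
    public PointClassWithFinalizer(int x, int y) { X = x; Y = y; }
    ~PointClassWithFinalizer() { } // forces the GC onto its single-threaded finalization path
}

public struct PointStruct
{
    public int X, Y;
    public PointStruct(int x, int y) { X = x; Y = y; }
}
```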

So let's run that code and see what happens, with the default number of iterations. So there is the difference. The class with the finalizer takes 246 milliseconds. The normal class takes 111 milliseconds, and the struct takes 6 milliseconds. So that's quite a big difference. The reason for that difference is the way that structs and classes are laid out on the heap. When I create a list of classes, this is what the memory will look like. The reference to the list is on the stack, so it's right here. The list itself is right here on the heap. The list has a number of elements. Each element is an object reference. So that'll be eight bytes in size, and the reference points to an entirely different location on the heap where the class instance is stored.

So if you calculate the amount of memory for an eight-megabyte list, you're actually looking at 32 megabytes of heap memory, because you have to store the list on the heap, and you have to store all the individual point classes for all the data. Now, when this gets garbage collected, there's going to be a load of objects on the heap, not just the list, but also all these individual point classes, and they all need to be garbage collected to be disposed. If you use a struct, the memory layout looks like this. So we still have the list reference on the stack pointing to the heap. The list is on the heap, but now, the data, the struct, is inline in the list itself. This is the difference between a class and a struct. Structs are stored inline within their containing type, whereas classes are stored separately, and the containing type contains a reference.

So now, the entire struct, it's two integers, fits inside an eight-byte element. So now, the entire data structure is only eight megabytes, and it's only a single object on the heap. So when the garbage collector has to clean up the memory, it goes to the list, disposes the list, and it's done. That's all it needs to do. So using structs is extremely lucrative in these kinds of scenarios: you have lists with a large amount of data; the data itself contains only a few fields, like X and Y or X, Y, Z coordinates, think points, vectors, things like that; and you use the data for a short amount of time and then don't need it anymore. So you only briefly need access to the data. If those three conditions are met, then structs are extremely lucrative to use.

And finally, the big slowdown with the finalizer happens because when your classes have a finalizer, the garbage collector needs to call the finalizers one after another to dispose of your classes, and it does so on a single thread. So if you have one million classes right here on the heap, the garbage collector has to call one million finalizers to get rid of all the data, and that's really going to slow down your code. So that's why you get this difference in performance.

Fast Memory Copy

Okay, moving on to copying blocks of data. So what we're going to do is take a byte array right here, a byte array of one million bytes, so one megabyte in size, and copy this entire array into another byte array. So we're just copying a block of memory. The most straightforward way of doing that is simply using a loop, right here, iterating through every byte in the array and manually copying it into the other array, and again, to slow down this test a bit, I repeat this whole thing 500 times. So I'm copying one megabyte of data 500 times. Now, in Test B, I do the same thing with pointers. So you can see I use the fixed keyword again to get two pointers to the first element of each buffer, and then I use these source and destination pointer variables to do the copy like this.

So here, I'm using array indexing, and here, I'm using byte pointers. Finally, in the last method, I use the CopyTo method; an array has a CopyTo method, which will quickly copy the array to another array. So now, we can see the difference: how slow is the manual process, how much faster is it with pointers, and is there any benefit in using the CopyTo method instead. So let's test that out, byte array copy, number 5, 500 iterations with a baseline. Wait for it, and there we go. So the direct copy operation takes 400 milliseconds. When we use pointers, it's 388 milliseconds. So this is very interesting: using pointers doesn't actually have much benefit when we're working with bytes. The CIL implementation, the intermediate language that the compiler produces, is already so efficient that using pointers doesn't really have any added benefit, and this is perfectly in line with what I told you earlier, that the intermediate language runtime is optimized for one-dimensional arrays.

So when you're already working with one-dimensional arrays, you don't really need to optimize further with pointers, but look at the CopyTo method: 32 milliseconds. That's massive. That's 10 times faster. The reason for this is that the CopyTo method is incredibly optimized. It actually calls into the operating system, into a low-level function for copying a block of memory. So basically, the CopyTo method simply fixes those two blocks of memory on the heap, just like the fixed keyword, but then it calls an OS function and says, "I've got this block of one megabyte. Could you please copy it to this other memory address," and then the operating system does it. Now, that is extremely fast. There's no way we can go even faster with C# code. I mean, you can't beat the operating system. So the takeaway here is that one-dimensional arrays are already super fast, so you don't really need pointer operations there, but if you are simply copying a block of memory, and you're not doing anything special with the array values, then the CopyTo method is the way to go, because then you allow the operating system to copy the entire block of memory, and that gives you maximum speed: a ten-fold improvement. So that's pretty cool.
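In outline, the three copy techniques look like this (a sketch using the sizes mentioned above; the pointer version requires compiling with the unsafe option):

```csharp
byte[] source = new byte[1000000];
byte[] destination = new byte[1000000];

// Test A: manual byte-by-byte copy with array indexing
for (int i = 0; i < source.Length; i++)
    destination[i] = source[i];

// Test B: the same copy through unsafe byte pointers
unsafe
{
    fixed (byte* srcStart = source)
    fixed (byte* dstStart = destination)
    {
        byte* src = srcStart;
        byte* dst = dstStart;
        for (int i = 0; i < source.Length; i++)
            *dst++ = *src++;
    }
}

// Test C: CopyTo hands the whole block to an optimized low-level routine
source.CopyTo(destination, 0);
```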

Instantiation

Okay, the next performance test, now this one is very nice. I'm going to show you a very fast way to instantiate an object. So let me start with the simple code first. So here's my baseline. Instantiating an object means we construct an object, an instance of a certain type, and often, when you have to use reflection, the type isn't known at compile time. The type that you want to create is only known at runtime. You often get this if you have configuration information in an XML file, and your code needs to dynamically adapt to whatever is in the configuration file, or if you use something like XAML, not actually XAML, but say, your own implementation, and you have a complicated data-binding expression, something that you write out as text. You bind one property to another property, and then you somehow need to turn that into executable code.

So these are all scenarios where your code has a string that contains a type name, and you need to instantiate an object of that type. So the most straightforward way of doing that is simply using reflection. So I have my string right here. You see I'm going to create a StringBuilder. So I take the string and get the Type object for that type name, and then I use this line of code. You've probably seen it a couple of times in other programs: Activator.CreateInstance, which will instantiate an object of that type. Then this code here is just a sanity check. I look at the object, and I check that the type is actually a StringBuilder. So if we see an exception, we know that the code is acting weird.

So that's one way of doing it. Now, the fastest way of doing it is like this. This is cheating, because here, I'm actually constructing a StringBuilder directly. So here, the type information is known at compile time. Obviously, this wouldn't be possible in a normal program, but I'm just adding it here for reference purposes so we can see the difference in performance. So this is compile-time instantiation, and this is runtime instantiation using reflection. And now, I'm going to show you a really cool trick, a way to quickly instantiate types at runtime. That's this bit of code here. Now, all the magic happens in this GetConstructor method. So let's look at that. So here is GetConstructor, and what GetConstructor does is create a dynamic method.

Dynamic methods are super cool. They were introduced into .NET when LINQ expressions were introduced. With LINQ, you can create a LINQ expression, turn it into an expression tree, and then at a later time, turn that expression tree into executable code and run it. Behind the scenes, that library uses dynamic methods to create new methods on the fly at runtime. Now, when we instantiate an object in intermediate code, it's super simple, because you only need two CIL instructions. Actually, you only need one CIL instruction for the instantiation itself. The instruction is called newobj, new object, and the only thing it needs is a reference to a constructor. So it's a single CIL instruction, and it will instantiate an object. So what this code does is create a new dynamic method, and then it uses an intermediate language generator to fill this method with CIL instructions one after another.

So the first instruction that we inject into this method is simply newobj. This will call the constructor of the object that we're trying to create, and then the second instruction is ret, return, because we want to return out of the method, and that's it. So this DynamicMethod is returned right here. You can see I return it as a constructor delegate, which I've declared up here. So my constructor delegate is simply a delegate that describes a method without any parameters that returns an object. So let's run this code and see what happens. Instantiation, one million times, and with a baseline, so here's the difference. Using reflection takes 85 milliseconds, which is not bad, but using my DynamicMethod takes only 22 milliseconds. So it's really cool. It's four times faster, but now, look at this. Compiled code is 19 milliseconds. So there's almost no difference between constructing a dynamic method yourself and letting the compiler construct the method for you.

So this trick lets you use reflection-like techniques, a form of dynamic programming to create objects at runtime of any type you want, at the same performance level as compiled code. Now, this is super important, because my career spans 20 years, and I have used reflection many, many times in my code projects to instantiate objects, and with this trick, I get almost native performance. So there's no need to use Activator.CreateInstance anymore. You can simply use a DynamicMethod. So the takeaway here is: please be aware that you can do it this way. You don't need to use classic reflection to create new objects. You can use this neat trick to create your own methods, inject CIL instructions into them, and make them do anything you want, and creating an object is super simple. You only need two CIL instructions. So this whole magic just happens in this block of code. So it's fairly compact. It's a drop-in replacement for Activator.CreateInstance, and it gives you a massive performance increase. So please be aware that this is possible.
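For reference, here is a compact sketch of the technique. ConstructorDelegate follows the transcript, while the FastActivator class name and the exact method body are assumptions rather than the exact webinar code:

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

public delegate object ConstructorDelegate();

public static class FastActivator
{
    // Builds a delegate that instantiates 'type' through its public
    // parameterless constructor, using two emitted CIL instructions.
    public static ConstructorDelegate GetConstructor(Type type)
    {
        ConstructorInfo ctor = type.GetConstructor(Type.EmptyTypes);

        var dm = new DynamicMethod("CreateInstance", typeof(object), Type.EmptyTypes);
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Newobj, ctor); // instantiate the object
        il.Emit(OpCodes.Ret);          // return it

        return (ConstructorDelegate)dm.CreateDelegate(typeof(ConstructorDelegate));
    }
}

// Usage: pay the emit cost once, then call at near-compiled speed.
// ConstructorDelegate create = FastActivator.GetConstructor(typeof(System.Text.StringBuilder));
// object instance = create();
```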

Property Access

Okay, now, the final performance benchmark I'm going to show you is property access because if you think about it, this DynamicMethod trick is super cool. We can inject any kind of CIL instructions into a new method, make it do anything we want. So could we access a property using a DynamicMethod? So let's find out. So my code is here. Let me go down to the Test methods first. So the first thing I do is I use classic reflection. So I'm creating a string builder right here. Then, I use classic reflection to get a PropertyInfo instance, and you can see I access the type of the string builder, and then I access the property called Length. So now, I have a PropertyInfo variable, and then to get the value of Length, all I need to do is this, pi.GetValue. That's it, and then here's a simple sanity check, my name, my full name. It's exactly 21 characters, so I'm checking that the value really is 21. If not, we'll see an exception.

So this is classic reflection. Compile-time code would look like this. To access the length of the StringBuilder, I simply write sb.Length, and that's it. So that's compile-time code, and it gives us maximum performance. But now, the DynamicMethod. I have a method here called GetPropertyGetter, which will get me access to the getter of the Length property of this type. So Getter will point to the property, to the internal Get method of the property, and then to call it, I can simply do this. So let's see how that works. So here is roughly the same code again. You can see I have my DynamicMethod, which I ... Whoops. Wrong one. You can see I have my DynamicMethod here, which I instantiate. So I'm creating a GetValue method, and I'm injecting CIL instructions into it, and here are the instructions I'm creating.

So the first thing I'm emitting is a CIL instruction called load argument 0. So what this will do is it will load the first argument onto the internal CIL execution stack. So the first argument would be this one. Here's my property, GetDelegate, and you can see it's a delegate for a method that accepts a single object parameter, and it returns an object return value. So load argument 0 will load this value. Then, what I'm emitting is a call, an instance call, to the Getter function, and the Getter is up here. It's the Get method of the property that I specified. So there's a tiny bit of classic reflection here to get to the method info of the Getter of the property, but from then on, I simply use that variable to emit the call instruction directly, and now, you might be aware that the .NET framework, it can transparently work with value types and reference types, but if you have a value type, and you return it as an object, you have to box it. 

So that's what I'm doing here. I'm looking at this getter method, and if the return type of the getter is a value type, then I'm emitting a box instruction to box this integer. I mean, we know that the length of the StringBuilder is an integer, so it's going to be a value type. So I'm boxing it into an object, and then, here's the return, and that's it. So now, this is a super compact DynamicMethod with, well, four CIL instructions: load argument 0, call the getter, box the value type, and return. So four CIL instructions to perform the access to the property, and then, to use this, all I need to do is this. I call this getter delegate. I provide the StringBuilder, and it returns me the length. So let's run the code and see what happens. So I'm doing the property access. We're doing ... What's this? Five million iterations and with a baseline. Wait for it, and that's the result.

So now, this is extreme, huh? The classic reflection takes 910 milliseconds. So it's fairly slow. The DynamicMethod takes 55 milliseconds. So that's pretty awesome. That's ... what is it? 910 divided by 55, roughly 16 times faster. So that's quite a speed improvement. Doing it in compiled code takes only one millisecond. So that's extremely fast. You can see that there is some performance overhead associated with calling the delegate that runs this dynamic code. When we created the object, there wasn't that much difference between doing it at compile time or doing it with a dynamic delegate, but here, you can see that the compiler is able to very quickly access this Length property, whereas my DynamicMethod is 55 times slower, so it's a big difference.

But we're assuming this is a scenario where you can't write compile-time code. You don't know the type that you want to work with at compile time, so this option is basically out of the window. So your only choice is classic reflection or using a DynamicMethod, and the DynamicMethod gives you a 16-times performance improvement. So the takeaway here, the thing that I want you to remember, is that using DynamicMethod is not that complicated. You can see my code is fairly compact. I've added the Setter as well. I'm not using it in my example, but when you download my code, you can play around with the Setter too. You can see that creating a property getter using dynamic CIL instructions is not that much code. It's just this bit, and the CIL instructions that do the work are just this section here.
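For readers following along without the screen, here is a sketch of that section. GetDelegate, GetValue, and the four instructions follow the transcript, while the FastProperty class name and exact body are assumptions:

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

public delegate object GetDelegate(object instance);

public static class FastProperty
{
    // Builds a delegate that reads a property through four emitted CIL instructions.
    public static GetDelegate GetPropertyGetter(Type type, string propertyName)
    {
        // A tiny bit of classic reflection to locate the getter method...
        MethodInfo getter = type.GetProperty(propertyName).GetGetMethod();

        // ...then a DynamicMethod that calls it directly.
        var dm = new DynamicMethod("GetValue", typeof(object), new[] { typeof(object) });
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);          // load the instance argument
        il.Emit(OpCodes.Callvirt, getter); // call the property getter
        if (getter.ReturnType.IsValueType)
            il.Emit(OpCodes.Box, getter.ReturnType); // box value types into object
        il.Emit(OpCodes.Ret);              // return the result

        return (GetDelegate)dm.CreateDelegate(typeof(GetDelegate));
    }
}

// Usage:
// GetDelegate getLength = FastProperty.GetPropertyGetter(typeof(System.Text.StringBuilder), "Length");
// object length = getLength(someStringBuilder); // boxed int
```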

So creating the DynamicMethod, it sounds intimidating, but it's not that complicated actually, and it will give your code a massive speed improvement if you use it to replace your classic reflection code. So please be aware that this option is on the table. And that brings me to the end of this webinar.

So the coupon to get any of my Udemy courses at a 90% discount is POSTSHARP15. So use that code on any Udemy course to get the discount. If you want to spend even less, then go to my training.mdfarragher.com website. This is my own Teachable environment, and there, you can take a subscription for $9 a month. It'll give you access to everything, and any courses that I create in the future will automatically get added to the subscription. So that means that roughly once a month, you can expect a new course from me, and you will be enrolled in that course automatically. Finally, if you want the source code that I've just shown you, to play around with the code and create some dynamic methods of your own, then just send me an email at mark@mdfarragher.com, and I'll reply with the source code as an attachment, and then you can play around with it. I've used Visual Studio Community edition on OSX, but of course, the code will work in any Visual Studio edition, and I'm using .NET Core 1.1, but I'm not doing anything exotic. So you can easily take the code and run it against the classic .NET framework. It will still work. So send me an email, and I'll send you the source code.

Q & A

Q: Which version of .NET are you using? 

A: I'm using .NET Core 1.1. I'm using the default C# version, which is version 7.

Q: Is there a concern with using string.Format as opposed to string builder?

A: It depends on how you use it. String.Format, internally, of course, uses a StringBuilder. So the call to Format itself will be pretty efficient, but of course, it returns a string. So it all depends on what you're doing with string.Format. If you take the output and simply add it to another string, and you do that in a tight loop in the mission critical part of your program, then you are going to see a performance hit. But honestly, I use string.Format all over the place in my own code, in logging code, in tracing code, output code, and it's all good. So don't worry about calling string.Format, but if you are looking at a mission critical loop in your code, then do consider removing it. One final thing: if you use the kind of strings that start with the dollar sign and have embedded variables, like right here, that's simply syntactic sugar. It calls string.Format behind the scenes. So this is actually a string.Format call, and again, don't worry about it. Just use it wherever you like, but in tight loops, in mission critical code, consider removing it.

Q: Is there any difference using DynamicMethod versus expression or lambda expression for reflection?

A: It's pretty much the same, but the DynamicMethod is slightly faster. I read a benchmark that compared all these different ways of creating dynamic expressions, and the DynamicMethod was the fastest. It's only a slight difference, so if you prefer to use expressions instead, just go for it. If you want maximum performance, then use DynamicMethod.

Q: Do you have advice for which of these methods you recommend we look at all the time versus looking for them in the hot path? For example, if something is called once at startup, do you strive for these optimizations in your code?

A: Yes. Absolutely. When you identify the hot path in your code, please do take out repetitive calls, repetitive instantiations, and initializations, and move them outside of loops or outside of the hot path. That's basically step one in optimization.

Q: Looking at the GetConstructor method code, a question about string appending: how many string additions does it take before the StringBuilder is more efficient than normal string addition?

A: I measured it. The answer is four. So I actually did these measurements. If you do fewer than four string concatenations, the string is faster, because of the overhead of actually creating the StringBuilder, but if you do four or more, then the StringBuilder is faster, and the two start to deviate really quickly. So again, don't religiously remove all normal string concatenations from your code, because it makes your code a lot less readable. StringBuilder is a nice class, but the Append syntax is not very nice. It's a lot less clear than simply using a plus sign to add two strings together. So for three strings or fewer, absolutely no problem, use normal strings. More than three, use the StringBuilder.

Q: When working with loops, is the performance loss the same when using LINQ queries? For example, does the Foreach expression have the same performance loss as a normal Foreach loop?

A: The fun thing about LINQ is that it always uses an enumerator. So if you use Foreach with LINQ, you get the enumerator code to incrementally step through the expression, and if you try to do it with a classic loop, you still get the enumerator code, because LINQ is built on top of enumerators. The whole thing is one giant enumerator with nested methods on top. So with LINQ, you will see the slower performance no matter what you do.

Q: Since DynamicMethod has been in .NET since the introduction of LINQ, why doesn't the optimized reflection code you demonstrated exist in the reflection library outright as an existing class if the implementation is the same regardless of the code you're pulling?

A: Honestly, I'm not sure. My hunch is that it has to do with backwards compatibility, but it's a good point. You could definitely rewrite the classic reflection code and make it much faster using this methodology. My hunch would be that Activator.CreateInstance behaves slightly differently from a DynamicMethod, and if they had tried to do this, it would have broken backwards compatibility.

Q: Can you please suggest any good tools to check performance issues in code?

A: I have Visual Studio Enterprise in a virtual machine, and it comes with a performance profiling tool. That one's pretty amazing. So I really like the tools that are bundled with Visual Studio, and in fact, those are the only tools I use. I mean, I've demoed all this code using Visual Studio on OSX because it's easier, because I don't want to run a performance benchmark inside a virtual machine, but when I'm doing my day-to-day programming, I'm using Visual Studio Enterprise in a Windows VM, and the tools in there are just great. So I would say start with those. I really don't have any other recommendations beyond the standard Visual Studio tooling.

Q: Which one is faster, typeof or the GetType method?

A: I think typeof is faster, but it's a hunch. I'm going to have to check that.

Q: Can you tell us a bit about projects where you have used these optimizations?

A: In the past, I wrote this huge ASP.NET web library where you could create web pages using a XAML-like syntax. So you could basically just map out your entire web page in HTML, but you could use special binding expressions inside the HTML. So I could put a text box in there and then bind the contents of the text box to a variable in my ASP.NET code. So this wasn't XAML. It was my own project, and I was kind of inspired by XAML, and the code that parses those data-binding expressions uses these dynamic methods to speed it up. I started out with classic reflection to parse an expression and then access objects, access properties, and get values, and it was incredibly slow. So I rewrote the whole thing using DynamicMethod. So any place where you are creating expressions based on text data, so not actual code, but something that's stored in a text file, it could be XML or a config file or anything: any situation where that occurs, using DynamicMethod is really going to help you.

Q: What's the overhead of string interpolation in C# 7 versus string.Format?

A: So string interpolation is the dollar syntax, I think. It's exactly the same. So behind the scenes, string interpolation is string.Format. So you're not going to see any performance difference between the two.

Q: In the strings test, would string builder perform better if you passed capacity into the constructor?

A: Yes. That is an excellent question. Since I didn't initialize my StringBuilder with a capacity, it gets initialized on the heap with a default size, which I think is 16 characters, or something like that. Every time you hit the limit, so when you add characters and the buffer is full, it doubles in size, and of course, to double it in size, the framework has to allocate a new buffer with twice the size and then copy all the data over. So it's doing exactly the same thing that the string is doing, but the difference is that it happens on each doubling, not on every character addition. It only happens when the buffer is full and has to grow. So the StringBuilder performs logarithmically fewer of these allocate-and-copy operations than the string, but it's still doing them. So if you instantiate the StringBuilder at maximum size right from the start and then fill it with data, then you never have to expand the buffer. There is enough room in memory, and you're simply writing characters one by one directly into that area of memory, and that will give you maximum performance. Great question. Good observation.
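A sketch of that pre-sizing, using the character count from the string test above:

```csharp
// Pre-sizing the StringBuilder: the buffer is allocated once, at full size.
var sb = new System.Text.StringBuilder(50000);
for (int i = 0; i < 50000; i++)
    sb.Append('x'); // no grow-and-copy cycles ever happen
```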

Q: Why do 9% of the numbers throw exceptions?

A: Several viewers have pointed out that the 9% number I mention in the webinar is incorrect. Here is the correct calculation:

I'm building numbers from individual digits. There are 11 possible digits: 0-9 and the letter 'X'. So, the chance of a single digit being invalid is 1/11. A number consists of 5 digits, so a rough estimate of the failure rate is 5 × (1/11) ≈ 45%. That estimate slightly overcounts numbers containing more than one 'X'; the exact chance of a number being invalid is 1 − (10/11)^5 ≈ 38%. Either way, the loop in my code fails and throws an Exception far more often than the 9% I mentioned.
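The exact figure is easy to verify with a one-line check (a throwaway snippet, not part of the webinar code):

```csharp
// Chance that at least one of 5 digits is the 'X' symbol (1 of 11 possibilities).
double invalid = 1 - Math.Pow(10.0 / 11.0, 5); // 0.3790..., about 38%
```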

Q: How to get mastery in reflection and dynamic code?

A: By practicing a lot. Write lots of code that uses reflection and dynamic emitting. Experiment, measure performance, see how far you can go optimizing your code. Play around and discover what works and what doesn’t. Plus: read lots of blog posts and articles.

Q: Why would it not be beneficial to use structs for all simple business objects? Is there a point of degradation or some limitation over a class? Is a struct usable with Entity Framework to represent database objects?

A: The .NET Runtime makes certain assumptions about structs and classes, specifically that structs will be very small (in terms of memory space) and have a short lifetime, and that classes will either be small or large and have a long lifetime. Simply replacing all classes with structs in your code is dangerous, because you will go against these assumptions. For example, if you change a long-living object to a struct, it will get boxed on the heap, and your code will be even slower than when using classes. A struct also gets copied during each method call, so passing a very large struct to many different methods will slow down your code a lot.

The rule of thumb here is to always start with classes, and only use structs when it makes sense to do so.

The Entity Framework does not support structs.

Q: Can we use DynamicMethod trick on AOT platforms (via Mono)?

A: Nope. The ILGenerator class is missing, so you can’t emit your own CIL code into the dynamic method. Makes sense, right? It couldn’t possibly work with AOT.

Q: CIL stuff is really interesting. Perhaps worth mentioning that string interpolation and string.Format use a StringBuilder, so you don't always need to explicitly use StringBuilder. Also, StringBuilder has a little overhead, so for fewer than 4 strings, something like str1 + str2 + str3 is faster, I think.

A: Correct! String interpolation ($"yadda {yadda}") compiles to a String.Format call, so it's exactly the same thing. I always use interpolation because it's so much easier to type.

You're also spot-on with the string versus StringBuilder comment. A StringBuilder has some initialization overhead, so it is actually slower for a small number of additions. The cutoff point is at 3 additions: for zero to three, the string is faster; for four and more, the StringBuilder is faster. For larger numbers of additions, they start to diverge very quickly.

In my logging and diagnostic code, I always use strings (string interpolation) because I usually stay below the 3-addition limit, and it makes my code so much easier to read.

Q: Hi. For exceptions, what if TryParse is not available? What needs to be done for user-defined types instead of primitives?

A: You need to do the same thing that TryParse does internally: scan the input data first, and only start parsing if the scan says it's okay. Also make sure you return a parsing failure as a return value (i.e. a bool) instead of throwing a FormatException.

An easy way to scan is by using a precompiled regular expression to make sure the input data doesn’t contain any invalid characters. Regular expressions are super-fast.
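A sketch of that scan-then-parse pattern; the TryParseCustom name and the five-digit pattern are assumptions for illustration:

```csharp
using System.Text.RegularExpressions;

public static class SafeParser
{
    // Precompiled pattern for five-digit numbers like the webinar demo uses
    // (adapt the pattern to your own format).
    private static readonly Regex FiveDigits =
        new Regex(@"^\d{5}$", RegexOptions.Compiled);

    // TryParse-style API: failure is a return value, never an exception.
    public static bool TryParseCustom(string input, out int value)
    {
        value = 0;
        if (input == null || !FiveDigits.IsMatch(input))
            return false;         // scan rejected the input, no exception thrown
        value = int.Parse(input); // safe now: the scan guarantees the format
        return true;
    }
}
```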

Q: Any comment about the differences between copying arrays, lists, C# hashtables, etc. on the heap?

A: In terms of memory layout, there’s not that much difference between an array, a list, or a hashtable. All three use arrays internally to hold the data. A hashtable is optimized for key/value lookup, whereas list and array are intended for indexed access.

They all have a CopyTo method that attempts to block-copy all data in one go. If you’re storing value types, you will see great performance for all three.

Q: Are you going to review LINQ / Parallel performance someday?

A: That’s a great idea! Thanks for the suggestion. I have an existing course already that scratches the surface of LINQ versus PLINQ performance, but I’d love to go deeper.

Q: Nice talk. By the way, StringBuilder may not be the fastest; it depends on the size, etc. You have to account for the GC allocations also. The best tool for that is BenchmarkDotNet with the memory diagnoser on Windows! It is a fantastic tool. General rule of thumb: whatever you do, you have to measure in order to see performance benefits.

A: Thanks for the suggestion. I’ll check out BenchmarkDotNet. And you’re right about the rule of thumb – you always have to do actual measurements, you can’t rely on just theoretical knowledge to optimize your code.

Q: Just a question on array.CopyTo(...), where Mark said that the memory copy was done out of process by the OS (in C libs, guessing "memcpy"). In the profiling application during the webcast, array.CopyTo(..) executed in 32ms, whereas the copy via index and loops was >300ms; in other words, using array.CopyTo is an order of magnitude faster with OSX as the OS. Is the 10-fold difference about the same with .NET on Windows? Different OS, different ratio?

A: Yes, the ratio is roughly the same. The speed of a memory copy is more or less the same for all operating systems, whereas you might see small differences in 1-dimensional array performance. I’ve noticed that .NET Core tends to be slightly faster than Mono in handling arrays, because it’s much better optimized.

Q: I measured. GetType() is 171 ms vs. typeof() at 6 ms in a test of a million iterations.

A: That’s because typeof() is processed at compile-time, whereas GetType() is processed at runtime.

Q: How do you keep yourself upto date on the latest and greatest technology?

A: I read lots of technical blogs, and when I’m preparing for a new course or webinar, I do a lot of research and write small test programs to experiment. And I probably have a talent for learning new stuff very quickly.

Q: Would you use some form of multi-dimensional converter to convert a single-dimensional array back to a multi-dimensional array, or would you take another approach?

A: It depends on the use case. I usually just wrap a 1-dimensional array so from the outside it looks like the original multi-dimensional array. The disadvantage of converting the other way is that you’re slowing the code down again, so I am a bit hesitant to use any kind of converter.

Q: Do you have any advice for Parallel.ForEach?

A: Yeah, use it! Parallel.ForEach is great for parallelizing regular for or foreach loops. It is my first step in parallelizing code, and quite often it’s all I need to do.

Two years ago, I wrote an app that processes Sharepoint documents. I had a for-loop in my code that would process each document individually. I parallelized the code simply by replacing my for-loop with a Parallel.ForEach. This drop-in replacement to make code multi-threaded is really nice.
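A sketch of that drop-in replacement; documents and Process are placeholders, not the actual Sharepoint code:

```csharp
using System.Threading.Tasks;

// Before: sequential processing
foreach (var document in documents)
    Process(document);

// After: the drop-in parallel version
Parallel.ForEach(documents, document => Process(document));
```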

Q: Have you tried these performance tests on .NET Core?

A: Yeah. Everything I show you in the webinar is running on .NET Core 1.1.

Q: Do foreach loops have a performance advantage over for loops in cases where the collection is already an enumeration or a function that uses yield return?

A: No. Enumerations and methods with yield return cannot be indexed, and they don't have a well-defined upper limit, so there's no benefit to using a for loop with them. If you did try to use a for loop, you'd have to manually call MoveNext() and access Current, and that would be the exact same code the compiler produces when you use foreach.

Q: Is there any significant difference between pre- and post-increment operations? In C++ I am accustomed to always doing ++i in preference to i++, but I rarely see this being done by C# developers.

A: It works exactly the same as in C++; the difference between the two is the return value: i before the increment or i after the increment. When the return value isn't used, as in a typical for loop, the compiler generates the same code for both, so there's no performance difference.

Q: Does the performance benefits you described for structs vs classes get lost when comparing the performance of passing classes vs structures to other functions (excluding cases where structs are being passed by reference)?

A: Passing structs to functions will slow down your code, because structs are copied by value. For every method call the entire struct will be cloned in memory. When you’re using classes, only the reference to the object instance is copied into the method.

So yes, for large structs with lots of fields you’ll see a measurable slowdown when doing lots of method calls with struct parameters.

Q: What is the difference between the heap and the stack?

A: The stack is a highly-optimized block of memory intended for data with a very short lifetime, just for the duration of a single method call. Stack memory is claimed when you enter a method and gets cleaned up when you exit out of a method. The stack is also fairly small, usually around 1MB per thread. It's optimized for a manageable number of small objects (thousands, not millions) with a very short lifetime.

The heap is a very large block of memory (multiple GBs) optimized for long-term storage. You can easily put millions of objects on the heap, and they can be either small or large. The heap groups long-lived data into generations, and a separate runtime component called the garbage collector cleans up objects that are no longer in use.

As a rule of thumb, the stack is slightly faster than the heap. It can also initialize new data very quickly by simply writing zeroes into memory, whereas objects on the heap each get their constructor called individually. The disadvantage of the stack is that it’s relatively small, and it assumes your data will be short-lived. The stack can also slow down if you have a very deep chain of nested method calls.
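To make the distinction concrete, a small hypothetical example:

    class StackHeapDemo
    {
        struct PointStruct { public int X; public int Y; }   // value type
        class PointClass { public int X; public int Y; }     // reference type

        static void Main()
        {
            // p1 lives on the stack and disappears when Main returns.
            PointStruct p1 = new PointStruct { X = 1, Y = 2 };

            // The reference p2 lives on the stack, but the object itself
            // is allocated on the heap and reclaimed later by the GC.
            PointClass p2 = new PointClass { X = 3, Y = 4 };

            System.Console.WriteLine($"{p1.X} {p2.X}");
        }
    }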

Q: Why and when we use reflection?

A: We use reflection when we want to dynamically access object fields or call object methods. By ‘dynamically’ I mean based on data that is not known at compile time; for example, when we store database configuration data in a configuration file. The configuration file might say we need an OracleConnection or a SQLiteConnection. With reflection, we can read this configuration value and then dynamically instantiate the correct object.

Basically, any time an object type, property, field or method appears somewhere in text format, we’re going to need reflection to perform instantiation, access fields and properties, or execute a method call.
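A minimal sketch of that configuration-driven instantiation (using StringBuilder as a stand-in for the connection classes):

    using System;

    class ReflectionDemo
    {
        static void Main()
        {
            // Imagine this type name was read from a configuration file:
            string typeName = "System.Text.StringBuilder";

            // Resolve the type at runtime and instantiate it dynamically.
            Type type = Type.GetType(typeName);
            object instance = Activator.CreateInstance(type);

            Console.WriteLine(instance.GetType().FullName);
        }
    }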

Q: What does emit mean?

A: Emit means injecting a single CIL instruction into the body of a dynamically generated method.
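For example, here is a minimal sketch of emitting CIL into a DynamicMethod by hand:

    using System;
    using System.Reflection.Emit;

    class EmitDemo
    {
        static void Main()
        {
            // Build the equivalent of: int Add(int a, int b) { return a + b; }
            var method = new DynamicMethod("Add", typeof(int),
                new[] { typeof(int), typeof(int) });

            ILGenerator il = method.GetILGenerator();
            il.Emit(OpCodes.Ldarg_0);   // push the first argument
            il.Emit(OpCodes.Ldarg_1);   // push the second argument
            il.Emit(OpCodes.Add);       // add them
            il.Emit(OpCodes.Ret);       // return the result

            var add = (Func<int, int, int>)method.CreateDelegate(typeof(Func<int, int, int>));
            Console.WriteLine(add(2, 3));   // prints 5
        }
    }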

Q: What do you mean by baseline test?

A: A baseline test is a performance test of the un-optimized code, run to get a baseline value that you can compare your optimizations against.
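A sketch of what such a baseline measurement might look like with Stopwatch (the workload here is just an illustration):

    using System;
    using System.Diagnostics;

    class BaselineDemo
    {
        static void Main()
        {
            // Step 1: measure the naive implementation to get the baseline.
            var sw = Stopwatch.StartNew();
            string s = "";
            for (int i = 0; i < 10000; i++)
                s += "x";               // deliberately un-optimized
            sw.Stop();
            Console.WriteLine($"Baseline: {sw.ElapsedMilliseconds} ms");

            // Step 2: measure the optimized version (e.g. StringBuilder)
            // the same way and compare it against this number.
        }
    }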

 

About the speaker, Mark Farragher


Mark Farragher is a blogger, investor, serial entrepreneur, and the author of 10 successful IT courses in the Udemy marketplace. His IT career spans two decades, and he has worn many different hats over the years.

Mark started using C# and the .NET framework 15 years ago, and creates online courses that make complex C# programming topics easy to understand and accessible to anyone.

 

.NET Core is here! You've probably heard that it is lightweight in nature and that you can use the tools that make you happy. Most of us are going to let Visual Studio do the heavy lifting, and that's fine, but you can learn much about how things work under the hood if you put the IDE aside and work with .NET Core more directly.

And why stop there? .NET Core is cross platform, so in this webinar, our guest speaker Chris Gomez will do all of the development and testing on Ubuntu Linux.

This session is perfect for .NET veterans who are brand new to .NET Core and want to see what the brave new world looks and feels like. It's okay if you're unfamiliar with Linux but interested in having options available to you. We'll even learn how Microsoft Azure can make the heavy lifting of getting to production much easier.

Watch the webinar and learn:

  • What the brave new world coming with .NET Core looks like
  • The acquisition and use of .NET Core on a Linux VM untouched by a Visual Studio installation
  • How things work under the hood if you work with .NET Core more directly
  • Development tools such as Visual Studio Code and how you can contribute to the .NET Core and tools ecosystem

Who needs Visual Studio? A look at using .NET Core on Linux on Vimeo.

Download slides.

Video Content

  1.  What is .NET Core? (5:03)
  2.  What about .NET Framework? (12:37)
  3.  What is ASP.NET Core? (13:04)
  4.  Prerequisites & Acquiring .NET Core (14:49)
  5.  Development Experience (17:30)
  6.  Q&A (47:20)

Webinar Transcript

Introduction

Tony:

My name is Tony and I'll be your moderator today. I work as a software developer here at PostSharp and I'm excited to be hosting this session today. I'm pleased to introduce today's speaker, Chris Gomez. Chris is a senior software architect and today he's going to share how to get started with development and deployment on Linux using .NET Core.

Before I hand the mic over to Chris, I have a few housekeeping items to cover about this presentation. First, this webinar is brought to you by PostSharp. PostSharp is an extension that adds support for patterns to C# and Visual Basic, so if you are tired of repeating yourself in your code, you may want to check it out, as have the folks at Microsoft, Intel, Bank of America and others who have been using PostSharp in their projects to save development and maintenance time.

Customers typically cut down their codebase by around 15% by using our product, so feel free to go to our website, www.postsharp.net, for more details; you can get a free trial there and try out PostSharp. Next, today's webinar is being recorded and the recording will be available after the live session, so you will receive an email with the link to the recording.

And last, we'd love to hear from you during today's presentation. If you have any questions for our speaker, please feel free to send them through the questions window in the GoToWebinar application, which is at the bottom of your player, and we will be answering your questions at the end of the session.

And also if we have more questions than we can handle during the webinar we will make sure to follow up with your questions afterwards so you will have the answers along with the link to the recording. Without any further ado I'd like to kick things off by welcoming Chris Gomez. Chris, over to you.

Chris:

Great, thank you, Tony, and thank you PostSharp for having me today. Let me introduce myself real quick. My name is Chris Gomez and the first software I ever wrote was in BASIC on DOS 2.1 quite a while ago. Been writing software professionally for Windows since '93 and I'm currently a Microsoft MVP in Visual Studio tools and Development Technologies, which is a big mouthful for helping developers get their work done.

And then because I copied this slide from another slide, the next bullet point isn't important but I'm also a contributor to the Static Void podcast, which you can listen to at www.staticvoidpodcast.com. A couple of good friends of mine in the local Philadelphia region and I get together and we talk about topics that we think affect .NET developers at work, so we try to think a lot about trying to take the technologies we learn about at webinars and conferences, how do we actually take those to work. So that's what that podcast is all about.

Before we get started I want to let people know that today's slides are available on my blog at www.chrisgomez.com. The first post today is the link to the slides hosted out on Slideshare and a couple other interesting links, so don't forget to check that out if you think you're being overloaded by all the information and you wanna rush and copy the links that you see here, don't worry about it, just head over to my blog afterwards at chrisgomez.com and you can find these exact slides.

Today we're gonna talk very briefly about .NET Core, because what we really want to get into is how you acquire .NET Core on Linux and what the development experience is like. And then there are so many different ways to publish your work; even if you've spent your entire development experience in Linux, never touched Windows, never touched Visual Studio, there are different ways to deploy your finished .NET Core apps, and I'm gonna show you one of those at the end. Those are the things we're gonna cover. It's gonna be a pretty quick survey, there's a ton of content, and we should go ahead and get started.

What is .NET Core?

Let's talk real quick about what is .NET Core. .NET Core is a brand new platform. This is not the next version of .NET Framework. You're gonna hear me use the two terms today, .NET Core and .NET Framework, and I'll go into a better definition coming right up.

.NET Core is a cross-platform implementation of the same .NET concepts that we're all familiar with, a CLR, a Common Language Runtime that supports multiple languages, garbage collection and other .NET idioms and concepts that we're all used to, including similar base class libraries. It runs on Windows, several Linux distributions and the Mac, and currently there's support for 64-bit, you do get 32-bit support on Windows, so just in case you're wondering the Ubuntu Linux image you're gonna see today is a 64-bit image. I think that's pretty common now.

So .NET Core is biased towards being platform agnostic. There are two major workloads that I think you can use today: console applications, and then the ASP.NET Core model on top of it. We're gonna talk about the ASP.NET Core framework. That's what most people are gonna be doing when they actually settle down and say, "Hey, I wanna get some work done." You can build services using the console model, but today we're gonna focus on the ASP.NET Core model.

It's open source, so you can contribute to the code, you can contribute to the documentation and there's the link to get to the .NET repository. Today there's language support for C# and F#. You'll see in the documentation that Visual Basic is listed as future support, that's all the information that I have there.

A lot of people ask why do we need a second .NET? Well, it turns out we've already had many .NETs. And here's just a small sampling of some different .NETs that we have today. Besides the Mono implementation that was created very early in the .NET Framework's life, we have the traditional .NET Framework. It runs on Windows. It's gonna continue on and only run on Windows and it supports app models that you might be familiar with such as WPF, ASP.NET, Windows Forms and some other services like WCF, they're very popular.

.NET Core is currently, that I know of, being used in three places. It's being used for the ASP.NET Core workload, the Universal Windows Platform workload, which is listed here as UWP, and then it's not on this slide but I do believe that Xamarin Forms is taking advantage of UWP. And to give you an example of who else is taking advantage of it: the Unity folks, for the game engine. When you create a store app they use the Universal Windows Platform.

We've already had many .NETs but .NET Core, as shown in the previous slide, is trying to accomplish some new goals, such as achieving cross-platform, lightweight, being flexible.

Tony: Excuse me, Chris, may I ask a question here?

Chris: Yeah.

Tony:

Just to be sure: does .NET Core sit alongside .NET Framework? You've been talking about Mono as well. Is it really an extra framework alongside those, or does it use .NET Framework or Mono under the hood on the respective platforms?

Chris:

.NET Core is a brand new framework, a brand new platform. That's probably a better word, to call it a platform. It has its own set of base class libraries that live in the CoreFX repo on GitHub, and new app models are being built on top of it. Today we're gonna focus on ASP.NET Core, but your question is a great one, because early in the life of ASP.NET Core you could build on Linux for both Mono and the emerging .NET Core platform. Essentially they were using Mono as a stopgap: they could keep working on ASP.NET Core, which is an app model on top of a framework, and you could get started using it, testing it and providing feedback while they were finishing up .NET Core support for Linux. There was a period of time when both were available.

Now, I'm pretty sure today I haven't seen any evidence that you would be able to use today's ASP.NET Core on Mono and I don't know if there's plans to bring it back as a second Linux target. So today you would be doing ASP.NET Core on .NET Core for Linux. Does that help, Tony?

Tony: Yes, thank you.

Chris:

Great, thank you. It was a good question. Let's talk about what .NET Core doesn't do. It doesn't mind-meld into your machine the way that .NET Framework does. The .NET Framework comes with many operating systems, including many server OS versions, but when the time comes to upgrade it, you have to go and get an installer, and it goes and adds assemblies to the global assembly cache, and beginning with .NET 4 it's an in-place upgrade for every app running on the server.

Suppose you had written an ASP.NET app expecting .NET 4.0, but there are some other apps running on that same exact server. They don't even have to be ASP.NET apps; they can be Windows Services. If there's a need to upgrade .NET, everybody gets that upgrade.

Now, Microsoft worked really hard to try and make those upgrades as backwards compatible as possible but you can certainly understand that from a developer point of view that we might like to have a system or a platform that's a little bit more forgiving.

So there are two models for deploying .NET Core. There is the model where you include .NET Core right in your app, the self-contained model: as part of publishing, you basically say that you want the self-contained deployment option, and you literally will have a folder that you could pick up and port to another machine, stick it on another machine and it'll work just fine. A famous demo that I see the product team doing all the time is putting it on a USB key and walking it across the stage.

You can also install shared .NET Core frameworks side by side on the same machine, and different apps can target the ones they choose. So you have a ton of flexibility in the deployment models. Now, .NET Core also does not implement everything you remember from the .NET Framework. Some examples: there's no Code Access Security model, there's no WPF (WPF was built on DirectX, which is very Windows-specific), and there's no WCF. Most of the items that have been left out either had a strong tie to underlying Windows technologies, like WPF, or for now just may not be considered high priority. But customers have been providing feedback, and many things have made it onto the roadmap based on that feedback: things they still wanna use in .NET Core, possibly on other OSes like Linux.

What about .NET Framework?

What does that mean for the .NET Framework? Well, real quick, I just wanna let you know that .NET Framework 4.6.2 was released in August 2016 and .NET Framework 4.7 was released in April 2017, so it continues to be the framework for Windows. We're seeing new versions, one last month, and I think as long as we have Windows operating systems there's going to be support for new .NET Frameworks to support those features.

What is ASP.NET Core?

I talked about ASP.NET Core. What is that, and how does it relate to .NET Core? I talked about .NET Core, but now all of a sudden I'm talking about building web apps on it. Well, .NET Core I consider a platform. It runs on Windows, Linux and Mac, as you can see on the diagram at the bottom. ASP.NET Core is an open source and cross-platform framework. And since it's a coding framework to help you build web applications or web servers, it actually has the benefit that it runs on both .NET Framework and .NET Core.

So you can get started with ASP.NET Core now and still run your apps on Windows servers that are using .NET Framework, without ever installing .NET Core on them. Or you can begin to move towards .NET Core and possibly look at deploying on other operating systems like Linux.

It contains a few components to help you build sites, and one of those is ASP.NET Core MVC. If you are familiar with ASP.NET then you are probably familiar with ASP.NET MVC. This version helps you build your presentation layers, and it also helps you build web APIs. There has been a convergence of the two frameworks, MVC and Web API; they were already pretty similar, so you will use ASP.NET Core MVC whether you're building presentations for websites or whether you're building a back-end web API. You'll also see in the docs Entity Framework Core, a new cross-platform data access layer.

Acquiring .NET Core

So what do you need to get going? Here are some different options, things you can use. Today I am using VirtualBox running on Windows. You could also use Hyper-V and other virtual machine options as well; there's nothing exclusive about what I've chosen here in the options box. Or you can just use your Mac. You'll get a very similar experience if you're using a Linux distribution. You can also use Linux right on the bare metal if you've already got it running on a machine or laptop, and you can do all of the command-line stuff you're seeing today on Windows too. If you wanna experiment on any of the operating systems, you're gonna find the commands are very similar.

For some of the things we're using today I needed Node.js and NPM, those were prerequisites for Yeoman. If you don't know what Yeoman is, I'll get to it when we start the demos. I also installed a Yeoman generator for ASP.NET and I installed Bower because that's the client side package restore for Yeoman. If you aren't interested in Yeoman at all, you can actually skip this page just to get things started.

To acquire .NET Core you can go to the home for .NET at dot.net, and it'll take you to this page. Click through to the download section and you'll see the Linux installation guide, and you'll see there are several Linux distros supported: Red Hat, Ubuntu (which I'm using today), CentOS and so on. You've got a lot of options, and we'll see real briefly how that works.

When you acquire it, it's all done using commands. Unlike the .NET Core SDK for Windows, which has an installer, on the web page you actually just see a list of commands, and they bring down .NET Core for you. The whole process took me approximately three minutes, and that includes building the app and restoring NuGet packages.

We're gonna look at Visual Studio Code today. I'm gonna show you a little bit about the OmniSharp project, and we're just gonna take a brief look at JetBrains Rider. If you use JetBrains Rider on Linux, I noted in the slides here that there are two other prerequisites you have to go get: you need mono-complete and the MSBuild package for it to work.

Getting Visual Studio Code is a piece of cake. You could Google for this very easily. The web page will help you install it; I actually really did go click on that button right there for Ubuntu installs. Before you get started: if you open any kind of C# project, it's gonna recommend you get the C# extension. You're gonna wanna do that, because then you get some nice debugging support and IntelliSense.

Development Experience

Let's go take a look at what that experience is like. Hopefully we've gotten through most of the boring slides, right? Let's take a look at an Ubuntu distribution here. You can see that I've got it running in a virtual machine. I'm using Ubuntu 16.04, but there is more support than just that one; I don't want you to think that's the only one you can use.

The first thing we're gonna do is let's just say you've gone ahead and you followed all the instructions and you've installed the .NET Core on your machine and you're ready to go. Just to show you what that looks like, if you followed through, you saw on the slide I had the installation guide and you saw some screenshots but these are the actual commands, it's real simple, you're just adding some new libraries for apt-get and you can build your first Hello World app real quick.

Let's go ahead and do that, and in order to do that we're gonna use a utility that gets installed called dotnet. This dotnet command ends up being super important, and you can read the built-in help just by running dotnet --help. It's got some important commands, and this is actually an extensible model; you can add your own commands to it as well.

I'm gonna say dotnet new. Actually what I'm first gonna do is let me make a directory for this and let's just call it consolenet and then cd to it and we'll say dotnet new and what I'm saying here is that I want a console app. A console app is a real simple Hello World style app, it doesn't do much more than just kind of get your feet wet and show you how it works.

So how did I know to do all that? Well, I showed you that dotnet --help brought up some general commands, but you can also dig in deeper. If I say dotnet new --help, what I find out is: okay, I can pass it a template that I want to be instantiated, and there are even some options where I wouldn't have had to make the folder first.

Anyways, I have gone ahead and shifted into this folder. There's not much here, just a .csproj file, which might not look super familiar if you're used to .csproj files that looked more like this. With the new .NET Core model, MSBuild and the .NET Core team have worked to make a .csproj file have much cleaner defaults, and be human readable and human editable. It doesn't mean you have to spend a lot of time working in here, but certainly it's nice to know that maybe the days of all those ugly GUIDs are gone.

This is the program that we're gonna run. If you say dotnet new console, you're really just getting like a demo app. And the first thing you would do if you were in the Windows world is you'd have to probably restore NuGet packages.
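For reference, the Program.cs that the console template generates looks roughly like this (the namespace matches the folder name):

    using System;

    namespace consolenet
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Hello World!");
            }
        }
    }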

Here from the command line we continue to use dotnet: we say dotnet restore, and it's gonna go ahead and restore the packages for this project; it didn't take very long. Then we can say dotnet run, and all this app does is write out Hello World to the console, so it'll do that after it compiles. That's the basic getting started, but if you wanna see more options, it turns out that if I say dotnet new ... there are a lot of different other template types. You can have it create you a solution file, a NuGet config, a web config, and there are even other templates besides that.

If I say dotnet new mvc, this time, instead of creating the folder first, I'm just gonna call this New MVC App. I'm using -o, which is output, telling it: don't put it here in my home folder on my Linux machine, go put it in New MVC App. And it gets to work and creates a new ASP.NET Core web app; this is a different template. Let's click back into here and just take a look at what we get.

This might look a little bit more familiar. There's a controllers folder, there's a views folder. In the .NET Core world we put our static files in the wwwroot folder. Now, this is configurable, but by default it's the wwwroot folder, and it's not like before, where you were just kind of putting your static files somewhere in your project. They might go in a scripts folder, they might go right in the root, they were mixed in with everything. Now they're isolated, and the static files that you need served have to go in there.

And then you get some other basic things to start so you could just begin working, I mean you could open up just command line tools right here but I think this is probably a good opportunity to shift over to Visual Studio Code. So I had installed Visual Studio Code and you can run Visual Studio Code from any folder that you're in just by saying code.

So it's gonna open up this brand new MVC app for us in Visual Studio Code. Now, Visual Studio Code's not an IDE. It's a lightweight editor. It's not intending to replace Visual Studio but what's great about it is it's cross platform. You can use this on a Windows machine, you can use it back on a Mac, maybe you're developing in a Mac client.

And when you open a C# project, it's gonna notice a few things. For example, just while I've been sitting here talking, it noticed that we haven't run dotnet restore. I did that on purpose, because it's gonna bring that up: just like when you open a Visual Studio project where you haven't run NuGet package restore yet and haven't built, you get all the red squigglies saying, "Oh, I don't know what all these things are."

This runs dotnet restore for you. And if you wanna debug in Visual Studio Code, it needs a little bit more information. If you go ahead and click yes here, it's gonna add a little folder to your project called .vscode, and these JSON files just describe simple tasks for building. You can add to these tasks, you can add testing support, and this also gives Visual Studio Code what it needs in order to attach a debugger.

Yeah, I said that right: here in Visual Studio Code we're going to attach a debugger, it's pretty cool. What I'm gonna do is go to the home controller, and if I come to the left side, here's a debug button. Because of that launch.json that was created for me, I've already got this set up: I can say .NET Core Launch, click the play button just as if I'm in Visual Studio, and it's gonna build and run this. Won't be planning to build too many more after this though. Got some ready-made projects for us.

Alright. All .NET Core apps, including web applications are really console applications and we can see all the output coming through the debug console. I didn't make any changes to the default template so it went ahead and used localhost:5000, which it's connecting to right now and getting the first page. And here we go, there it is.

Really this was just a file, new project and I called it New MVC App so they put this up here but I wanna show you that you can debug into this. So if I set a breakpoint here, just the way I'm used to, just sort of click in the margin, get a little red dot and then go ahead and go to that controller. Instead of just immediately switching to the controller we're gonna hit the breakpoint.

Now, I don't have a whole lot in here to look at but I certainly do have these locals so I've got perhaps a poor man's variable window over here, I could interrogate model state, I could add my own watches, I can look at the call stack. For example, here's the model state if you're familiar with MVC. Not a whole lot going on here in this simple app.

That is some debugging support that you get just with Visual Studio Code, working with one project and this simple starter home controller. Now, let's take a quick peek at how this looks elsewhere. I talked about Rider. I have Rider running already, and here is just a simple app that was built using Rider. In this particular case we've got a web API app, and you can see that it's a more traditional solution with two projects, and we've got a little service library that the PeopleApi depends on.

You might be wondering, well, wait a second, back in Visual Studio Code, how would you be able to do that? Well, it turns out Visual Studio Code will go ahead and pick up a solution file if it finds one, and it'll use that to set up an experience with multiple projects sitting in one window. You're not seeing that here because, in this particular case, what you may not have noticed yet is that there's no solution file. That's a Visual Studio artifact, not necessary for .NET Core, and so it's gonna let me work on this new MVC app self-contained in this .csproj, a much simpler-looking .csproj I might add. Here are the NuGet package references and the fact that I'm targeting .NET Core 1.1, and that's all it needs.

Now, this talk is about who needs Visual Studio, so how did I create this? You could use Rider to create a solution with multiple projects, but it turns out that all-powerful dotnet command is here to help us as well. One of the dotnet new templates that I think you saw, if we look at the help, was sln. And that seems kind of strange: what would I need with sln support?

It turns out that just to replicate what you saw there without actually going through all the motions, what I'm doing here is I'm saying I wanna create a new solution project and go ahead and put it in a folder called Solution just so that we remember where it is, and it says, "Yeah, sure, I did that for you", so we can switch over to it and take a look and all that's in here is just a solution file. And in fact it's pretty barebones.

Now, here's where you might be thinking, "Oh, great, so I'm gonna add multiple projects to this, do I have to learn this solution file format?" And it turns out you don't because dotnet new is here to help you again. Dotnet new has a web API template, earlier I used the MVC template and if I wanted to replicate what I just showed you with that simple PeopleApi, I could do it this way. I could say I want you to create a new project called PeopleApi, I want you to put it in the PeopleApi folder, under the folder that I'm in and it does that and we can even see it already, it's already there. This is kind of the way Visual Studio sets up things, right?

And then the problem is, if we take a look at the solution again, nothing's changed here, so how would it know? Well, there's a command for that too. dotnet has sln commands to help you manage your solution files in a folder, and if I say dotnet sln add and tell it, "What I need you to do is go dive into that PeopleApi folder and find the .csproj in there," then it says, "Okay, I'll go ahead and add it to the solution." And now the solution's getting too big to see in the terminal, unless I make the window bigger.

So I'm gonna go ahead and open Code. And when Visual Studio Code opens, you're gonna see that this time it says, "Well, wait a second, I see there's a solution file here," and the dotnet sln add command added everything the solution file needed for this project; you can even remove projects later. You get this command-line version of Visual Studio's add/remove project support, and here is that PeopleApi. And I could come back out to the terminal; I think you saw in that example there was a class library. Well, there's a class library template too. If you're familiar with the class library template in Visual Studio, it just provides you a class; it's not trying to be ASP.NET MVC or anything like that. But what if you have some kind of service library that your API calls out to?

Then what I'm doing here is saying, "Let's make that project; go ahead and put it in that subfolder." It did it, and we can see we've got PeopleApi and that simple service library, so I'm gonna add that to the solution with dotnet sln add and tell it, "Dive into the service library folder and find the .csproj file."

Once it does that, if I come back over here to Visual Studio Code (I need the Visual Studio Code we were working in, though), the solution now knows about both projects. Both of these projects are here, and when I say dotnet build from in here, it picks up that you have a solution file, builds dependencies, and keeps track of all of that for you. I can do that out of the terminal, or I can even do it from within here: you can bring up an integrated terminal from within Visual Studio Code. If you ever forget what the shortcut keys are, like I have, you can do Ctrl+Shift+P to bring up this command palette and say, well, I wanna open a new terminal.

So right here inside Visual Studio Code, that's not what I wanted. I wanted ... there's built-in terminal. Looks like that's what I get for not remembering the shortcut keys, so what I'm gonna do is I'm gonna say I want .NET Core launch JSON and then it'll go ahead and build this. That's not what's super important here, I wanted to show that you can have multiple projects in a solution and the dotnet command is your pathway to managing projects and solutions if you don't have a tool like Rider or Visual Studio.

Now, the point of this talk is not to tell you not to use sharp tools, tools that do things for you. Part of it is to show you that this is all being built at a command-line level using base tools that can be scripted, that might be used in build scenarios or automated test scenarios. And, eventually, those same tools can be implemented in IDEs, so that when you do something in an IDE, it does the exact same thing as if you did it from the command line, because it's so frustrating when that's not the case.

Publishing and Deployment Examples

Let's finish things off by taking a look at how deployment works. You can create new projects with the dotnet command, you can bring projects directly over from Visual Studio, I also showed that you can use JetBrains Rider, and then there's the Yeoman generator, where you can say yo aspnet and you get a console experience to help you create your projects.

Now, you have a lot of options if you want to deploy to the cloud with ASP.NET Core. You could use Amazon Web Services; they even have blog posts showing how you can deploy an ASP.NET Core application using Elastic Beanstalk, which is very similar to Azure Web Apps. They're both essentially about handing over your code and forgetting about the infrastructure and the scaling.

You can use .NET Core in AWS Lambda projects. You can use it with the Google Cloud platform. There's a demonstration Scott Hanselman did where he tried out the Google Cloud platform with ASP.NET Core. And on the Google Cloud platform blog they talked about using containers and Kubernetes all in their cloud environment.

Speaking of containers: if you know about Docker and containerization and you wanna try this out, you don't have to get tied to any platform-as-a-service offering, whether it's Azure Web Apps or Elastic Beanstalk or something on Google Cloud. You could say, "Hey, I'm gonna use containers on my own infrastructure, I'll use them on VMs," and all those clouds I just mentioned have options for hosting containers in a more platform-as-a-service model too.

Let's take a look at how quickly you can take your .NET Core project, create Docker image and even get it published up to the cloud, and, again, you could use any cloud you want in this case. Let's see what that looks like.

What I have, let me fix this terminal up a little bit here, I have another solution called MVC2Docker, and it's based on that MVC template that you saw at first. I'm gonna go ahead, instead of trying to find the Visual Studio that has this running we'll just open another one. This is the MVC template with just a minor change, and you can see the change right here, the Docker file.

What's a Docker file? If you're unfamiliar with Docker, this is like the directive for how to build an image that'll become a self-contained Docker container, that the bits never change, they're immutable and you can basically send those off to either your own infrastructure, your own Docker hosting or use any Docker cloud provider here so it doesn't really matter.

It's a pretty simple format, it's pretty readable even. Microsoft has images and they have a .NET image, and I'm just saying here, go ahead and take the latest. You're seeing Visual Studio Code is giving me some IntelliSense on what these different commands do. Now, I have to tell it where do I wanna get these files from. Well, I'm gonna go out to the terminal in a second and you're gonna see how to publish.

Publishing is the act of using the dotnet command, once again, to publish out files into a folder that are ready to pick up and host somewhere. And in this case, we're gonna say, "Well, I want you to take those files and copy them into the container's App folder."

Now, you probably remember that our MVC app by default runs on port 5000, so we need to have the container expose that port, and there's an environment variable that I want set to make sure that 5000 is open and communicating with ASP.NET Core. And then I gotta tell it, "Well, now what do you run to get things started?"

When you've published your app you don't need to use dotnet run, like I did in the very first demonstration. You can say dotnet and your second command argument is the DLL that you've created, so that's really what I'm saying here, is run dotnet MVC2Docker.dll.

In this MVC2Docker folder, the first thing I have to do in order to publish this is say dotnet publish, and once again it's another command with all kinds of help. You can pick the framework, you can pick the runtime, you can pick where you want it to be output. But if you do nothing more than say dotnet publish, and go ahead and do a release build, which is what I'm saying here with the -c option, then it's gonna go to work, and it ends up putting the output here in the bin release folder, and then it takes the name of your framework and puts it in a publish folder.

So I already knew that that's where that was going to go so it was pretty straightforward for me to just copy that right into the Docker file. Now, I've already created the Docker image. It's time for me to start closing some of these Visual Studio Code instances, I think. Let's see if I can get control of my virtual machine here.

Sorry about this, it looks like combining the streaming with having a few too many things open here ... Okay. I think we're getting better. Yeah, let's go ahead and exit. Alright. Yeah, and the webpage is down so I expect this not to be building. Just for the sake of time, I've already gone ahead and built a Docker image, so let's take a look, if I say, docker run and what I'm gonna do just to kind of prove to you that I'm not using the same exact thing I did before is ... Okay. Do I already have it running? I suppose that's possible. Oh, look at that.

Okay, so I already have this Docker image running, I actually started it twice, it's called Azure Web App on Linux. It was created using the Docker file you saw and so that means that right now on my local machine that same exact app with just a minor change, it's currently running except I've mapped port 5000 to port 8080 so 5000 isn't gonna work anymore because we've turned off that web app, we're not using it so if I go ahead and say localhost:8080 then that's gonna come up.

And I've made just a minor change to this app, I didn't do anything drastic. I got rid of the carousel switching and I changed it to say that it can run in Docker containers just so we can see, like yeah, that's a different app, it's the MVC2Docker app, it's not the one you saw earlier.

I went ahead and published that out to my own public repository, so I published this Azure web app on Linux. Why? Because I'm gonna use a preview feature in Azure App Service called Azure Web Apps on Linux, and it was very, very simple for me to basically tell Azure Web Apps, as part of the configuration, "Where's the image?" It's out on Docker Hub and it's called azurespaceshot/azurewebapponlinux. Did that, and you can see Azure took it and published it out to a URL of my choosing. The whole process took about five minutes, and here is the Docker container running, using the preview service, Azure Web Apps using Linux containers, running my slightly changed MVC starter app.

Alright. .NET Core is lightweight, it's less intrusive than the .NET Framework, and it's cross-platform. You acquire .NET Core on Ubuntu Linux using familiar tools like apt-get. For the development experience, you can use Visual Studio Code, which we took a look at, but you saw that there are other tools as well, and I showed that there is a publishing and deployment process.

I made a Docker container. I could've used any cloud service to deploy it, or even my own infrastructure. It turns out 45 minutes is just not a lot of time to barrel through everything you might wanna know about .NET and its future, .NET Core, and .NET Core on Azure or any other provider. I want you to take a look at these videos: there's a great video about the history of .NET, where Kathleen Dollard just does a wonderful job; Jon Galloway talks about using it in Azure; Jeff Fritz has a wonderful advanced .NET Core talk that really goes in depth, it's a nice deep dive; and the last talk I wanted to show you is on deploying ASP.NET Core applications using Docker containers, which goes much more in depth than what you saw me do. And if you're not familiar with Docker at all, don't be afraid, it's super easy to get started, and it really frees you up to go to whatever cloud, whatever infrastructure you'd like. I think that's about the time that we have, Tony.

Tony:

Yeah, thank you very much, Chris. It was an interesting presentation. I would just ask you about one thing you didn't mention: .NET Native. Do you have any information about this project and where it's headed?

Chris:

Yeah, .NET Native, that's a great question. And I think a lot of developers ask about it, because a couple of Build conferences ago the product team showed .NET Core being compiled with .NET Native. Now, that was a Hello World sample, and this is what I know about .NET Native's current state: it is being used when you build applications for the Windows Store. When you submit your application, it gets compiled using .NET Native to native code, and that is what is actually put on folks' computers.

There is some SDK information that you can use to get started with .NET Native, but the only workload that I know of right now is the Universal Windows Platform. I haven't seen any documentation or any update on .NET Native for .NET Core projects, ASP.NET Core specifically, or, more importantly, what I think we all wanna see: on Linux. I mean, we've seen the demo, but it's been a little while, and so I just don't have any more on that. I would love to hear it if anybody said, "No, you just missed it." And don't forget, Build is next week. No idea what's gonna be announced there, I'm not pretending that you should tune in to hear an answer here, but it is next week, and a lot of times some surprises are saved until then.

Tony:

Okay, thank you. Now we can go ahead and answer some questions from our attendees. I would begin with the questions about when the slides will be available, and whether we will provide the commands that were used during the presentation, so could you repeat this information, please, Chris?

Chris:

Yeah, that's a great question. On my blog, chrisgomez.com, you can go ahead and reach the slides today; they're hosted on SlideShare. The second part of that question was about recreating some of these demos. That's a good question, and I think it probably demands some roundup blog posts putting together the documentation and support that I used to make this happen: there's the dotnet command documentation, and there's Docker getting-started documentation.

We only got to scratch the surface today. The time went by really, really fast, but I can understand that if you want to recreate this, it makes good sense for there to be a blog series on it. The slides are available now, and I will get to work on blog posts to help you replicate what we did with the dotnet command and with the Docker commands.

Tony:

Yeah, that would be great and also don't forget that there will be the recording available so once this live session finishes we will process the recording and all registered people will get the link to the recording along with the questions answered.

 

Q & A

Q: Is it possible to make a traditional Windows application and/or console application, and does it make sense to replace the .NET Framework with .NET Core?

A: Right, that's a good question. Today, if you're using .NET Framework, first of all there's no rush to .NET Core. What you wanna find is the feature of .NET Core that you're waiting for, the one that makes you say, "Why, I really could use this."

For one thing it's the cross-platform support and I would just say that the number one workload I'm seeing people head towards is the ASP.NET Core stuff and using ASP.NET Core MVC and Entity Framework Core and things like that.

If you've got an existing application, you have to make that decision on migration, because you don't just simply pick up your code, move it over to a .NET Core project and build it. You got a little bit of a hint when you saw the .NET Core code: there's no Global.asax anymore, if you're used to that; there's a Startup.cs and a Program.cs file. It's a slightly different model, but the idea is to keep the concepts intact. You might be missing features that you're used to in the Windows console app you described, so double-check and see if they're in the .NET Core framework or on the roadmap. Otherwise, you have both of these options for the foreseeable future.

Q: Is it possible to build daemons with .NET Core?

A: Well, as I said, you can build console applications, and you could register one as a daemon. I can't point you to a tutorial or post that I've seen on that right now. It's an interesting idea, and it might be so simple that it would just work; I was just trying to think through my research whether I saw anybody specifically setting out to do that. I don't see why it wouldn't be possible, though, because, again, you have two major choices in .NET Core: console applications, which is just plain old stuff running in the terminal, or the ASP.NET workload, which sets up a server and hosts traditionally built web applications.
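As a rough sketch, a console app can at least be made to run until it's told to stop, which is the shape a daemon would take (how you register it with the init system is a separate question):

    using System;
    using System.Threading;

    class DaemonSketch
    {
        static void Main()
        {
            var exit = new ManualResetEvent(false);

            // Trap Ctrl+C so we can shut down cleanly instead of being killed.
            Console.CancelKeyPress += (sender, e) =>
            {
                e.Cancel = true;
                exit.Set();
            };

            Console.WriteLine("Service running; press Ctrl+C to stop.");
            exit.WaitOne();     // a real daemon would do its work here
            Console.WriteLine("Shutting down.");
        }
    }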

Q: What about external DLL files, how can we add them to our projects?

A: Okay, so if you're working purely in Linux, let's see, I think I should be able to go ahead and come back here. I'll try and show this, but I'll answer the question: the .csproj file is still the place to add package references. It's just that if we're used to Visual Studio, we would just add a reference and be done with it. We wouldn't really care that much about what was actually happening under the hood.

But if I go back to this solution project that I created... actually, let me do a different one. I'm just gonna show you in the .csproj real quick where that reference got added. In this particular case I know that it's a project reference, but that's not the only way to add references. The solution knows about the two projects, as I already showed, but this PeopleApi needs to use the service library: in this particular example the values controller is modified so that it uses the service library, creating an instance of a class and calling a method on it.

In order for it to know where to reach out for that, there are different reference types: a ProjectReference Include instead of a PackageReference. What we'd need to look up here, and I didn't demonstrate, is that in the new .csproj format you have project references, package references and other types of references, and you can also just take a bare DLL and reference it.

One sneaky way to find out how to do things, quite frankly, because I think most of us are probably Windows developers and have Visual Studio, is to go try something in Visual Studio in a .NET Core project and then check out what it did to the .csproj file. That is what led me to further research: okay, now that I know it added this item group, let me go find out what item groups and project references are. It's a reverse-engineering way to jumpstart your research, but it definitely works.

Q: To my knowledge, .NET Core has project.json files for projects. Please correct me if I'm wrong. Which .NET Core version has .csproj file support?

A: Good question, because the project.json file was going to be the way forward; it was gonna be the end of .csproj, right? You're absolutely right, all through the betas there was a different JSON-based format. There was a decision made, I believe close to a year ago now, that trying to force project.json onto .NET Core was not going to work. The reason why is that .NET Core was gaining mind share in other workloads, not just ASP.NET. The Xamarin people got interested, the Unity people got interested, the regular .NET team got interested. That's why today we have the CoreFX repo, which is traditionally the .NET Framework team's kind of concern; although, remember, it's not the .NET Framework, that is the Windows framework.

Then there is the ASP.NET repo, and that product team is building on top of .NET Core. They decided that, instead of making everybody move off of MSBuild, throwing all of that away and basically reimplementing MSBuild so it could support all these other different types of projects, ASP.NET Core went back to MSBuild. This is the final .csproj.

If you take your .NET Core 1.0 app, which is still project.json-based, and you bring it to .NET Core 1.1, there is a migration: the dotnet command will migrate your project.json to .csproj for you. I will tell you I've heard that your mileage may vary on that. I haven't moved over so many dozens of projects that I can tell you, oh, this is the pitfall. And, again, a lot of mine are demonstration projects, so those were a piece of cake.

The format going forward is .csproj. This is what it looks like. So if you're used to looking at a .csproj file and seeing tons of GUIDs and all kinds of nonsense that you don't understand, there is an effort going on to make this readable, partially so it can be hand-edited and partially just so it's not so frustrating. You don't see all of the source files in here anymore; they're assumed to be in your project by default because they're in the folder.

Now, what if you don't want a file in here? There is an exclude option, going the opposite direction: exclude what you don't want, and you don't have to list every file. Remember how you would create a new class, and that would be a .csproj change, and you'd have to go check that into your source control, and guess what? Someone else created a new class too, and that was the worst merge conflict, happened all the time, most frustrating thing in the world. We're getting away from that now, which is kind of nice. But yes, the answer is that .csproj is the way forward, and project.json is .NET Core 1.0 and it's not coming back.

Q: What is Yeoman about?

A: Yeah, I didn't get to show Yeoman. Let's go take a look at what generating a Yeoman project is like. Yeoman is an open source project generator, the web's scaffolding tool for modern apps. It's not just there to support ASP.NET, but a lot of the community felt: wouldn't it be nice if you could use Yeoman, and a generator for ASP.NET, to build projects?

You're gonna see this looks very similar to dotnet new, extremely similar. It's even got a lot of the same APIs, except this is an interactive console. I'm just gonna say: go ahead and make a basic web application, and, "Oh, I could use Bootstrap or Semantic UI, let's do Bootstrap." What do I wanna call the app? I can change the name of it here, and it's going to go ahead, create all the files and use Bower to install the client-side libraries, and you've got another way to build apps.

Now, pros and cons to Yeoman. The pro is it's open source, we can all go work on it however I can tell you that the .NET templating is now also open source so there's an interesting pivot going on. This project is still going, you can see it's a little behind though, they're always trying to catch up to what the .NET product team is doing with the dotnet new command, and you and I are able to contribute to the dotnet new templates today.

We're kind of in an interesting space, wondering whether generator-aspnet should continue; I'm not part of that project, and they may wanna continue on as an alternative. At the same time, you can contribute to the dotnet new templates. But this is just another way to get things started: I can dotnet restore and dotnet run, and this project is gonna work just as if I had used dotnet new.

Q: Is Visual Studio Code open source?

A: Visual Studio Code is open source.  You can find the project here: https://github.com/Microsoft/vscode.

Q: How did "dotnet restore" know which packages to restore?

A: When you install the dotnet tool following the instructions on the .NET Core download site (https://www.microsoft.com/net/download/core), a default NuGet.Config file is created with a default feed. You can find this on Ubuntu in the ~/.nuget/NuGet folder. This can be overridden in your projects if you include a NuGet.Config file. For more information, read about dotnet restore in the documentation at: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-restore

Q: Does ASP.NET Core run on ARM?

A: I haven’t personally investigated ARM yet. However, you can see the daily builds for .NET Core on various platforms here: https://github.com/dotnet/core-setup#daily-builds. There you will find builds for ARM versions of Windows and Linux.

An interesting source of information is a recent podcast with Scott Hanselman and his guest Adi Avivi. In the show, they discuss developing RavenDB on .NET Core for the Raspberry Pi: https://www.hanselminutes.com/579/ravendb-the-open-source-nosql-database-for-net-with-adi-avivi

Q: Is there a different NuGet website for Core, or is it all in the same place with the .NET packages?

A: NuGet as a product has evolved to support the needs of .NET Framework and .NET Core.  If you use dotnet new or Visual Studio 2017 to create a new project today, the feed location is https://api.nuget.org/v3/index.json for both. 

Q: Can you provide us the commands that you used in this presentation?

A: Unfortunately, it would take a few posts to recap everything used here to download and install .NET Core and to use the dotnet tool for its various features.  We also quickly published a Docker image and I used one published previously to Docker Hub for the Azure App Service on Linux.  Some great resources to start are:

Step by step instructions to install the .NET Core SDK on Ubuntu Linux: https://www.microsoft.com/net/core#linuxubuntu

dotnet command (https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet) - This documents the various features of the dotnet command line tool.

An overview of the process to create your own Docker images with your application: https://hajekj.net/2016/12/25/building-custom-docker-images-for-use-in-app-service-on-linux/

Using your docker image with Azure App Service for Linux: https://docs.microsoft.com/en-us/azure/app-service-web/app-service-linux-using-custom-docker-image

Q: Does it make sense to use IIS to host an ASP.NET Core application?

A: If you are going to run your ASP.NET Core application on a Windows Server, it absolutely makes sense. In fact, today it is required to run a full-featured web server as a reverse proxy. .NET Core ships with a lightweight server named Kestrel. Kestrel has been tuned as a high-performance web server built with .NET. However, it has not, to this point, been hardened to be a public-facing server.

In the Linux world, this had already become common. The idea was that programming stacks would ship with small, lightweight, fast servers, but that you would use an application server in front to guard them and to configure access to them from the outside world.

Please carefully read the section called Set Up A Reverse Proxy in the following documentation discussing how to host ASP.NET Core applications today: https://docs.microsoft.com/en-us/aspnet/core/publishing/

You should also read When to use Kestrel with a reverse proxy in the documentation: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel#when-to-use-kestrel-with-a-reverse-proxy.

Finally, to host on IIS, you will need to learn about the ASP.NET Core Module on IIS: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/aspnet-core-module
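For context, the place where Kestrel gets wired up is the project's Program.cs. The ASP.NET Core 1.x template produces roughly the following (Startup here is the template's own startup class):

    using System.IO;
    using Microsoft.AspNetCore.Hosting;

    public class Program
    {
        public static void Main(string[] args)
        {
            // Kestrel serves the requests; in production a reverse proxy
            // (IIS, nginx, Apache) should sit in front of it.
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()   // enables the IIS reverse-proxy scenario
                .UseStartup<Startup>()
                .Build();

            host.Run();
        }
    }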

Q: I noticed that Dependency Injection and IoC are also only minimally supported.

A: ASP.NET Core supports a minimal dependency injection model without any external dependencies. Some developers prefer minimizing dependencies and don’t need more than this minimal model. However, the system is not closed, and other dependency injection systems may be used.

The documentation discusses the built-in system at length and provides an example of using Autofac to replace it in the document called Introduction to Dependency Injection in ASP.NET Core: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection
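As a small sketch of the built-in container (IGreeter and Greeter are hypothetical, and this assumes the Microsoft.AspNetCore.Mvc and Microsoft.Extensions.DependencyInjection packages are referenced):

    using Microsoft.Extensions.DependencyInjection;

    public interface IGreeter
    {
        string Greet(string name);
    }

    public class Greeter : IGreeter
    {
        public string Greet(string name)
        {
            return "Hello, " + name;
        }
    }

    public class Startup
    {
        // Called by the ASP.NET Core host at startup.
        public void ConfigureServices(IServiceCollection services)
        {
            // Register the service; any controller that declares an
            // IGreeter constructor parameter will receive a Greeter.
            services.AddTransient<IGreeter, Greeter>();
            services.AddMvc();
        }
    }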

Q: Did you also try VS for Mac? Can I also use it to build apps with .NET Core?

A: I do not have a Mac and have not tried Visual Studio for Mac.  Visual Studio for Mac was made generally available during the BUILD conference.  Among other features, you can begin creating .NET Core and ASP.NET Core applications with the new IDE.

Q: I have some existing C# libraries that I would like to try on .NET Core for possible use on Linux. Any advice?

A: Research .NET Standard.  There is some common confusion about the difference between .NET Standard and .NET Core.  .NET Standard is a specification.  It defines the APIs that are available at a certain release level.

.NET Standard accomplishes this by defining the intersection of APIs available in older platforms (some that existed before anyone had the idea for .NET Standard) and paves the way for newer .NET platforms (which include newer versions of .NET Core, .NET Framework, and perhaps new “.NET”s no one has thought of yet) to embrace common sets of APIs.

If you’ve written .NET libraries intended to work on multiple platforms, you may be familiar with Portable Class Libraries.  .NET Standard intends to remedy the problems with PCLs and is the present and future of .NET library compatibility. 

This post on the .NET Blog introduces .NET Standard and links to some valuable living information such as a FAQ and the compatibility matrix:  https://blogs.msdn.microsoft.com/dotnet/2016/09/26/introducing-net-standard/

For documentation on .NET Standard, visit: https://docs.microsoft.com/en-us/dotnet/standard/library

Q: All the things we've done so far are already available in .NET Framework in a very elegant and easy manner. Apart from the Cross platform, why should we go for .NET Core?

A: You are asking a very good question that your team must answer for itself.  If you are satisfied with your solution, .NET Framework and its support on Windows Server are not ending.  In fact, I would not expect any kind of announcement that .NET Framework will be superseded.  .NET Framework is the desktop platform for .NET on Windows client and server.  When new Windows features ship, a new .NET Framework often follows.

You may want to research .NET Standard and watch the evolution of that space.  Over time, you could responsibly make sure your internal library code supports a .NET Standard version.  At some point, you could consider a trial run of .NET Core and reuse your internal library investment because .NET Standard enables compatibility across .NET platforms (in this example, between .NET Framework and .NET Core).

The other major feature you may want to keep an eye on and test for yourself is performance.  Besides cross-platform support, ASP.NET Core aims to be a high-performance framework.  If you proved that significant and necessary performance gains could come from switching, that could be a good reason for doing so.

For each case I have discussed, there should be no immediate urgency on your part.  Your question implies you are very satisfied with your solution.  I would just keep an eye on .NET Standard and see if it makes sense to eventually consider making your libraries implement the standard for future flexibility.

Q: Do we have NuGet package support in Linux?

A: When you use .NET Core or ASP.NET Core, you are retrieving packages from NuGet even for base class library items such as .NET Core itself.  You can create NuGet packages and post them either on internal feeds or on the public NuGet feed, and target them in your Linux projects.  For example, commonly used NuGet packages such as JSON.NET implement the .NET Standard and are part of the ASP.NET templates for a new project that you might create in Linux.

.NET Core is made up of NuGet packages.  This is a departure from .NET Framework, which was obtained and installed separately.

For more information on NuGet packages and their use in .NET Core, see the article Packages, Metapackages, and Frameworks in the documentation:
https://docs.microsoft.com/en-us/dotnet/core/packages

Q: Is there CMake support for C#? I think I read something about that a while ago...

A: I’m sorry.  I’m pretty unfamiliar with using CMake.  I am familiar with the tool, but have no practical experience with it.  However, I can tell you that CMake is used on .NET Core itself.  For example, the CoreCLR for .NET Core has CMake as a build prerequisite, so if you wanted to contribute to this repository, you would be using CMake:  https://github.com/dotnet/coreclr

Q: Is there a way to execute all the TESTS on the solution using VS Code?

A: The best answer I have for this is to learn about Tasks in Visual Studio Code.  Tasks allow you to set up command line tools to be executed within Visual Studio Code.  You can learn more about Tasks here:  https://code.visualstudio.com/Docs/editor/tasks

Next, you would combine Visual Studio Code tasks with dotnet test.  The dotnet test command will execute a test runner against a compiled .dll.  It is like “dotnet run” but for tests.  MSTest, NUnit, and xUnit are all supported test frameworks.  You can learn more about dotnet test here:  https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-test

But don’t stop there.  dotnet-watch is a command extension for the dotnet command; the watch command isn’t included by default.  You add a reference to your project, and then the command “dotnet watch” will run a command of your choosing whenever files change in the project.  One of the things you could do is automate unit testing every time a file changes by putting all of this together.

Scott Hanselman demonstrated bringing this all together in the following blog post: https://www.hanselman.com/blog/UsingDotnetWatchTestForContinuousTestingWithNETCoreAndXUnitnet.aspx
You can learn more at this documentation article called Developing ASP.NET Core apps using dotnet watch:  https://docs.microsoft.com/en-us/aspnet/core/tutorials/dotnet-watch
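For example, a minimal xUnit test like the one below (Calculator is a made-up class for illustration) can be run once with “dotnet test”, or re-run automatically with “dotnet watch test” after adding the watcher reference mentioned above:

using Xunit;

public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    // Run once from a terminal or VS Code task:  dotnet test
    // Re-run automatically on file changes:      dotnet watch test
    [Fact]
    public void Add_ReturnsTheSumOfItsOperands()
    {
        Assert.Equal(5, Calculator.Add(2, 3));
    }
}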

Q: Can we run Package Manager Console for the NuGet packages?

A: You can continue to use Package Manager Console in Visual Studio in Windows.  On Linux you will be using the command line tools such as “dotnet add package” to add a package to your project from the command line.  You may also edit the project’s .csproj file directly.  The new format is so streamlined it will not take long to understand.

This article discusses the changeover from .xproj to .csproj as .NET Core has matured: https://docs.microsoft.com/en-us/dotnet/core/tools/project-json-to-csproj

Q: Does .NET Core have the same libraries that already existed in ASP.NET?  For example: does System.Web.Security exist in .NET Core?

A: ASP.NET Core does not implement everything that you found in ASP.NET, just as .NET Core does not attempt to implement everything found in .NET Framework.  Examples of items left out were some that were very Windows-specific in nature, items that customers weren’t using, or items that would benefit from some redesign.

For example, when considering ASP.NET MVC, you will find that in ASP.NET Core this is called ASP.NET Core MVC and is a “concept compatible” framework.  You cannot simply lift your ASP.NET MVC code and use it immediately, but the idea was that the code in ASP.NET Core MVC would be very familiar to an ASP.NET MVC developer, and they would have no problem transitioning to the new framework.

For the record, there is no System.Web.Security namespace today in ASP.NET Core.  The security concepts are presented in this article in the documentation: https://docs.microsoft.com/en-us/aspnet/core/security/

Q: Is it a good idea to invest in containers as a pattern?

A: Containerization was way beyond the scope of this talk.  However, I wanted to point out that .NET Core and ASP.NET Core were “container-ready”.

As a contrast, there was one way to host ASP.NET MVC in ASP.NET 4.6, and that was with IIS.  That automatically means Windows Server.

ASP.NET Core presents many options.  Some are great for your current datacenter, and require little or no change.  Other options, like Docker containers, are good options to explore for the future, especially a move to cloud based container services.

Reasons for using containers are a big topic, but one example of a benefit is that a container represents a complete image of an application.  The bits you run on your development machine once the container is built are identical to the bits running in your datacenter.  You can reduce or eliminate setup instructions and be assured that there isn’t a rogue configuration setting somewhere that makes it work for you but not in production.


About the speaker, Chris Gomez


Chris Gomez has been developing software professionally since 1993, but the love of coding began in grade school when he developed his first simple games on an IBM PC. His day jobs have included creating entertainment kiosks for theme parks and music retailers, commercial loan analytics, and clinical data exchange systems. Chris is recognized as a Microsoft MVP for Visual Studio Development Tools and Technologies. Today he is focused on delivering distributed systems with .NET and other platforms, but he still finds time to teach kids of all ages to make their first games to ignite their interest in coding.

 

Fluent interfaces are more than just a pretty way to write code. They can prevent errors, by ensuring your shared code is used correctly.

Our guest Scott Lilly will walk you through the topic of fluent interfaces and demonstrate how they can save you from needing to create the documentation that we never have time to write anyway.

Watch the webinar and learn:

  • What type of code can be improved with fluent interfaces
  • How to design the "grammar" for a fluent interface
  • How to quickly and easily write the code for your own fluent interfaces

Mistake-Proof Your Code with Fluent Interfaces on Vimeo.

Download code samples.

Video Content

  1.  What is a Fluent Interface? (1:40)
  2.  When to Create a Fluent Interface? (4:36)
  3.  How to Build a Fluent Interface? (9:58)
  4.  Define the Vocabulary (Functions) (10:14)
  5.  Map the Grammar (15:50)
  6.  Summary (43:42)
  7.  Q&A (44:28)

Webinar Transcript

Introduction

Alex:

Hello everyone, and welcome to today's webinar, Mistake-Proof Your Code with Fluent Interfaces. My name is Alex and I'll be your moderator. I work as a software developer here at PostSharp, and I'm excited to be hosting this session today. I'm pleased to introduce today's speaker, Scott Lilly. Scott is a C# developer and lean practitioner, and today he is going to walk you through the topic of fluent interfaces. Before I hand the mic over to Scott, I have a few housekeeping items to cover about this presentation. First, this webinar is brought to you by PostSharp, and PostSharp is a compiler extension that adds support for patterns to C# and VB.

So if you are tired of repeating yourself in your code, you may want to check it out, as did folks at Microsoft, Intel, or Bank of America who have been using PostSharp in their projects to save development and maintenance time. By using our products, customers typically cut down their code base by 15%. So feel free to go to our website, www.postsharp.net, for more details, or to get a free trial. Next, today's webinar is being recorded and it will be available after the live session.

You will receive an email with a recording link after the session. So the last thing is we'd love to hear from you during today's presentation. If you have any questions for our speaker, please feel free to send them in through the question window at the bottom of the player. We'll be answering questions at the end of the session, and if we don't get to your question during the webinar, we'll be sure to follow up afterwards via email. So without any further ado, I'd like to kick things off by welcoming Scott Lilly. Scott, it's over to you now.

What is a Fluent Interface?

Scott:

Okay, thank you Alex. Today we're going to talk about fluent interfaces, and especially how you can use them to increase your code quality. Here's an example of a fluent interface for sending an email. If you're not familiar with them, it's just a way to write your code so that it almost looks like a natural language sentence. You've got fluent email, dot create email using SMTP server from/to.

You could show this code to a business analyst or a software development manager, and they'd understand what the code is supposed to do. This is also good for any library code that you're going to write. I do mostly large projects for corporate clients, and those often take years of development, and have dozens of team members, some who leave during the project and some who come back, some new programmers. We don't want to spend a lot of time explaining our framework code, our libraries that we use.

And a fluent interface can actually help them use our libraries much more easily. It kind of instructs them as they go along. You can only call the functions in a particular order, and it makes sure all the functions are called and all the parameters that our classes need are set. Now, I've got a quick question here. You don't need to answer it, you don't need to type in an answer, but how many attempts does it take to plug in a USB device? For those of us who work in technology, we've done this hundreds of times, and we know that the answer is three. Because you try to fit it in, you have to turn it upside down, still doesn't fit, turn it back upside down, and now it actually fits in.

Why is that? Because the USB ports were not designed to be mistake proof, and we don't want to do that with our code. We want to write code that is kind of like a power cord: It can only go in one way, it can only be used the correct way, so that way anyone who uses our code isn't going to have problems. So what we're going to cover in this webinar, first I'm going to talk a little bit about when you might want to use a fluent interface, so when you see this come up you can say, "Ah, I should think about adding a fluent interface here." And then we'll talk about how to create one, we'll define the vocabulary in our fluent interface, which would be the from, the to, the BCC, and all that.

And then we'll define the grammar, the way we can make it so that anyone who uses our classes with the fluent interface has to use it correctly. And then we'll actually write some code to see how you might do this with a sample report generator class. 

When to Create a Fluent Interface

1. You want to prevent running potentially-dangerous code

First, you might consider creating a fluent interface when you have potentially dangerous code that could be run, and you want to make sure that the developer that calls it can't do something incorrectly. For example, here's a function to delete accounts that haven't logged in for over one year. We've got a SQL connection, we create a SQL command, we've got our SQL command text, and we say, "Execute the non-query," except there's a problem here: This SQL statement doesn't have a where clause on it.

Now, I know no one here has ever done this, especially I've never done this, running a delete statement without a where clause, but you probably know someone else who has done this. And it would be nice if we could find a way to prevent them from ever doing a delete-all, a DELETE FROM the table with no WHERE clause, unless they actually wanted to do that. So what we can do with the fluent interface is we can build a DB cleaner class. After they set up their connection string information and they tell us the table they want to delete rows from, the only two options they have are to either call a delete with some sort of condition, or specifically call a truncate table. This is one way we can make our code mistake proof. No one could accidentally delete everything.
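As a rough sketch of that DB cleaner idea (the class and its member names are invented for illustration, and the SQL execution is stubbed out), the interfaces make "delete everything" an explicit choice:

public interface ICanPickTable
{
    ICanDeleteOrTruncate ForTable(string tableName);
}

public interface ICanDeleteOrTruncate
{
    void DeleteWhere(string condition);  // a delete always needs a condition
    void TruncateTable();                // removing every row is a deliberate call
}

public class DbCleaner : ICanPickTable, ICanDeleteOrTruncate
{
    private readonly string _connectionString;
    private string _tableName;

    private DbCleaner(string connectionString)
    {
        _connectionString = connectionString;
    }

    public static ICanPickTable UsingConnection(string connectionString) =>
        new DbCleaner(connectionString);

    public ICanDeleteOrTruncate ForTable(string tableName)
    {
        _tableName = tableName;
        return this;
    }

    public void DeleteWhere(string condition) =>
        Execute($"DELETE FROM {_tableName} WHERE {condition}");

    public void TruncateTable() =>
        Execute($"TRUNCATE TABLE {_tableName}");

    private void Execute(string sql)
    {
        // In real code: open a SqlConnection, create a SqlCommand, ExecuteNonQuery.
    }
}

Calling code then reads DbCleaner.UsingConnection(connectionString).ForTable("Accounts").DeleteWhere("LastLoginDate < @oneYearAgo"), and there is simply no way to express a delete without a condition.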

2. Code must be called in a specific order (hidden requirements)

Another situation where we might want to create a fluent interface is when code has to be called in a specific order, kind of some hidden requirements. I know this is a violation of object-oriented programming principles, but here's an example of Microsoft doing something in the .NET Framework that violates those principles. If we want to send an email, we have to create a new email message, and then we have to set these properties. We don't have any requirements for setting these properties, so we could forget to include the from, forget to include the to, and then when we get to the SMTP client, to actually sending out the email, what's going to happen?

Is it going to throw us a nice, clean error, or is it going to silently fail? Same kind of thing could happen with the code that we write. We create an instance of our object, you have to set a bunch of properties, or call a bunch of functions in a specific order, and then you can call the final function that does what you want it to do. With a fluent interface, we can build something like this so that you have to go in a specific order, you have to include certain things, even though this one has some optional parameters like CC and BCC. And you can't call the Send until you've done all the required things, until you've set all the required parameters.

So this is another way we can keep our code mistake proof with a fluent interface. 
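For concreteness, here is roughly what that System.Net.Mail code looks like (the server name is a placeholder). The compiler happily accepts a message with no sender, and the mistake only surfaces at runtime:

using System.Net.Mail;

public static class ReportMailer
{
    public static void SendMonthlyReport()
    {
        // Nothing forces us to set From (or To) before sending.
        var message = new MailMessage();
        message.Subject = "Monthly report";
        message.Body = "Please find the report attached.";
        // message.From = new MailAddress("reports@example.com");  // easy to forget

        var client = new SmtpClient("smtp.example.com");
        client.Send(message);  // throws InvalidOperationException: no From address
    }
}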

3. A function has many overloads, almost-overloads or optional parameters 

Of course, one way that we try to get around problems like this is we create a big function that has 20 parameters, and we end up with a third situation: You've got a function in a class that has 15, 20, 25 parameters, and then you have a bunch of overloads on it, and you have optional parameters, and you have some functions that are almost overloads but not quite, because they don't meet a couple of parameters, or this one needs an additional parameter, and your code just gets very messy that way. So I'm going to show some code here, this is going to be what we're going to work with, and this is a sample report generator that fits the third situation.

We have a create sales report, and we have to pass in this long list of parameters, and I've actually seen some of these that are even longer, like I say, 20 or 25 parameters. And then, over time, we usually end up building other versions of this create sales report. So we have one here, create sales report for all categories, which has pretty much the same parameters, except it's missing the category parameter, because we want to include everything. Then, our manager says or the user asks us to add something that lets them create the sales report for all salespeople. So we have a function here that eliminates the salesperson ID parameter. And then eventually we'll have one that says, "Create sales report for all categories and all salespeople," that gets rid of the category ID parameter and the salesperson ID parameter. And we could end up with ...

I've seen some classes like this that get up to 30 or 40 functions that are basically the same thing. This is often happening in report generating classes, export classes, anything that gets kind of dynamic.
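Paraphrased as C# (all names and parameters here are invented to mirror the pattern Scott describes), the class drifts toward something like this:

using System;
using System.Collections.Generic;

public enum ReportGroupBy { Category, Salesperson }
public enum ReportSortBy { Date, Amount }
public class Report { }

public class SalesReportGenerator
{
    public Report CreateSalesReport(
        DateTime from, DateTime to,
        List<int> categoryIds, List<int> salespersonIds,
        ReportGroupBy groupBy, ReportSortBy sortBy,
        bool includeReturnedOrders, bool includeUnshippedOrders) => new Report();

    // Identical, except the category filter disappears...
    public Report CreateSalesReportForAllCategories(
        DateTime from, DateTime to, List<int> salespersonIds,
        ReportGroupBy groupBy, ReportSortBy sortBy,
        bool includeReturnedOrders, bool includeUnshippedOrders) => new Report();

    // ...then the salesperson filter disappears...
    public Report CreateSalesReportForAllSalespeople(
        DateTime from, DateTime to, List<int> categoryIds,
        ReportGroupBy groupBy, ReportSortBy sortBy,
        bool includeReturnedOrders, bool includeUnshippedOrders) => new Report();

    // ...and so on, toward dozens of near-duplicate functions.
}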

How to Build a Fluent Interface

Define the Vocabulary (Functions)

So what we're going to do is build a fluent interface for this fictitious sales report generating class. 

First thing we'll do is we need to know what type of functions we're going to have in our fluent interface. I break this down into ... I use the acronym ICE, I-C-E. We have some instantiating functions. These are the ones that actually create the report generator object. We have our chaining functions. These are the ones that let us add parameters to the object. And then we have our ending function, or our executing functions. This is the one that actually does something.

So if we look at the report generator class for our instantiating function, we're going to call it once at the start, and it has an action name, usually. I'm going to call this one "create sales report". And we can make it whatever we want. Then, we are going to determine what chaining functions we need. And this is where we're going to replace passing in a long list of parameters, or setting properties like we did in the email class. And for our sales report, we're going to have our parameters like from, to. If we look at the code, we've got our start date, our end date, our list of category IDs to include, our list of salesperson IDs to include, how we want to group it, how we want to sort it, so on.

And then we finally get to the ending function, and this is the one that's going to build the report. So all of our other functions before, we're going to say, "Start building the report with our create sales report," we're going to pass in all of our parameters, and once we get to the point where all of our parameters are set, we can say, "Build the report." So when we get the chaining functions, we're going to look here, since this is an existing class, at our parameter list. That's what we're going to use to actually find our vocabulary. So we're going to want to be able to pass in a parameter for the starting date, the ending date, the categories. Since we have different versions down here, we want some way of saying, "Create the sales report for all categories."

So we want to either be able to say, "Create it for all categories," or, "Pass in some category IDs." Same type of thing for the salesperson: We're going to want to have some way of telling our fluent interface, once we get to build reports, we should create it for all salespeople, or optionally, we can have a list of salesperson IDs that have been passed in. As we look at all the parameters, this is what I ended up building for my list of the vocabulary. For the instantiating, I'm just going to have the one function, create sales report. We could expand this and have other instantiating functions: Maybe you wanted to have create current month's sales report. And that way, it eliminates you needing to pass in the from date and the to date.

Or, you could have create end-of-year report, or create unshipped order report, and that would automatically set some of these other parameters for us. For our chaining functions, looking at the different parameters we had and different scenarios we had for our report, I want to pass in a from date, a to date. I want to be able to set it so I can include all salespeople, or so I can include specific salesperson IDs. And I put a little asterisk here to remind me that this one, I want them to be able to call multiple times. Same thing with include categories. And then down here at the include returned orders or exclude returned orders, I could've made that so it was just a boolean parameter that we pass in. It could be include returned orders and then pass in a parameter, but I like to have it a little bit more ...

The whole idea of the fluent interface is that it kind of sounds like a natural sentence. So I like to have different ones that include or exclude for the boolean properties, instead of passing in a parameter. It just makes the sentence read a little bit more naturally to me. And then at the end we'll have the build report function. This is going to return an object in our code here, but in a real situation it would return maybe a PDF file or some other report-type object. You could even have multiple ending functions, you know, build PDF report, build SSRS report if you're using SQL Server Reporting Services. Whatever you want, but just for this demo we're going to keep it kind of simple. So now that we have all the functions that we want to include in our fluent interface, this is kind of the vocabulary, the words we're going to use.
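Collected in one place (paraphrasing the list shown on screen; an asterisk marks functions that may be called multiple times), the vocabulary is:

// Instantiating:  CreateSalesReport()
// Chaining:       From(date), To(date),
//                 IncludeAllSalespeople() or IncludeSalespersonID(id) *,
//                 IncludeAllCategories() or IncludeCategoryID(id) *,
//                 GroupBy(...), SortBy(...),
//                 IncludeReturnedOrders() or ExcludeReturnedOrders(),
//                 IncludeUnshippedOrders() or ExcludeUnshippedOrders()
// Ending:         BuildReport()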

Map the Grammar

Now, we need to set up the grammar, which is the rules of what function can be called after which function. And this is how we're going to mistake-proof the code. We're going to have rules that you can't get to the build report until you've passed in all the required parameters. The way I do that is with a spreadsheet. You could do this with paper if you want. I just did it with a spreadsheet so it looks a little bit nicer. In the rows, you're going to include your instantiating functions and then all of your chaining functions. You don't need to include your ending functions, so here you've got create sales report as our instantiating, and then all of our chaining functions. For the columns, this includes all of the chaining functions and then all of the ending functions, the build report.

So we don't have create sales report in there, we just have the chaining and the ending. What we're going to do now is actually map out what can be called after each function, and I'll do that with the actual spreadsheet. So here, after create sales report, I'll just pull up a little screenshot there, after create sales report, the next thing I want to be called is from, and that's the only thing I want the developer to be able to call. So I just put a little yes in here to mark that after create sales report you can call from, and you can't call any of the other things. After from, the only thing I want the developer to be able to call is to. So I look in the from row, find the to column, and set that to yes.

After to, I want them to be able to call include all salespeople or they could call include salesperson ID. So I go to the to row, look for the include all salespeople, set that to yes, and include salesperson ID, set that to yes. Now, for include all salespeople, the next thing I want them to be able to call is include all categories or include category ID. Because if they select include all salespeople, we don't want them to be able to include individual salesperson IDs. That's kind of useless at this point. So for include all salespeople, the functions we're going to allow next are going to be include all categories and include category ID. So I can go in there, in the spreadsheet, and mark those as yes. Now, include salesperson ID is a little different.

I want them to be able to call include salesperson ID a second time or a third time. Or, once they've added the last one, they can call include all categories or start including category IDs. So the spreadsheet for that is going to be yes for include salesperson ID, because that's an option, yes for include all categories, and yes for include category IDs. And then I'd work this way for the rest of the functions, for the rest of the instantiating and chaining functions. And eventually, I would get a list that looks like this. So now I know the only time build report can be called is after this last include unshipped orders or exclude unshipped orders is called.

Because at that point I know we've gotten through every other required parameter correctly, and now we can do the ending function. So the next step ... I'll delete this one. So the next thing we need to do after defining the grammar is we need to give some names to some interfaces that we're going to create. This is how we're going to enforce the grammar, by having interfaces returned from our functions. And I like to make them nice and simple for the create sales report. The only thing you can do is set the from date, so I've named it "I can set from date". After calling from, the only thing you can do is call to, so I named that one "I can set to date". After you call to, you've got a couple of options.

So I named this interface "I can set all or one salesperson". And if you go down, let's see, here to include returned orders and exclude returned orders, they both can do the same thing. They both can call include unshipped orders or exclude unshipped orders, so they have the same name. Because what we're going to do with this interface is say, "These are the functions you can call next." Since they have the same functions they can call next, we're going to use the same interface, and this is how we build the grammar.
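In code, this step produces a set of (for now empty) interfaces, one per distinct "what can be called next" set. A partial sketch, using the names mentioned in the talk (the last one appears later, in the Q&A):

public interface ICanSetFromDate { }
public interface ICanSetToDate { }
public interface ICanSetAllOrOneSalesperson { }
public interface ICanAddSalespersonIDOrSetCategories { }
public interface ICanAddCategoryIDOrSetReturnedOrdersInclusion { }
// ...and so on, ending with an interface that exposes only BuildReport().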

Write the Code

So now, let's go into actually building the code. I'll start with a report generator class, and this is similar to the other one with some enums that I have for sorting and grouping, and I have no functions in this yet.

What we need to do is create a private constructor, because we don't want anything to be able to construct a report generator object other than our instantiating functions. That's the only way we're going to allow those to be instantiated. And then I'm going to create my first instantiating function, the create sales report, and for right now, I'm going to have it return the report generator object. This will be public static, because it's going to be like a ... It's a factory method, basically, the factory design pattern. And for right now, we're going to have it return a report generator, and that's all it's going to do. So when we call ReportGenerator.CreateSalesReport, it will instantiate our report generator and pass it back.

Then, I'm going to set up some ... Right now I'm just going to set up the from, the to chaining functions, and these will be public, and I'm going to have them return report generator, and this will accept the date/time parameter, and it's going to return this. So this is how we do the actual chaining. Each one of our chaining functions is going to get the value, and the final thing it's going to do is return the object that was called. So this is standard method chaining. If you've ever done anything like that with a string object, where you do string.Trim().ToUpper().Substring(), that's method chaining, because the Trim returns a string, so you can do another function that works on strings, like ToUpper. ToUpper returns a string, so you can do another string function.

You can call another string function from that, like substring. And that's the method chaining that we're going to use. But for the fluent interface, the whole idea of the chaining function is to get parameters. So I'm going to need to create a private variable to hold the parameter value before we return the object. Then I'll create the to function, which will also return the report generator object, and we're going to need another private variable to hold our to parameter. And this will return this at the end, so we can continue the chain. Then eventually, we're going to get to our ending function.

I'm going to skip the other chaining functions for right now just so I can show how the method chaining works. And the ending function here is actually going to return an object. In reality, since it's the build report, it would probably return some sort of report object, but we're just going to do this for right now. But the build report function would then look at all the different parameters we've set, do whatever it needs to do, and then return the object. For the email fluent interface, that would actually be more of a ... When we had the send email, that would actually be a public void function, because that type of function is just going to do something. It's going to send the email and it's not going to return anything. The same type of thing with the delete.

If we were to build a fluent interface and the final thing was delete some rows from a table, that's probably going to be an execute non-query which we don't care about any return results, so it would be public void. But for this report generator one, we're going to assume that the final call, this build report, is actually going to return something. So now, we can look at some calling codes for this to see how the method chaining works, save our report equals report generator dot, and then IntelliSense shows me create sales report is the only function available, because it's a public static one. So I call that ... Now, if I press the period again, IntelliSense shows me the functions that I created, the public functions. I've got build report, I've got from, and I've got to. So I could call to, date/time, UTC now, from, date/time, UTC now... Add months, minus one, and then I can call the final build report.


So this is how the method chaining works, because create sales report returns a report generator, and the report generator has to, from, and build report as possible functions. But you might notice I did those out of order. My chaining, I want it to do from and to. That's kind of more of a natural sentence, but this still let me do it. That's because we haven't installed any of the grammar yet, we haven't set up the grammar with the interfaces. And this also still has the possibility of us forgetting to include from. Will the report still work? It depends on how well we wrote our build report function, but we're going to add the fluent interface to make sure that we call all of the correct chaining functions first.
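Reconstructed as a minimal compilable sketch (only From, To, and BuildReport, as in the demo; the real class has many more chaining functions):

using System;

public class ReportGenerator
{
    private DateTime _from;
    private DateTime _to;

    // Private constructor: only the instantiating function below can
    // create a ReportGenerator.
    private ReportGenerator() { }

    // Instantiating function: a static factory method.
    public static ReportGenerator CreateSalesReport() => new ReportGenerator();

    // Chaining functions: store the parameter value, then return this
    // so the next call can be chained on.
    public ReportGenerator From(DateTime from) { _from = from; return this; }
    public ReportGenerator To(DateTime to) { _to = to; return this; }

    // Ending function: would really assemble and return the report.
    public object BuildReport() => new object();
}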

Alex:

So Scott, just looking at this example, I'm wondering if there's possibly any tool that can generate some of this API code for us, like some initial code. It seems like it could be possible, just based on the grammar that you defined.

Scott:

That’s a good question, Alex. I started looking for something a few months ago and I didn't find anything, so I started a little project, an open-source project I'm going to build. But everything else has gotten higher priority. I'm hoping to get back to it soon.

Alex:

Well, that would be very nice, I think, to have something like that. Okay, well, looking forward.

Scott:

Okay, and at the end of the webinar, I'm going to have the link to a page on my site for all the source code. And if I have, or when I have that finished, I'll also include a link to that project. It's going to be up on GitHub.

Alex: Great, thanks.

Scott:

You're welcome. So another thing we can do, just a quick little thing to mention, is within our chaining functions, mostly all they're doing is setting parameter values and returning this. But we could add some additional validation rules in here. So we could say something like, "If to is less than from, throw new ArgumentException," or something like this. Just another nice way to make our class a little bit more helpful to any other developers that might be using this.
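Inside the ReportGenerator sketch above, that validation might look like this (before it gets taken back out for the demo):

public ReportGenerator To(DateTime to)
{
    // Reject a range whose end precedes its start.
    if (to < _from)
    {
        throw new ArgumentException("The 'to' date cannot be earlier than the 'from' date.", nameof(to));
    }

    _to = to;
    return this;
}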

But for right now, I'm just going to take these out. Now, I'm going to paste in all of the functions, because I'm sure you don't want to watch me mistype commands in Visual Studio. So now I've got all the chaining functions in here, our from, our to, our include all salespeople, which is setting a boolean, our include salesperson ID, which adds the ID to a list, our group by, sort by, and so on.

So down here in the build report, we would look at all of those values that were set in these private variables, and use those to actually build the data set for our report. So the next step is we're going to create the interfaces. You could do this a little bit differently once you get some experience with it, but I kind of want to show step by step how this flows, how you could develop it. So what I need to do is I look at all of the interfaces here, and I'm going to just get a unique list of these, and I'm going to create those in the source code. For this sample, I'm going to create them inside the reportgenerator.cs file. You could put them in external files.

Normally, I don't like to put more than one class or more than one interface in a file. But these interfaces are only ever used by the reportgenerator class, so I'm not going to worry too much about that here. Then I'm just going to create public interfaces, I can set from date. And then set one for ... I can set to date, and again, I'm going to paste these in so you don't watch me type. But we're going to have one empty interface for all these different classes, or for all these different interfaces we've defined here in the spreadsheet. So now, these are all available as places for us to put our grammar in. Next thing we need to do is have this report generator class implement these interfaces, and I'm just going to paste in the names of all those interfaces up here.

Alex:

Okay, I've got another question: Maybe you can suggest some tools or practices for writing this code faster, maybe once you've finished the initial example, or just in general, some suggestions. Because it seems that, while we still don't have this automated tool, it can be quite laborious to follow all this manually.

Scott:

Yes, the first time, if you set this up manually, it does take a bit of work. The idea is it's kind of an investment so that down the road everyone can use your library code. But one thing you could do is, instead of creating these functions, you could create the interfaces that are populated correctly, and I'll show you how we actually do that next. And then, have your report generator class implement these interfaces, and use a refactoring tool, like ReSharper, and say, "Implement the interfaces."

Alex:

Yeah, so it can generate at least a bit of that for you.

Scott:

Right, that would go and create all of these functions for you. You'd still have to go in, and have it return this, and set the private variables, but at least that way you're structurally building a lot of the code automatically.

Alex:

Okay, thanks.

Scott:

You're welcome. Okay, so now the next step is to actually have the functions return the correct interfaces. Because right now, they're returning report generator, which exposes every function in report generator. If we go back here to the calling code, and after I call to, IntelliSense shows me all the functions, and I can even call to a second time. So we want to use the interface to actually control what function can be called next.

And the way we do that is by going back to the spreadsheet, and seeing that create sales report, the interface that we want it to return is I can set from date. So I'll just copy that, go back in my code, and change its return type from report generator to I can set from date. It's still returning a report generator object, but it's casting it as an I can set from date, so the only thing that would be visible is any functions we've defined in the I can set from date, and this is how we're going to set up the grammar.

And I would go and do this for all the functions, change its return type from report generator, the temporary one we're using, to its actual correct one. And again, so you don't have to watch me type, I'm just going to copy and paste all that in. So now, each one of these functions, like include salesperson ID, is going to return the object, but it's going to return it as an I can add salesperson ID or set categories, which is how we have include salesperson ID defined here in the spreadsheet.

So then, the final step is to actually add the functions that you can call next inside the different interfaces. So for example, I can set from date, if we look at the spreadsheet again, the only thing you can call is from. So I'll go up here in the code, find the from signature, I'm just going to copy that, and put it down here in the interface. So if we go back to our calling code, after I call create sales report, if I hit the dot, IntelliSense shows me the only function available now is from, so this is how we enforce the grammar. And if I do dot again, the only functions that are available are the ones that are available in object, because we haven't set up the rest of the interfaces.

So what I do next is for, I can set to date, we need to have it call the to function. So I'm going to add that to this interface. Just copy the signature of it, add it to the interface, and now if we go back to the calling code, and I type dot, IntelliSense shows me that to is available. So I'm going to just copy in all those, because we all know the easiest way to mistype something is when you have hundreds of people watching you. And now, the interfaces are all defined. So now, we know the flow of what functions can be called after which function. And if we go back to the calling class, now we see to is available.

If I hit the dot again, IntelliSense shows me that my next available options are include all salespeople or include salesperson ID. So I'll use the include salesperson ID. Now, it shows me the only available options are include another salesperson ID, or start moving on to the categories. So I'll include another salesperson, then I'll say, "Include all categories." GroupBy, then I've got the SortBy next, and I'm going to say, "Include returned orders and include unshipped orders." And then finally, since all the parameters have been passed in and set for our class, we can call the BuildReport function, and that's how the fluent interface would work.
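Condensed into one compilable sketch (shortened so the chain jumps from the salesperson step straight to BuildReport, whereas the real demo continues through categories, grouping, sorting, and order inclusion; the interface names are adjusted to match the shortened chain):

using System;
using System.Collections.Generic;

public interface ICanSetFromDate { ICanSetToDate From(DateTime from); }
public interface ICanSetToDate { ICanSetAllOrOneSalesperson To(DateTime to); }
public interface ICanSetAllOrOneSalesperson
{
    ICanBuildReport IncludeAllSalespeople();
    ICanAddSalespersonIDOrBuild IncludeSalespersonID(int salespersonId);
}
public interface ICanAddSalespersonIDOrBuild
{
    ICanAddSalespersonIDOrBuild IncludeSalespersonID(int salespersonId);
    object BuildReport();
}
public interface ICanBuildReport { object BuildReport(); }

public class ReportGenerator :
    ICanSetFromDate, ICanSetToDate, ICanSetAllOrOneSalesperson,
    ICanAddSalespersonIDOrBuild, ICanBuildReport
{
    private DateTime _from;
    private DateTime _to;
    private bool _includeAllSalespeople;
    private readonly List<int> _salespersonIds = new List<int>();

    private ReportGenerator() { }

    public static ICanSetFromDate CreateSalesReport() => new ReportGenerator();

    // Each chaining function still returns this, but typed as the interface
    // that exposes only the legal next steps.
    public ICanSetToDate From(DateTime from) { _from = from; return this; }
    public ICanSetAllOrOneSalesperson To(DateTime to) { _to = to; return this; }
    public ICanBuildReport IncludeAllSalespeople() { _includeAllSalespeople = true; return this; }
    public ICanAddSalespersonIDOrBuild IncludeSalespersonID(int salespersonId) { _salespersonIds.Add(salespersonId); return this; }
    public object BuildReport() => new object();  // would build the real report
}

Calling code can now only be written in grammar order, for example: ReportGenerator.CreateSalesReport().From(DateTime.UtcNow.AddMonths(-1)).To(DateTime.UtcNow).IncludeSalespersonID(12).IncludeSalespersonID(34).BuildReport().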

Alex:

Okay, I've got one, I suppose, last question from me. I was just thinking about LINQ, and how it's kind of a fluent API based on extension methods. So my question is how extension methods can affect the fluent API that we design. On one side, you can make it extensible, but on the other side, there are probably some consequences, some things that we need to protect against properly, right, when someone tries to extend our interfaces with extension methods that we didn't expect.

Scott:

Oh yes, I know the big problem I have in LINQ is when you have something in the middle of your LINQ chain that unexpectedly returns null. So if you have a collection and you're doing collection.First, and you put in some sort of condition, and it doesn't have anything that matches that condition, yeah, then it returns-

Alex: Exception

Scott:

Yes, yes, because the rest of your chain, it says, "I've got nothing to work with." But the nice thing with the fluent interface is, since we're writing them and controlling them, everything is always going to return this at the end, so we know we don't have that possible null problem.

Alex:

Okay, yeah, so we need to pay attention to that.

Scott: Yes

Alex: Okay. Thanks.

Summary

So I'm just going to put this little closing slide out here: The next time you have a project where you're writing something that you think, "This is potentially dangerous," or, "Someone could misuse it, or miscall it, or not know how to use this function," especially if it's some framework code, or some library code, or something you're going to share as open source, don't write your code so mistakes are possible, like the USB plug. Instead, consider putting a fluent interface around it as a facade, to make it easier to use, and also to ensure that other developers can only use it correctly.


Q&A

Here are Scott's follow-up answers to the questions. If the question was misunderstood, not answered completely, or if you can think of a different answer, please let him know by leaving a comment at http://scottlilly.com/FIWebinar.

Q: So from and to on create report are methods called after create report. If those values were required, can we provide them as parameters to the create report method? I get that the grammar should be present in the method names, but I understand also that the parameters of a function are part of the overall signature.

A: One possible thing we could do, so let's say we wanted to have something like CreateSalesReportForDateRange and include the from and the to in that. So then what we would do, we'd add that here to the spreadsheet, and I'd look to see what I can call next. So obviously, I don't want them to call from, I don't want them to call to. I want them to go on to the next step, which is basically the same as calling the to. So we would use the same signature as calling the to, and I would create another instantiating function, public static. See ... Get the correct interface, I can set all or one salesperson. Then, take our two parameters, and because this is a static function, we're going to need to do a little bit of work on this.

But I would say you need to create another private constructor that takes those values, sets the parameters, sets our private variables in here, and returns new ReportGenerator with from and to, and I think that should work. So if we go back here, off our ReportGenerator I can see ... So I've got two options now: CreateSalesReport or CreateSalesReportForDateRange, and if I call that one for the date range, I can pass in the two date/times. And now, when I hit the period, IntelliSense shows me that my next options are not from and to, but instead include all salespeople or include salesperson ID. So that would let our code look like this.
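In code, continuing the ReportGenerator sketch from earlier (the return type assumes the salesperson step comes next in the chain), the two additions are:

// Inside the ReportGenerator class:

// A second private constructor that accepts the date range...
private ReportGenerator(DateTime from, DateTime to)
{
    _from = from;
    _to = to;
}

// ...and a second instantiating function that skips From and To by
// returning an interface from farther down the chain.
public static ICanSetAllOrOneSalesperson CreateSalesReportForDateRange(DateTime from, DateTime to) =>
    new ReportGenerator(from, to);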

Alex:

Okay, thanks. I suppose that would be the answer: basically, adding parameters to some of these methods is also a valid option. It's just that you probably need to pay attention not to add too many parameters, because then you go back to the same problem.

Scott:

Right, I generally like to keep one, maybe two parameters, sometimes zero. Like this include salesperson ID, we could also have created one that doesn't just take an integer, but takes a list of integers. So maybe in the UI, the actual user is checking which salespeople they want to include, and we would pass a list of those IDs to the function. So then, we'd want an overload here, or we'd want another include salespeople function that accepts a list of integers.

Q: What's the reason the return type of build report is an object instead of report generator?

A: Because the build report is our final function, it's the one that's actually going to do something. So for this example, build report in reality would probably return a PDF file, or some other type of object. You could also do this if you were building, let's say, an average calculator: your syntax would be like .IncludeValue.IncludeValue, and you keep passing in values, then your final function, your ending function, would be CalculateAverage, and that would return a double or whatever you wanted. So it all really depends on what the actual business purpose of this last function is.

Alex:

Okay, so that was just a sample object, in that case.

Scott: Right. 

Q: How would you have default values and avoid too lengthy code? Let's say for example that 90% of the time you would include all categories so I would not want to repeat that each time I use the FI.

A: You could make an instantiating function named “CreateStandardSalesReport()” (for example). Inside that function, it would call the private constructor and set the default values. Its return datatype would be an interface that is farther along in the chain, past the functions for variables that were automatically set inside CreateStandardSalesReport.

Here is an example of how that might be done: https://gist.github.com/ScottLilly/85091b9f61e66256a69a7909a05337fd

I would change the interfaces, so the functions that can be skipped over are first in the “chain”. It’s easier to skip over the first five functions (for example) than to create interfaces and functions that let you optionally skip over the first two functions, then the fifth function, then the seventh function.
You might also want to integrate the Builder pattern, as mentioned in one of the questions below.

Q: How does this compare to the "Curiously Recurring Template Pattern" such as expressed here?

A: That pattern is an interesting way to do method chaining, although it looks like you will still need to create individual interfaces to enforce any grammar rules.

Q: Can't you just use annotations to require a certain order and ensure parameters contain data?

A: Yes, you could use annotations on an entity, to ensure the required properties were set, before being able to execute a function. However, another developer could forget to set a property value, and the error will only be detected at runtime.

With the fluent interface pattern, other developers will have the additional help of IntelliSense, to lead them through the chain of required functions to call. They could still pass an invalid parameter to a function. However, they would not be able to skip over calling the function.

Q: Can we use the Builder pattern, where we create a different set of Builder classes for different values of the Report properties?

A: Yes, you could combine fluent interfaces with the Builder pattern. That would be a good way to handle a situation where you have several common ways to set the values for some of the chaining functions.

For example, if Accounting reports should always call IncludeAllSalesPeople, IncludeAllCategories, ExcludeReturnedOrders, and IncludeUnshippedOrders, you could have one Builder class that calls those functions. You could have a different Builder class to set the values to only include the categories for items that are physical products (and not downloadable items), for the Shipping department.

Q: I would like to know about many IDs inside the filter – how to deal with this?

A: If I was building the ReportGenerator class for a real program, I would probably have a function for passing in a list of Salesperson IDs (as if the function was receiving a list of checked items in a datagrid that displayed the salespeople).

Inside that function, I would add the IDs from the passed parameter into the private _includedSalespersonIDs variable, if the ID was not already in the list.

Q: Is this the only way to implement fluent interfaces? If not, what are the other approaches and how are they different from your approach?

A: This is the only way I’ve used. You might be able to do something similar with extension methods that only work for specific datatypes (which would be the interfaces we use for the grammar). But that method seems less clear to me.
If anyone is aware of a different method, please share it.

Q: How would you implement the IncludeSalespersonID function within the report class, or options that are mutually exclusive when the report is actually being built?

A: When you create your fluent interface’s grammar rules, you should design it to prevent mutually-exclusive functions. For example, in the ReportGenerator fluent interface, you can call “IncludeAllSalespeople”, or you can call “IncludeSalespersonID”. You can call “IncludeUnshippedOrders”, or you can call “ExcludeUnshippedOrders”. You can only call one, or the other – not both.

Q: How does exception handling work in a fluent interface? How does it work with optional parameters?

A: Exceptions would be caught at runtime. You could add parameter validation that throws an exception if the passed parameter is invalid. Also, when the ending function is called (BuildReport or SendEmail, for example), it could throw an exception.

The fluent interface can ensure that, when other programmers use your class, they will call all the required functions to set the required parameters. However, if you do not include other data validation, they could set the parameters to invalid values – for example, setting the “to” date before the “from” date, when specifying a date range.

Q: Would it be possible to convert BDD scenarios to fluent?

A: This seems like it could be a great idea. I haven’t used SpecFlow, but a fluent interface would almost match with creating FitNesse fixtures.

If you show the users (or business analysts) the concept of method chaining, they should be able to create the grammar for you, using business terms. Then, you could use that to build the required fluent interfaces.

This is definitely an idea I want to think about some more. It may be a great way to deal with correctly understanding complex business requirements.

Q: If include all categories is optional, will the group by / sort by be available to continue on to the BuildReport? So, the order would include a BuildReport method for all functions that are deemed optional?

A: If you build the grammar to allow that, it should let you create that as a valid “chain”. I think the answer to the first question (see the source code at https://gist.github.com/ScottLilly/85091b9f61e66256a69a7909a05337fd) will show how to do that.
Please let me know if that sample doesn’t answer your question.

Q: How do you handle an exception thrown by a preceding method? How do you stop chaining, or ignore the error if it is not critical and continue chaining?

A: The chaining functions only set the values of the private variables, and “return this”. So, there should not ever be an exception – unless you add your own data validation in those functions. In that case, when the code is run, and an invalid parameter is passed, it will throw whatever type of exception you specify, and stop executing.

You could put logic into the chaining functions that would check if the passed parameter is invalid. If it is, instead of having the function throw an exception, you could have it determine a good (or default) value to use.

Q: Over time, let's say the ReportGenerator might add new functionality. Is it possible to have more than one path to reach the ending function? And how can we ensure, maybe through unit testing, that the chain leads to an ending function?

A: Yes, it is very possible to have more than one path reach the ending function. The ReportGenerator class has several possible paths (Include, or Exclude the returned and unshipped orders, for example).

Ensuring complete unit testing of all possible chains is interesting. If I was doing that, I’d probably do TDD, and use ReSharper (or some other static analysis tool) to show any functions in any interfaces that are not called. By looking at uncalled functions in the interfaces, that should inform us of any missing paths.

When I work on the fluent interface creation tool, that sounds like a good feature to add – automated generation of unit tests for each chain.

Q: I think you just need to get the result of IncludeSalespersonId inside the foreach and continue from there to avoid the casting.

A: I think this example is a little tricky, because you must either enter at least one salesperson ID (through IncludeSalespersonID), or call IncludeAllSalespeople, before you can call one of the category-setting functions.

If you have an example that works, please share it at http://scottlilly.com/FIWebinar

Q: How would you go about base-classing and extending this pattern with generics and inheritance? That is the essence of what I am trying to understand.

A: I have not tried to create a generic version of a fluent interface engine. Every time I’ve built a fluent interface, it has been very specific to one class (such as the ReportGenerator class).

If you needed to create “chains” for a class that were extremely different, you might want to have ReportGenerator as a base class, and create child classes, with their own interfaces, that implement the different chains. For example, you might want a ManagementReportGenerator fluent interface for management reports, which might show different information and have very different “chaining” options. You might also have an AccountingReportGenerator for the accountants, which might have a massively different fluent interface. Those would have their own sets of interfaces, but might use some functions from the base ReportGenerator class.

Q: I want to call one interface or a different one according to a value previously set. For example, would this be possible: if IncludeSalespersonID is called, then AllCategories is selectable, and if they select AllSalespersons, then AllCategories is not enabled (to prevent a call from being too big)?

A: Yes, you could do this. When building the grammar spreadsheet, in the row for IncludeAllSalespeople, don’t put a “Y” in the IncludeAllCategories column. Then, your interface for that row might be named ICanSetOneCategoryID. That interface would be defined to only have one function, IncludeCategoryID, and its return datatype would be ICanAddCategoryIDOrSetReturnedOrdersInclusion.


About the speaker, Scott Lilly


Scott Lilly is a C# developer, creator of "Learn C# by Building a Simple RPG", and lean practitioner.
Scott develops line-of-business systems for corporate clients, and publishes videos and tutorials for C# developers.
Scott's blog.