Archive

Do you know how to write very fast C# code? Here's a sobering fact: many schools and universities only teach how to write valid C# code, and not how to write fast and efficient code.

Did you know that adding strings together inefficiently can slow down your code by a factor of more than two hundred? And ‘swallowing’ exceptions will make your code run a thousand times slower than normal.

Slow C# code is a big problem. Slow code on the web will not scale to thousands of users. Slow code will make your Unity game unplayable. Slow code will have your mobile apps catching dust in the app store.

In this session, our guest speaker Mark Farragher will show you many common performance bottlenecks and how to fix them. We’ll introduce each problem, write a small test program to measure the baseline performance, and then learn how you can radically speed up the code.

Watch the webinar and learn:

  • The low-hanging fruit: basic optimizations
  • How to read compiled MSIL code
  • The struct versus class debate
  • How to optimize for the garbage collector
  • How to write directly into memory with unsafe pointers
  • How to use dynamic delegates to dramatically speed up reflection

 

How to Write Very Fast C# Code on Vimeo.

For source code of the examples, please email Mark at mark@mdfarragher.com

Q & A

Q: Why are 9% of the numbers exceptions?

A: Several viewers have pointed out that the 9% number I mention in the webinar is incorrect. Here is the correct calculation:

I’m building numbers from individual digits. There are 11 possible characters: the digits 0-9 and the letter ‘X’. So, the chance of a single character being invalid is 1/11. A number consists of 5 characters, so the chance of a number containing an invalid character is roughly 5 × (1/11) ≈ 45%. The loop in my code will therefore fail about 45% of the time and throw an exception.
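
For completeness, here is the arithmetic as a worked formula. Multiplying by 5 is a union bound, so the exact failure rate is slightly lower:

```latex
% Each of the 5 characters is invalid independently with probability 1/11.
% Union bound (the ~45% figure above):
P(\text{fail}) \le 5 \cdot \tfrac{1}{11} = \tfrac{5}{11} \approx 45.5\%
% Exact probability that at least one character is invalid:
P(\text{fail}) = 1 - \left(\tfrac{10}{11}\right)^{5} \approx 37.9\%
```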

Q: How to get mastery in reflection and dynamic code?

A: By practicing a lot. Write lots of code that uses reflection and dynamic emitting. Experiment, measure performance, see how far you can go optimizing your code. Play around and discover what works and what doesn’t. Plus: read lots of blog posts and articles.

Q: Why would it not be beneficial to use structs for all simple business objects? Is there a point of degradation or some limitation over a class? Is a struct usable with Entity Framework to represent database objects?

A: The .NET Runtime makes certain assumptions about structs and classes: structs will be very small (in terms of memory space) and short-lived, while classes will be either small or large and long-lived. Simply replacing all classes with structs in your code is dangerous because you will go against these assumptions. For example, if you change a long-living object into a struct, it will get boxed on the heap and your code will be even slower than when using classes. A struct also gets copied during each method call, so passing a very large struct to many different methods will slow down your code a lot.

The rule of thumb here is to always start with classes, and only use structs when it makes sense to do so.

The Entity Framework does not support structs.

Q: Can we use the DynamicMethod trick on AOT platforms (via Mono)?

A: Nope. The ILGenerator class is missing, so you can’t emit your own CIL code into the dynamic method. That makes sense: with ahead-of-time compilation there is no JIT compiler available at runtime to compile newly emitted code.

Q: The CIL stuff is really interesting. Perhaps worth mentioning that string interpolation and string.Format use StringBuilder internally, so you don’t always need to use StringBuilder explicitly. Also, StringBuilder has a little overhead, so for fewer than 4 strings something like str1 + str2 + str3 is faster - I think.

A: Correct! String interpolation ($"yadda {yadda}") compiles to a String.Format call, so it’s exactly the same thing. I always use interpolation because it’s so much easier to type.

You’re also spot-on with the string versus StringBuilder comment. A StringBuilder has some initialization overhead, so it is actually slower for a small number of additions. The cutoff point is at 3 additions: for zero to three additions the string is faster, for four or more the StringBuilder is faster. For larger numbers of additions, the two approaches diverge very quickly.

In my logging and diagnostic code, I always use strings (string interpolation) because I usually stay below the 3-addition limit, and it makes my code so much easier to read.
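
To see the difference for yourself, here is a minimal Stopwatch-based sketch (the loop count and names are illustrative, not the webinar’s test program):

```csharp
using System;
using System.Diagnostics;
using System.Text;

class ConcatBenchmark
{
    static void Main()
    {
        const int iterations = 100000;
        var sw = Stopwatch.StartNew();

        // Naive concatenation: every += allocates a brand-new string
        // and copies all previous characters into it.
        string s = "";
        for (int i = 0; i < iterations; i++)
            s += "x";
        sw.Stop();
        Console.WriteLine($"string +=     : {sw.ElapsedMilliseconds} ms");

        // StringBuilder appends into a growable internal buffer instead.
        sw.Restart();
        var sb = new StringBuilder();
        for (int i = 0; i < iterations; i++)
            sb.Append("x");
        string result = sb.ToString();
        sw.Stop();
        Console.WriteLine($"StringBuilder : {sw.ElapsedMilliseconds} ms");
    }
}
```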

Q: Hi. For exceptions, what if TryParse is not available - for example, for user-defined types instead of primitives? What needs to be done?
A: You need to do the same thing that TryParse does internally: scan the input data first, and only start parsing if the scan says it’s okay. Also make sure you return a parsing failure as a return value (i.e. a bool) instead of throwing a FormatException.

An easy way to scan is by using a precompiled regular expression to make sure the input data doesn’t contain any invalid characters. Regular expressions are super-fast.
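
As a sketch, here is what that pattern might look like for a hypothetical user-defined type (the PartNumber type and its 5-digit format are invented for illustration):

```csharp
using System.Text.RegularExpressions;

public struct PartNumber
{
    // Precompiled once and reused; scanning with it is cheap.
    private static readonly Regex ValidFormat =
        new Regex(@"^\d{5}$", RegexOptions.Compiled);

    public int Value { get; private set; }

    // Failure is reported through the return value; no exception is thrown.
    public static bool TryParse(string input, out PartNumber result)
    {
        result = default(PartNumber);
        if (input == null || !ValidFormat.IsMatch(input))
            return false;                      // the scan rejected the input

        result = new PartNumber { Value = int.Parse(input) };
        return true;
    }
}
```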

Q: Any comment about the differences between copying arrays, lists, C# hashtables, etc. on the heap?

A: In terms of memory layout, there’s not that much difference between an array, a list, or a hashtable. All three use arrays internally to hold the data. A hashtable is optimized for key/value lookup, whereas list and array are intended for indexed access.

They all have a CopyTo method that attempts to block-copy all data in one go. If you’re storing value types, you will see great performance for all three.
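
A minimal comparison sketch (the array size is arbitrary):

```csharp
using System;
using System.Diagnostics;

class CopyComparison
{
    static void Main()
    {
        var source = new int[10000000];
        var dest = new int[source.Length];

        var sw = Stopwatch.StartNew();
        source.CopyTo(dest, 0);                    // block copy in one go
        sw.Stop();
        Console.WriteLine($"CopyTo: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < source.Length; i++)    // element-by-element copy
            dest[i] = source[i];
        sw.Stop();
        Console.WriteLine($"loop  : {sw.ElapsedMilliseconds} ms");
    }
}
```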

Q: Are you going to review LINQ / Parallel performance someday?

A: That’s a great idea! Thanks for the suggestion. I have an existing course already that scratches the surface of LINQ versus PLINQ performance, but I’d love to go deeper.

Q: Nice talk. BTW, StringBuilder may not be the fastest; it depends on the size, etc. You also have to account for the GC allocations. The best tool for that is BenchmarkDotNet with the memory diagnoser on Windows! It is a fantastic tool. General rule of thumb: whatever you do, you have to measure in order to see performance benefits.

A: Thanks for the suggestion. I’ll check out BenchmarkDotNet. And you’re right about the rule of thumb – you always have to do actual measurements, you can’t rely on just theoretical knowledge to optimize your code.

Q: Just a question on array.CopyTo(...), where Mark said that the memory copy was done out of process by the OS (in the C libraries, guessing memcpy). In the profiling application during the webcast, array.CopyTo(...) executed in 32ms, whereas the copy via index and loops took >300ms; in other words, array.CopyTo is an order of magnitude faster with OSX as the OS. Is the 10-fold difference about the same with .NET on Windows? Different OS, different ratio?

A: Yes, the ratio is roughly the same. The speed of a memory copy is more or less the same for all operating systems, whereas you might see small differences in 1-dimensional array performance. I’ve noticed that .NET Core tends to be slightly faster than Mono in handling arrays, because it’s much better optimized.

Q: I measured. GetType() is 171 ms vs. typeof() at 6 ms in a test of a million iterations.

A: That’s because typeof() is processed at compile-time, whereas GetType() is processed at runtime.
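
A tiny illustration of the difference:

```csharp
using System;

class TypeCheckDemo
{
    static void Main()
    {
        object value = "hello";

        // typeof is resolved to a metadata token when the code is compiled.
        Type compileTime = typeof(string);

        // GetType() is a method call resolved while the program is running.
        Type runTime = value.GetType();

        Console.WriteLine(compileTime == runTime);  // True
    }
}
```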

Q: How do you keep yourself up to date on the latest and greatest technology?

A: I read lots of technical blogs, and when I’m preparing for a new course or webinar, I do a lot of research and write small test programs to experiment. And I probably have a talent for learning new stuff very quickly.

Q: Would you use some form of multi-dimensional converter to convert a single-dimensional array back to a multi-dimensional array, or would you take another approach?

A: It depends on the use case. I usually just wrap a 1-dimensional array so from the outside it looks like the original multi-dimensional array. The disadvantage of converting the other way is that you’re slowing the code down again, so I am a bit hesitant to use any kind of converter.
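
Here is a sketch of that wrapping approach (the Grid class is an illustrative name, not code from the webinar):

```csharp
// Exposes a flat 1-dimensional array through a 2-D indexer, so callers
// keep the convenience of grid[x, y] without the cost of a real T[,].
public sealed class Grid<T>
{
    private readonly T[] _data;
    private readonly int _width;

    public Grid(int width, int height)
    {
        _width = width;
        _data = new T[width * height];
    }

    public T this[int x, int y]
    {
        get { return _data[y * _width + x]; }
        set { _data[y * _width + x] = value; }
    }
}
```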

Q: Do you have any advice for Parallel.ForEach?

A: Yeah, use it! Parallel.ForEach is great for parallelizing regular for or foreach loops. It is my first step in parallelizing code, and quite often it’s all I need to do.

Two years ago, I wrote an app that processes Sharepoint documents. I had a for-loop in my code that would process each document individually. I parallelized the code simply by replacing my for-loop with a Parallel.ForEach. This drop-in replacement to make code multi-threaded is really nice.
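
The shape of that change is roughly this (a sketch; ProcessDocument stands in for whatever per-item work your loop body does, and the body must be thread-safe since items run concurrently):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;

class DocumentProcessor
{
    // Before: sequential processing.
    public void ProcessAll(IEnumerable<string> documents)
    {
        foreach (var doc in documents)
            ProcessDocument(doc);
    }

    // After: the drop-in parallel version; the loop body is unchanged.
    public void ProcessAllParallel(IEnumerable<string> documents)
    {
        Parallel.ForEach(documents, doc => ProcessDocument(doc));
    }

    private void ProcessDocument(string document)
    {
        // per-document work goes here
    }
}
```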

Q: Have you tried these performance tests on .NET Core?

A: Yeah. Everything I show in the webinar is running on .NET Core 1.1.

Q: Do foreach loops have a performance advantage over for loops in cases where the collection is already an enumeration or comes from a function that yield returns?

A: No. Enumerations and methods with yield return cannot be indexed and don’t have a well-defined upper limit, so there’s no benefit to using a for-loop with them. If you did try to use a for loop, you would have to call MoveNext() and read Current manually, and that is exactly the code the compiler produces when you use foreach.
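
In other words, a foreach over an enumerable compiles to roughly this (a hand-written sketch of the lowered form):

```csharp
using System.Collections.Generic;

static class EnumerationDemo
{
    // Equivalent to: foreach (int n in numbers) sum += n;
    static int Sum(IEnumerable<int> numbers)
    {
        int sum = 0;
        using (IEnumerator<int> e = numbers.GetEnumerator())
        {
            while (e.MoveNext())
                sum += e.Current;
        }
        return sum;
    }
}
```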

Q: Is there any significant difference between pre- and post-increment operations? In C++ I am accustomed to always preferring ++i over i++, but I rarely see this being done by C# developers.

A: It works exactly the same as in C++, the difference between the two is the return value: i before increment or i after increment.
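
A quick illustration; note that as a standalone statement (for example, in a for-loop), i++ and ++i compile to the same CIL, so there is no performance difference in C#:

```csharp
int i = 5;
int a = i++;   // a == 5: i's value before the increment
int j = 5;
int b = ++j;   // b == 6: j's value after the increment
```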

Q: Do the performance benefits you described for structs versus classes get lost when comparing the performance of passing classes versus structs to other functions (excluding cases where structs are passed by reference)?

A: Passing structs to functions will slow down your code, because structs are copied by value. For every method call the entire struct will be cloned in memory. When you’re using classes, only the reference to the object instance is copied into the method.

So yes, for large structs with lots of fields you’ll see a measurable slowdown when doing lots of method calls with struct parameters.
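
A sketch of the effect (BigValue is an invented example; the in modifier, available from C# 7.2, passes a read-only reference instead of copying):

```csharp
// 4 doubles = 32 bytes copied on every by-value call.
public struct BigValue
{
    public double A, B, C, D;
}

public static class StructPassing
{
    // The entire struct is cloned into the method's stack frame.
    public static double SumByValue(BigValue v)
        => v.A + v.B + v.C + v.D;

    // Only a reference is passed; no copy is made (C# 7.2+).
    public static double SumByReference(in BigValue v)
        => v.A + v.B + v.C + v.D;
}
```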

Q: What is the difference between the heap and the stack?

A: The stack is a highly optimized block of memory intended for data with a very short lifetime, just for the duration of a single method call. Stack memory is created when you enter a method and gets cleaned up when you exit the method. The stack is also fairly small (in .NET the default is 1MB per thread). It’s optimized for a manageable number of small objects (thousands, not millions) with a very short lifetime.

The heap is a very large block of memory (multiple GBs) optimized for long-term storage. You can easily put millions of objects on the heap, and they can be either small or large. The heap has a special internal mechanism for promoting long-lived data to higher generations, and there’s a separate process called the Garbage Collector that cleans up objects that are no longer in use.

As a rule of thumb, the stack is slightly faster than the heap. It can also very quickly initialize new data by writing zeroes directly to memory (the heap calls the constructor of each object individually). The disadvantage of the stack is that it’s relatively small, and it assumes your data will be short-lived. The stack can also slow down if you have a very deep chain of nested method calls.

Q: Why and when do we use reflection?

A: We use reflection when we want to dynamically access object fields or call object methods. By ‘dynamically’ I mean based on data that is not known at compile time. For example, when we store database configuration data in a configuration file. The configuration file might say we need an OracleConnection or a SQLiteConnection. With reflection, we can read this configuration field and then dynamically instantiate the correct object.

Basically, any time an object type, property, field or method appears somewhere in text format, we’re going to need reflection to perform instantiation, access fields and properties, or execute a method call.
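
A minimal sketch of that configuration-driven instantiation (the type name string is illustrative):

```csharp
using System;
using System.Data;

public static class ConnectionFactory
{
    // typeName comes from a configuration file, e.g.
    // "System.Data.SqlClient.SqlConnection, System.Data"
    public static IDbConnection Create(string typeName)
    {
        Type connectionType = Type.GetType(typeName, throwOnError: true);
        return (IDbConnection)Activator.CreateInstance(connectionType);
    }
}
```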

Q: What does emit mean?

A: Emit means injecting a single CIL instruction into a dynamic method.
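
For example, here is a minimal dynamic method built by emitting CIL instructions one at a time:

```csharp
using System;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        // Build int Add(int, int) at runtime.
        var method = new DynamicMethod(
            "Add", typeof(int), new[] { typeof(int), typeof(int) });

        ILGenerator il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);  // push the first argument
        il.Emit(OpCodes.Ldarg_1);  // push the second argument
        il.Emit(OpCodes.Add);      // add the two values on the stack
        il.Emit(OpCodes.Ret);      // return the top of the stack

        var add = (Func<int, int, int>)
            method.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(add(2, 3));  // prints 5
    }
}
```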

Q: What do you mean by baseline test?

A: A baseline test is a performance test of un-optimized code, to get a baseline performance value.

 

About the speaker, Mark Farragher

Mark Farragher is a blogger, investor, serial entrepreneur, and the author of 10 successful IT courses in the Udemy marketplace. His IT career spans 2 decades and he has worn many different hats over the years.

Mark started using C# and the .NET framework 15 years ago, and creates online courses that make complex C# programming topics easy to understand and accessible to anyone.

 

.NET Core is here! You've probably heard that it is lightweight in nature and that you can use the tools that make you happy. Most of us are going to let Visual Studio do the heavy lifting, and that's fine, but you can learn much about how things work under the hood if you put the IDE aside and work with .NET Core more directly.

And why stop there? .NET Core is cross platform, so in this webinar, our guest speaker Chris Gomez will do all of the development and testing on Ubuntu Linux.

This session is perfect for .NET veterans who are brand new to .NET Core and want to see what the brave new world looks and feels like. It's okay if you're unfamiliar with Linux but interested in having options available to you. We'll even learn how Microsoft Azure can make the heavy lifting of getting to production much easier.

Watch the webinar and learn:

  • What the brave new world coming with .NET Core looks like
  • The acquisition and use of .NET Core on a Linux VM untouched by a Visual Studio installation
  • How things work under the hood if you work with .NET Core more directly
  • Development tools such as Visual Studio Code and how you can contribute to the .NET Core and tools ecosystem

Who needs Visual Studio? A look at using .NET Core on Linux on Vimeo.

Download slides.

Q & A

Q: Is Visual Studio Code open source?

A: Visual Studio Code is open source.  You can find the project here: https://github.com/Microsoft/vscode.

Q: How did "dotnet restore" know which packages to restore?

A: When you install the dotnet tool following the instructions on the .NET Core download site (https://www.microsoft.com/net/download/core), a default NuGet Config file is created with a default feed.  You can find this in Ubuntu in the ~/.nuget/NuGet folder.  This can be overridden in your projects if you include a NuGet.Config file.  For more information, read about dotnet restore in the documentation at: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-restore

Q: Does ASP.NET Core run on ARM?

A: I haven’t personally investigated ARM yet.  However, you can see the daily builds for .NET Core on various platforms here: https://github.com/dotnet/core-setup#daily-builds. There you will find builds for ARM versions of Windows and Linux.

An interesting source for information is a recent podcast with Scott Hanselman and his guest Adi Avivi.  In the show, they discuss developing RavenDB on .NET Core for the Raspberry PI: https://www.hanselminutes.com/579/ravendb-the-open-source-nosql-database-for-net-with-adi-avivi

Q: Is there a different NuGet website for Core, or is it all in the same place with the .NET packages?

A: NuGet as a product has evolved to support the needs of .NET Framework and .NET Core.  If you use dotnet new or Visual Studio 2017 to create a new project today, the feed location is https://api.nuget.org/v3/index.json for both. 

Q: Can you provide us the commands that you used in this presentation?

A: Unfortunately, it would take a few posts to recap everything used here to download and install .NET Core and to use the dotnet tool for its various features.  We also quickly published a Docker image and I used one published previously to Docker Hub for the Azure App Service on Linux.  Some great resources to start are:

Step by step instructions to install the .NET Core SDK on Ubuntu Linux: https://www.microsoft.com/net/core#linuxubuntu

dotnet command (https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet) - This documents the various features of the dotnet command line tool.

An overview of the process to create your own Docker images with your application: https://hajekj.net/2016/12/25/building-custom-docker-images-for-use-in-app-service-on-linux/

Using your docker image with Azure App Service for Linux: https://docs.microsoft.com/en-us/azure/app-service-web/app-service-linux-using-custom-docker-image

Q: Does it make sense to use IIS to host an ASP.NET Core application?

A: If you are going to run your ASP.NET Core application on a Windows Server, it absolutely makes sense.  In fact, today it is required to run a full-featured web server as a reverse proxy.  .NET Core ships with a lightweight server named Kestrel.  Kestrel has been tuned as a high-performance web server built with .NET.  However, it has not to this point been hardened to be the public-facing server.

In the Linux world, this had already become common.  The idea is that programming stacks ship with small, lightweight, fast servers, but you put an application server in front to guard them and configure access from the outside world.

Please carefully read the section called Set Up A Reverse Proxy in the following documentation discussing how to host ASP.NET Core applications today: https://docs.microsoft.com/en-us/aspnet/core/publishing/

You should also read When to use Kestrel with a reverse proxy in the documentation: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel#when-to-use-kestrel-with-a-reverse-proxy.

Finally, to host on IIS, you will need to learn about the ASP.NET Core Module on IIS: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/aspnet-core-module

Q: I noticed that Dependency Injection and IoC are also only minimally supported.

A: ASP.NET Core supports a minimal dependency injection model without any external dependencies. Some developers prefer minimizing dependencies and don’t need more than this minimal model. However, the system is not closed, and other dependency injection systems may be used.

The documentation discusses the built-in system at length and provides an example of using Autofac to replace it in the document called Introduction to Dependency Injection in ASP.NET Core: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection
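
For reference, registering services with the built-in container looks like this (a sketch; the interface and class names are illustrative):

```csharp
// In Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    // One new instance every time the service is requested.
    services.AddTransient<IEmailSender, SmtpEmailSender>();

    // One instance per HTTP request.
    services.AddScoped<IOrderRepository, OrderRepository>();

    // One instance for the lifetime of the application.
    services.AddSingleton<IPriceCache, PriceCache>();
}
```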

Q: Did you also try VS for Mac? Can I also use it to build apps with .NET Core?

A: I do not have a Mac and have not tried Visual Studio for Mac.  Visual Studio for Mac was made generally available during the BUILD conference.  Among other features, you can begin creating .NET Core and ASP.NET Core applications with the new IDE.

Q: I have some existing C# libraries that I would like to try on .NET Core for possible use on Linux. Any advice?

A: Research .NET Standard.  There is some common confusion about the difference between .NET Standard and .NET Core.  .NET Standard is a specification.  It defines the APIs that are available at a certain release level.

.NET Standard accomplishes this by defining the intersection of APIs available in older platforms (some of which existed before anyone had the idea for .NET Standard) and paves the way for newer .NET platforms (including newer versions of .NET Core, .NET Framework, and perhaps new “.NET’s” no one has thought of yet) to embrace common sets of APIs.

If you’ve written .NET libraries intended to work on multiple platforms, you may be familiar with Portable Class Libraries.  .NET Standard intends to remedy the problems with PCLs and is the present and future of .NET library compatibility. 

This post on the .NET Blog introduces .NET Standard and links to some valuable living information such as a FAQ and the compatibility matrix:  https://blogs.msdn.microsoft.com/dotnet/2016/09/26/introducing-net-standard/

For documentation on .NET Standard, visit: https://docs.microsoft.com/en-us/dotnet/standard/library

Q: All the things we've done so far are already available in .NET Framework in a very elegant and easy manner. Apart from cross-platform support, why should we go for .NET Core?

A: You are asking a very good question that your team must answer for itself.  If you are satisfied with your solution, .NET Framework and support on Windows Server is not ending.  In fact, I would not expect any kind of announcement that .NET Framework will be superseded.  .NET Framework is the desktop platform for .NET on Windows client and server.  When new Windows features ship, a new .NET Framework often follows.

You may want to research .NET Standard and watch the evolution of that space.  Over time, you could responsibly make sure your internal library code supports a .NET Standard version.  At some point, you could consider a trial run of .NET Core and reuse your internal library investment because .NET Standard enables compatibility across .NET platforms (in this example, between .NET Framework and .NET Core).

The other major feature you may want to keep an eye on and test for yourself is performance.  Besides cross-platform support, ASP.NET Core aims to be a high-performance framework.  If you proved that significant and necessary performance gains could come from switching, that could be a good reason for doing so.

For each case I have discussed, there should be no immediate urgency on your part.  Your question implies you are very satisfied with your solution.  I would just keep an eye on .NET Standard and see if it makes sense to eventually consider making your libraries implement the standard for future flexibility.

Q: Do we have NuGet package support in Linux?

A: When you use .NET Core or ASP.NET Core, you are retrieving packages from NuGet even for base class library items such as .NET Core itself.  You can create NuGet packages and either post them on internal feeds or the public NuGet feed and target them in your Linux projects.  For example, commonly used NuGet packages such as JSON.NET implement .NET Standard and are part of the ASP.NET templates for a new project that you might create in Linux.

.NET Core is made up of NuGet packages.  This is a departure from .NET Framework, which was obtained and installed separately.

For more information on NuGet packages and their use in .NET Core, see the article Packages, Metapackages, and Frameworks in the documentation:
https://docs.microsoft.com/en-us/dotnet/core/packages

Q: Is there CMake support for C#? I think I read something about that a while ago...

A: I’m familiar with CMake as a tool, but I have no practical experience with it, so I can’t speak to C# support directly.  However, I can tell you that CMake is used on .NET Core itself.  For example, the CoreCLR repository has CMake as a build prerequisite, so if you wanted to contribute to that repository, you would be using CMake:  https://github.com/dotnet/coreclr

Q: Is there a way to execute all the tests in the solution using VS Code?

A: The best answer I have for this is to learn about Tasks in Visual Studio Code.  Tasks allow you to set up command line tools to be executed within Visual Studio Code.  You can learn more about Tasks here:  https://code.visualstudio.com/Docs/editor/tasks

Next, you would combine Visual Studio Code tasks with dotnet test.  The dotnet test command will execute a test runner against a compiled .dll.  It is like “dotnet run” but for tests.  MSTest, NUnit, and xUnit are all supported test frameworks.  You can learn more about dotnet test here:  https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-test

But don’t stop there.  dotnet-watch is a command extension for the dotnet command.  The command doesn’t include watch by default.  You add a reference to your project and now the command “dotnet watch” will run a command of your choosing when files change in the project.  One of the things you could do is automate unit testing every time a file changes by using all of this together.

Scott Hanselman demonstrated bringing this all together in the following blog post: https://www.hanselman.com/blog/UsingDotnetWatchTestForContinuousTestingWithNETCoreAndXUnitnet.aspx
You can learn more at this documentation article called Developing ASP.NET Core apps using dotnet watch:  https://docs.microsoft.com/en-us/aspnet/core/tutorials/dotnet-watch

Q: Can we run the Package Manager Console for NuGet packages?

A: You can continue to use Package Manager Console in Visual Studio in Windows.  On Linux you will be using the command line tools such as “dotnet add package” to add a package to your project from the command line.  You may also edit the project’s .csproj file directly.  The new format is so streamlined it will not take long to understand.

This article discusses the changeover from .xproj to .csproj as .NET Core has matured: https://docs.microsoft.com/en-us/dotnet/core/tools/project-json-to-csproj

Q: Does .NET Core have the same libraries that already existed in ASP.NET?  For example: does System.Web.Security exist in .NET Core?

A: ASP.NET Core does not implement everything you found in ASP.NET, just as .NET Core does not attempt to implement everything found in .NET Framework.  Examples of items left out are those that were very Windows-specific in nature, items that customers weren’t using, and items that would benefit from a redesign.

For example, when considering ASP.NET MVC, you will find that in ASP.NET Core this is called ASP.NET Core MVC and is a “concept compatible” framework.  You cannot simply lift your existing ASP.NET MVC code and use it immediately, but the idea was that the code in ASP.NET Core MVC would be very familiar to an ASP.NET MVC developer, who would have no problem transitioning to the new framework.

For the record, there is no System.Web.Security namespace today in ASP.NET Core.  The security concepts are presented in this article in the documentation: https://docs.microsoft.com/en-us/aspnet/core/security/

Q: Is it a good idea to invest in containers as a pattern?

A: Containerization was way beyond the scope of this talk.  However, I wanted to point out that .NET Core and ASP.NET Core were “container-ready”.

As a contrast, there was one way to host ASP.NET MVC in ASP.NET 4.6, and that was with IIS.  That automatically means Windows Server.

ASP.NET Core presents many options.  Some are great for your current datacenter, and require little or no change.  Other options, like Docker containers, are good options to explore for the future, especially a move to cloud based container services.

The reasons for using containers are a big topic, but one example of a benefit is that a container represents a complete image of an application.  The bits you run on your development machine once the container is built are identical to the bits running in your datacenter.  You can reduce or eliminate setup instructions and be assured that there isn’t a rogue configuration setting somewhere that makes it work for you but not in production.

 

 

About the speaker, Chris Gomez

Chris Gomez has been developing software professionally since 1993, but the love of coding began in grade school when he developed his first simple games on an IBM PC. His day jobs have included creating entertainment kiosks for theme parks and music retailers, commercial loan analytics, and clinical data exchange systems. Chris is recognized as a Microsoft MVP for Visual Studio Development Tools and Technologies. Today he is focused on delivering distributed systems with .NET and other platforms, but he still finds time to teach kids of all ages to make their first games to ignite their interest in coding.

 

Fluent interfaces are more than just a pretty way to write code. They can prevent errors by ensuring your shared code is used correctly.

Our guest Scott Lilly will walk you through the topic of fluent interfaces and demonstrate how they can save you from needing to create the documentation that we never have time to write anyway.

Watch the webinar and learn:

  • What type of code can be improved with fluent interfaces
  • How to design the "grammar" for a fluent interface
  • How to quickly and easily write the code for your own fluent interfaces

Mistake-Proof Your Code with Fluent Interfaces on Vimeo.

Download code samples.

Q&A

Here are Scott's follow-up answers to the questions. If the question was misunderstood, not answered completely, or if you can think of a different answer, please let him know by leaving a comment at http://scottlilly.com/FIWebinar.

Q: How would you have default values and avoid overly lengthy code? Let's say, for example, that 90% of the time you would include all categories, so you would not want to repeat that each time you use the fluent interface.

A: You could make an instantiating function named "CreateStandardSalesReport()" (for example). That function would call the private constructor and set the default values. Its return type would be an interface that is farther along the chain, past the functions for the values that were automatically set inside CreateStandardSalesReport.

Here is an example of how that might be done: https://gist.github.com/ScottLilly/85091b9f61e66256a69a7909a05337fd

I would change the interfaces so the functions that can be skipped over come first in the "chain". It’s easier to skip over the first five functions (for example) than to create interfaces and functions that let you optionally skip over the first two functions, then the fifth function, then the seventh function.
You might also want to integrate the Builder pattern, as mentioned in one of the questions below.
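
A compact sketch of the idea, with illustrative interface names (see the gist above for Scott’s fuller example):

```csharp
public class Report { /* report contents */ }

public interface ICanSetGrouping { ICanBuildReport GroupBy(string column); }
public interface ICanBuildReport { Report BuildReport(); }

public class ReportGenerator : ICanSetGrouping, ICanBuildReport
{
    private bool _includeAllSalespeople;
    private bool _includeAllCategories;
    private string _groupByColumn;

    private ReportGenerator() { }  // callers must go through a factory

    // Sets the 90%-case defaults, then returns an interface farther
    // along the chain, past the steps it has already filled in.
    public static ICanSetGrouping CreateStandardSalesReport()
    {
        return new ReportGenerator
        {
            _includeAllSalespeople = true,
            _includeAllCategories = true
        };
    }

    public ICanBuildReport GroupBy(string column)
    {
        _groupByColumn = column;
        return this;
    }

    public Report BuildReport() { return new Report(); }
}
```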

Q: How does this compare to the "Curiously Recurring Template Pattern" such as expressed here?

A: That pattern is an interesting way to do method chaining. Although it looks like you will still need to create individual interfaces to enforce any grammar rules.

Q: Can't you just use annotations to require a certain order and ensure parameters contain data?

A: Yes, you could use annotations on an entity to ensure the required properties were set before being able to execute a function. However, another developer could forget to set a property value, and the error would only be detected at runtime.

With the fluent interface pattern, other developers will have the additional help of IntelliSense to lead them through the chain of required functions to call. They could still pass an invalid parameter to a function; however, they would not be able to skip over calling the function.

Q: Can we use the Builder pattern, where we create a different set of Builder classes for different values of the report properties?

A: Yes, you could combine fluent interfaces with the Builder pattern. That would be a good way to handle a situation where you have several common ways to set the values for some of the chaining functions.

For example, if Accounting reports should always call IncludeAllSalesPeople, IncludeAllCategories, ExcludeReturnedOrders, and IncludeUnshippedOrders, you could have one Builder class that calls those functions. You could have a different Builder class, for the Shipping department, that sets the values to only include the categories for items that are physical products (and not downloadable items).

Q: I would like to know about many IDs inside the filter - how do you deal with this?

A: If I were building the ReportGenerator class for a real program, I would probably have a function for passing in a list of salesperson IDs (as if the function were receiving the list of checked items in a datagrid that displayed the salespeople).

Inside that function, I would add each ID from the passed parameter into the private _includedSalespersonIDs variable, if it was not already in the list.

Q: Is this the only way to implement fluent interfaces? If not, what are the other approaches and how are they different from your approach?

A: This is the only way I’ve used. You might be able to do something similar with extension methods that only work for specific datatypes (which would be the interfaces we use for the grammar), but that method seems less clear to me.
If anyone is aware of a different method, please share it.

Q: How would you implement the IncludeSalespersonID function within the report class, or options that are mutually exclusive, when the report is actually being built?

A: When you create your fluent interface’s grammar rules, you should design it to prevent mutually-exclusive functions. For example, in the ReportGenerator fluent interface, you can call “IncludeAllSalespeople”, or you can call “IncludeSalespersonID”. You can call “IncludeUnshippedOrders”, or you can call “ExcludeUnshippedOrders”. You can only call one, or the other – not both.
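
In interface terms, the grammar enforces this because each mutually-exclusive pair of functions returns the next interface in the chain, so once one is called, neither is reachable again (names are illustrative):

```csharp
public interface ICanSetSalespeople
{
    // Mutually exclusive: both jump forward to the categories step,
    // so neither salespeople function can be called a second time.
    ICanSetCategories IncludeAllSalespeople();
    ICanSetCategories IncludeSalespersonID(int salespersonID);
}

public interface ICanSetCategories
{
    ICanBuildReport IncludeAllCategories();
}

public interface ICanBuildReport
{
    void BuildReport();
}
```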

Q: How does exception handling work in a fluent interface? How does it work with optional parameters?

A: Exceptions would be caught at runtime. You could add parameter validation that throws an exception if the passed parameter is invalid. Also, when the ending function is called (BuildReport or SendEmail, for example), it could throw an exception.

The fluent interface can ensure that, when other programmers use your class, they call all the required functions to set the required parameters. However, if you do not include other data validation, they could still set the parameters to invalid values - for example, setting the “to” date before the “from” date when specifying a date range.

Q: Would it be possible to convert BDD scenarios to fluent?

A: This seems like it could be a great idea. I haven’t used SpecFlow, but a fluent interface would almost match with creating FitNesse fixtures.

If you show the users (or business analysts) the concept of method chaining, they should be able to create the grammar for you, using business terms. Then, you could use that to build the required fluent interfaces.

This is definitely an idea I want to think about some more. It may be a great way to deal with correctly understanding complex business requirements.

Q: If IncludeAllCategories is optional, will the group-by/sort-by functions be available to continue on to BuildReport? So, would the grammar include a path to the BuildReport method for all functions that are deemed optional?

A: If you build the grammar to allow that, it should let you create that as a valid “chain”. I think the answer to the first question (see the source code at https://gist.github.com/ScottLilly/85091b9f61e66256a69a7909a05337fd) will show how to do that.
Please let me know if that sample doesn’t answer your question.

Q: How do you handle an exception thrown by a preceding method? How do you stop chaining, or ignore the error if it is not critical and continue chaining?

A: The chaining functions only set the values of the private variables and “return this”, so there should never be an exception - unless you add your own data validation in those functions. In that case, when the code runs and an invalid parameter is passed, it will throw whatever type of exception you specify and stop executing.

You could put logic into the chaining functions that would check if the passed parameter is invalid. If it is, instead of having the function throw an exception, you could have it determine a good (or default) value to use.
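
A sketch of a chaining function with that kind of validation added (class and interface names are illustrative):

```csharp
using System;
using System.Collections.Generic;

public interface ICanAddCategoryID
{
    ICanAddCategoryID IncludeCategoryID(int categoryID);
}

public class ReportGenerator : ICanAddCategoryID
{
    private readonly List<int> _includedCategoryIDs = new List<int>();

    public ICanAddCategoryID IncludeCategoryID(int categoryID)
    {
        // Optional validation: without this check, the function only
        // stores a value and returns 'this', so it can never throw.
        if (categoryID <= 0)
            throw new ArgumentException(
                "Category ID must be positive.", nameof(categoryID));

        if (!_includedCategoryIDs.Contains(categoryID))
            _includedCategoryIDs.Add(categoryID);

        return this;
    }
}
```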

Q: Over time, let's say the ReportGenerator might add new functionality. Is it possible to have more than one path to reach the ending function? And how can we ensure, maybe through unit testing, that the chain leads to an ending function?

A: Yes, it is very possible to have more than one path reach the ending function. The ReportGenerator class has several possible paths (Include, or Exclude the returned and unshipped orders, for example).

Ensuring complete unit testing of all possible chains is interesting. If I were doing that, I’d probably do TDD and use ReSharper (or some other static analysis tool) to show any functions in any interfaces that are not called. Looking at uncalled functions in the interfaces should reveal any missing paths.

When I work on the fluent interface creation tool, automated generation of unit tests for each chain sounds like a good feature to add.

Q: I think you just need to get the result of IncludeSalespersonId inside the foreach and continue from there to avoid the casting.

A: I think this example is a little tricky, because you must either enter at least one salesperson ID (through IncludeSalespersonID) or call IncludeAllSalespeople before you can call one of the category-setting functions.

If you have an example that works, please share it at http://scottlilly.com/FIWebinar

Q: How would you go about base-classing and extending this pattern with generics and inheritance? That is the essence of what I am trying to understand.

A: I have not tried to create a generic version of a fluent interface engine. Every time I’ve built a fluent interface, it has been very specific to one class (such as the ReportGenerator class).

If you needed to create “chains” for a class that were extremely different, you might want to have ReportGenerator as a base class and create child classes, each with its own interfaces that implement the different chains. For example, you might want a ManagementReportGenerator fluent interface for management reports, which might show different information and have very different “chaining” options. You might also have an AccountingReportGenerator for the accountants, which might have a massively different fluent interface. Those would have their own sets of interfaces, but might use some functions from the base ReportGenerator class.

Q: I want to call one interface or a different one according to a value previously set. For example, would this be possible: if IncludeSalespersonID is called, then AllCategories is selectable; and if they select AllSalespersons, then AllCategories is not enabled (to prevent a call from being too big)?

A: Yes, you could do this. When building the grammar spreadsheet, in the row for IncludeAllSalespeople, don’t put a “Y” in the IncludeAllCategories column. Then, your interface for that row might be named ICanSetOneCategoryID. That interface would be defined to only have one function, IncludeCategoryID, and its return type would be ICanAddCategoryIDOrSetReturnedOrdersInclusion.

 

About the speaker, Scott Lilly

Scott Lilly is a C# developer, creator of "Learn C# by Building a Simple RPG", and lean practitioner.
Scott develops line-of-business systems for corporate clients, and publishes videos and tutorials for C# developers.
Scott's blog.