We’re excited to announce that PostSharp 5.0 RTM is out and ready for download on Visual Studio Gallery and NuGet. The long-awaited new version adds support for .NET Core 1.1, Visual Studio 2017, and C# 7.0. It also introduces brand-new features such as the OnInvokeAsync advice, the [Cache] aspect, and the [Command] and [DependencyProperty] aspects for XAML applications. And the Logging feature has been completely revamped: it is now fully customizable and faster than ever.

As in any major version, PostSharp 5.0 is the opportunity for us to do some clean up at the cost of a few breaking changes. We’re also updating our product line, renaming products and regrouping features differently. The most disruptive change will affect PostSharp Express users.

New Platforms

Visual Studio 2017 – We support the new MSBuild project format, side-by-side installations of Visual Studio, and lightweight solution loads, and we’ve achieved significant performance improvements.

C# 7.0 – We tested and fixed all aspects against the new features of C# 7.0, including value-typed tasks and multiple return values.

.NET Core 1.1 – You can now build applications that run on .NET Core 1.1, but you can still only build and debug them on a Windows machine running Visual Studio. Support for .NET Core is a long-term project and you will see gradual improvements in future versions.

.NET Standard 1.3 – Support for .NET Core is achieved through .NET Standard, so you can use PostSharp in your own .NET Standard class libraries.

New Features

Async support in aspects – We’ve closed the gaps in async method support in OnMethodBoundaryAspect, so ReturnValue and FlowBehavior are now properly supported. In MethodInterceptionAspect, we’ve added the OnInvokeAsync advice method to handle async methods.
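Here is a minimal sketch of the new advice, assuming the OnInvokeAsync/ProceedAsync signatures described in the PostSharp 5.0 documentation; the timing logic is illustrative only, not part of the product:

```csharp
using System;
using System.Threading.Tasks;
using PostSharp.Aspects;
using PostSharp.Serialization;

// Sketch: intercept an async method and log how long it took.
[PSerializable]
public class LogDurationAttribute : MethodInterceptionAspect
{
    public override async Task OnInvokeAsync(MethodInterceptionArgs args)
    {
        DateTime start = DateTime.UtcNow;
        await args.ProceedAsync();  // invokes and awaits the intercepted async method
        Console.WriteLine(args.Method.Name + " took " + (DateTime.UtcNow - start));
    }
}
```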

Caching – We've added a brand new ready-made caching framework, which includes not only a caching aspect, but also a cache invalidation aspect. PostSharp Caching 5.0 comes with support for MemoryCache and Redis. See Caching reference documentation for details.
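As a sketch of how the pair can be used, assuming the [Cache] and [InvalidateCache] attributes described in the Caching documentation (the repository class is hypothetical, and a caching backend such as MemoryCache or Redis must be configured at application startup):

```csharp
using System;
using PostSharp.Patterns.Caching;

// Hypothetical repository: reads are cached, writes invalidate the cache.
public class ProductRepository
{
    [Cache]
    public string GetProductName(int id)
    {
        Console.WriteLine("Cache miss; hitting the database.");
        return "Product #" + id;
    }

    // Invalidates the cached GetProductName entry with a matching id.
    [InvalidateCache("GetProductName")]
    public void UpdateProduct(int id)
    {
        // Persist the change here; the cached entry for this id is then evicted.
    }
}
```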

Logging – That's a complete rewrite! The new PostSharp Logging is fully customizable and faster than ever. See Logging reference documentation for details.

XAML – If you're writing XAML applications, you probably wrote a lot of boilerplate code for commands and dependency properties. We've created new aspects to automate that. See XAML reference documentation for details.
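A sketch of what these aspects replace, assuming the [Command] and [DependencyProperty] attributes and the Execute/CanExecute naming convention from the XAML documentation (the window and its members are hypothetical):

```csharp
using System.Windows;
using System.Windows.Input;
using PostSharp.Patterns.Xaml;

public partial class CustomerWindow : Window
{
    // Generates the full DependencyProperty registration behind this plain property.
    [DependencyProperty]
    public string CustomerName { get; set; }

    // Exposes an ICommand wired by convention to ExecuteSave/CanExecuteSave below.
    [Command]
    public ICommand SaveCommand { get; private set; }

    private bool CanExecuteSave()
    {
        return !string.IsNullOrEmpty(this.CustomerName);
    }

    private void ExecuteSave()
    {
        // Persist the customer here.
    }
}
```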

Code Contracts – It is now possible to add code contracts to return values and out or ref parameters. The values are validated when the method succeeds.
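A minimal sketch of what this enables, using the [Required] contract from PostSharp.Patterns.Contracts (the service class and members are hypothetical):

```csharp
using PostSharp.Patterns.Contracts;

public class DirectoryService
{
    // The return value is validated (non-null, non-empty) when the method succeeds.
    [return: Required]
    public string FindEmail([Required] string userName)
    {
        return userName + "@example.com";
    }

    // The out parameter is validated on successful completion.
    public bool TryFindEmail(string userName, [Required] out string email)
    {
        email = userName + "@example.com";
        return true;
    }
}
```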

Architecture Framework – We’re adding NamingConventionAttribute, ParameterValueConstraint, and ReferenceConstraint.

Deprecated Platforms

Windows Phone, WinRT, Silverlight – These platforms never gained traction among PostSharp users, and we will no longer support them.

Portable Class Libraries – An evil that’s no longer necessary. We’re glad to deprecate them too.

Xamarin – We still believe in Xamarin but had to make choices to reach the 5.0 finish line. We chose to suspend support for Xamarin. Our intention is to get back to work on this platform, but to support it through .NET Standard.

Changes in the Product Line and Licensing

In PostSharp 5.0, we’re reshaping our product line:

  • PostSharp Professional becomes PostSharp Framework and now includes everything you need to automate the implementation or validation of your own patterns, including the Architecture Framework, which used to be part of PostSharp Ultimate. However, PostSharp Diagnostics is removed. PostSharp Professional customers will be offered a free PostSharp Diagnostics subscription for the remainder of the PostSharp Professional subscription they have already paid for. Please contact our sales team if you’re interested. Support for the license server is also removed; please contact us if you’re impacted.
  • PostSharp Ultimate now has a big brother named PostSharp Enterprise. PostSharp Ultimate will still be an “all you can eat” version: the difference is that PostSharp Enterprise will address the typical non-technical requirements of large companies, namely custom license agreement, on-premises license server, and blueprint source code license. Please contact us if you have a PostSharp Ultimate license and are using the license server.
  • PostSharp Model becomes PostSharp XAML, the must-have companion to your XAML development. Besides NotifyPropertyChanged, undo/redo and code contracts, we’re adding command and dependency property aspects.
  • PostSharp Diagnostics now has a free edition named PostSharp Diagnostics Developer Edition and no longer has any project size limitation. It means you can now add logging to your whole solution for free. There is however a time limitation: your applications will stop logging one day after they have been built. If you need logging, you have to rebuild them. That’s why we call it the Developer Edition.
  • PostSharp Express is renamed PostSharp Essentials. PostSharp Essentials is a free but limited edition of PostSharp. You can use all the features of PostSharp Ultimate, but the number of enhanced classes is limited to 10 per project or 50 per solution, as in PostSharp 4.3. Additionally, it includes the time-limited PostSharp Diagnostics Developer Edition. We have removed the licensing mode that enabled backward compatibility with PostSharp 2.0-4.2.

The next table summarizes the licensing changes:

New Product             Previous Product          Changes
PostSharp Enterprise    PostSharp Ultimate        Tiered licensing (min. 50 licenses); more enterprise licensing options; source code blueprint license added.
PostSharp Ultimate      PostSharp Ultimate        Support for the license server removed.
PostSharp Framework     PostSharp Professional    PostSharp Diagnostics removed; PostSharp Architecture Framework added.
PostSharp Essentials    PostSharp Express         Backward-compatibility mode with PostSharp 4.2 licensing removed; PostSharp Diagnostics Developer Edition added.
PostSharp Diagnostics   PostSharp Diagnostics     Tiered licensing; totally revamped product; code contracts added.
PostSharp XAML          PostSharp Model           Command and dependency property aspects added.
PostSharp Threading     PostSharp Threading      Code contracts added.

So you’re now asking money for a feature that used to be free?

We hate Orwellian language just as much as you do. Yes, we’re removing free features from PostSharp Express. We have decided to move from a licensing concept based on feature limitations to one based on scale limitations. We took the first step in August 2016 with PostSharp 4.3, but since it was a minor release, we did not want to break backward compatibility, so we still included (but did not document) a backward-compatible licensing mode in PostSharp 4.3. We’re now removing this mode. PostSharp 5.0 works exactly as PostSharp 4.3 was advertised to work, minus the backward compatibility with PostSharp 2.0-4.2.

What if you’ve been using PostSharp Express for a long time and you don’t fit within the limitations of PostSharp Essentials? I understand you wish to keep using the same features for free in PostSharp 5.0 and may feel pushed into a corner by the new licensing model. You have several options:

  1. Do not upgrade to PostSharp 5.0. Remember that all PostSharp licenses, except evaluation licenses, are perpetual. We are not withdrawing your right to use any prior version of PostSharp. Staying with PostSharp 4.3 may be a perfectly viable option, but remember we will not implement support for new versions of frameworks, languages, or Visual Studio.
  2. Remove PostSharp from your project: use a competitor product or rewrite the boilerplate manually.
  3. Purchase a commercial edition of PostSharp 5.0.

I’m sure there are going to be some emotions out there, and we’re likely to see some angry reactions on social media. But I’m also convinced the best service we can render to the community of PostSharp users is to build a healthy, forward-looking, prosperous company, which means discontinuing business models that have proved unsuccessful. Our decision will perhaps be unpopular, but it is a healthy, data-driven one.

Summary

PostSharp 5.0 is a major release, adding support for .NET Core, Visual Studio 2017, C# 7.0, and introducing exciting new features such as a fully new logging framework, much improved support for async methods, a caching aspect, command and dependency property aspects, and much more.

We couldn’t have implemented all these new features without making a few breaking changes, which I suggest you double-check before you upgrade.

PostSharp 5.0 is also the opportunity for us to reshape our product line. We’ve renamed our products, sharpened their positioning, and moved the boundaries between them. Most commercial customers will not be affected, but if you think you are losing functionalities because of these changes, please contact us to find a solution.

Happy PostSharping!

-gael

Do you know how to write very fast C# code? Here's a sobering fact: many schools and universities only teach how to write valid C# code, and not how to write fast and efficient code.

Did you know that adding strings together inefficiently can slow down your code by a factor of more than two hundred? And 'swallowing' exceptions will make your code run a thousand times slower than normal.

Slow C# code is a big problem. Slow code on the web will not scale to thousands of users. Slow code will make your Unity game unplayable. Slow code will have your mobile apps catching dust in the app store.

In this session, our guest speaker Mark Farragher will show you many common performance bottlenecks and how to fix them. We’ll introduce each problem, write a small test program to measure the baseline performance, and then learn how you can radically speed up the code.

Watch the webinar and learn:

  • The low-hanging fruit: basic optimizations
  • How to read compiled MSIL code
  • The struct versus class debate
  • Optimize for the garbage collector
  • Writing directly into memory with unsafe pointers
  • Use dynamic delegates to dramatically speed up reflection

 

How to Write Very Fast C# Code on Vimeo.

For source code of the examples, please email Mark at mark@mdfarragher.com

Q & A

Q: Why are 9% of the numbers exceptions?

A: Several viewers have pointed out that the 9% number I mention in the webinar is incorrect. Here is the corrected calculation:

I’m building numbers from individual digits. There are 11 possible characters: the digits 0-9 and the letter ‘X’. So the chance of a single character being invalid is 1/11. A number consists of 5 characters, so the chance that a number contains at least one invalid character is 1 − (10/11)^5 ≈ 38% (the quick estimate of 5 × 1/11 ≈ 45% slightly overstates it, because it double-counts numbers with several invalid characters). So the loop in my code throws an exception for roughly 38% of the numbers.
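A quick sketch (not from the webinar) that checks this arithmetic:

```csharp
using System;

// Verify the exception-rate arithmetic: a number is valid only if all
// 5 of its characters are valid, each with probability 10/11.
class ExceptionRate
{
    static void Main()
    {
        double pValid = Math.Pow(10.0 / 11.0, 5);
        Console.WriteLine("P(invalid number) = {0:P1}", 1 - pValid); // ≈ 37.9%
        Console.WriteLine("Union-bound estimate: {0:P1}", 5.0 / 11.0); // ≈ 45.5%
    }
}
```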

Q: How to get mastery in reflection and dynamic code?

A: By practicing a lot. Write lots of code that uses reflection and dynamic emitting. Experiment, measure performance, see how far you can go optimizing your code. Play around and discover what works and what doesn’t. Plus: read lots of blog posts and articles.

Q: Why would it not be beneficial to use structs for all simple business objects? Is there a point of degradation or some limitation over a class? Is a struct usable with Entity Framework to represent database objects?

A: The .NET runtime makes certain assumptions about structs and classes: structs are expected to be very small (in terms of memory) and short-lived, while classes can be small or large and long-lived. Simply replacing all classes with structs in your code is dangerous because you will go against these assumptions. For example, if you change a long-lived object to a struct, it will get boxed on the heap and your code will be even slower than when using classes. A struct also gets copied on every method call, so passing a very large struct to many different methods will slow down your code a lot.

The rule of thumb here is to always start with classes, and only use structs when it makes sense to do so.

The Entity Framework does not support structs.

Q: Can we use the DynamicMethod trick on AOT platforms (via Mono)?

A: Nope. The ILGenerator class is missing, so you can’t emit your own CIL code into the dynamic method. Makes sense, right? It couldn’t possibly work with AOT.

Q: CIL stuff is really interesting.  Perhaps worth mentioning that string interpolation and string.Format uses StringBuilder so you don't always need to explicitly use StringBuilder.  Also, StringBuilder has a little overhead so for <4 strings something like str1 + str2 + str3 is faster - I think

A: Correct! String interpolation ($"yadda {yadda}") compiles to a String.Format call, so it’s exactly the same thing. I always use interpolation because it’s so much easier to type.

You’re also spot-on with the string versus StringBuilder comment. A StringBuilder has some initialization overhead, so it is actually slower for a small number of additions. The cutoff point is at 3 additions: for zero to three additions the plain string is faster, and for four or more the StringBuilder is faster. Past that, the two diverge very quickly; see the benchmark sketch below.

In my logging and diagnostic code, I always use strings (string interpolation) because I usually stay below the 3-addition limit, and it makes my code so much easier to read.
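Here is a benchmark sketch (not from the webinar) you can adapt to verify the cutoff; absolute numbers vary by machine and runtime, so measure on your own hardware:

```csharp
using System;
using System.Diagnostics;
using System.Text;

// Time N string additions done with + versus with StringBuilder.
class ConcatVsBuilder
{
    static void Main()
    {
        const int iterations = 100000;
        foreach (int additions in new[] { 3, 10, 100 })
        {
            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                string s = "";
                for (int j = 0; j < additions; j++) s += "x"; // allocates a new string each time
            }
            long concatMs = sw.ElapsedMilliseconds;

            sw.Restart();
            for (int i = 0; i < iterations; i++)
            {
                StringBuilder sb = new StringBuilder();
                for (int j = 0; j < additions; j++) sb.Append("x"); // appends into a buffer
                sb.ToString();
            }
            Console.WriteLine(additions + " additions: concat " + concatMs
                + " ms, StringBuilder " + sw.ElapsedMilliseconds + " ms");
        }
    }
}
```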

Q: For exceptions, what if TryParse is not available? What needs to be done for user-defined types instead of primitives?

A: You need to do the same thing TryParse does internally – scan the input data first, and only start parsing if the scan says it’s okay. Also make sure you report a parsing failure through a return value (i.e. a bool) instead of throwing a FormatException.

An easy way to scan is by using a precompiled regular expression to make sure the input data doesn’t contain any invalid characters. Regular expressions are super-fast.
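A sketch of that pattern for a user-defined type; PartNumber and its "AB-1234" format are hypothetical:

```csharp
using System.Text.RegularExpressions;

// TryParse pattern: scan with a precompiled regex, report failure as a bool.
public sealed class PartNumber
{
    private static readonly Regex Pattern =
        new Regex(@"^[A-Z]{2}-\d{4}$", RegexOptions.Compiled);

    public string Value { get; private set; }
    private PartNumber(string value) { Value = value; }

    public static bool TryParse(string input, out PartNumber result)
    {
        if (input != null && Pattern.IsMatch(input))
        {
            result = new PartNumber(input);
            return true;   // parsed without ever throwing
        }
        result = null;
        return false;      // failure reported as a return value, not an exception
    }
}
```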

Q: Any comments on the differences between copying arrays, lists, C# hash tables, etc. on the heap?

A: In terms of memory layout, there’s not that much difference between an array, a list, or a hashtable. All three use arrays internally to hold the data. A hashtable is optimized for key/value lookup, whereas list and array are intended for indexed access.

They all have a CopyTo method that attempts to block-copy all data in one go. If you’re storing value types, you will see great performance for all three.
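A compact sketch of the difference between the two copying styles:

```csharp
using System;

// Element-by-element copy versus a single block copy with CopyTo.
class CopyDemo
{
    static void Main()
    {
        int[] source = new int[1000000];
        int[] byLoop = new int[source.Length];
        int[] byBlock = new int[source.Length];

        for (int i = 0; i < source.Length; i++)  // one bounds-checked assignment
            byLoop[i] = source[i];               // per element

        source.CopyTo(byBlock, 0);               // hands the whole copy off in one call

        Console.WriteLine(byBlock.Length);
    }
}
```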

Q: Are you going to review LINQ / Parallel performance someday?

A: That’s a great idea! Thanks for the suggestion. I have an existing course already that scratches the surface of LINQ versus PLINQ performance, but I’d love to go deeper.

Q: Nice talk. BTW, StringBuilder may not be the fastest; it depends on the size, etc. You also have to account for GC allocations. The best tool for that is BenchmarkDotNet with the memory diagnoser on Windows! It is a fantastic tool. General rule of thumb: whatever you do, you have to measure in order to see performance benefits.

A: Thanks for the suggestion. I’ll check out BenchmarkDotNet. And you’re right about the rule of thumb – you always have to do actual measurements, you can’t rely on just theoretical knowledge to optimize your code.

Q: Just a question on array.CopyTo(...), where Mark said the memory copy was done out of process by the OS (in C libraries, guessing "memcpy"). In the profiling application during the webcast, array.CopyTo(...) executed in 32 ms, whereas the copy via index and loops took >300 ms; in other words, array.CopyTo is an order of magnitude faster with OSX as the OS. Is the 10-fold difference about the same with .NET on Windows? Different OS, different ratio?

A: Yes, the ratio is roughly the same. The speed of a memory copy is more or less the same for all operating systems, whereas you might see small differences in 1-dimensional array performance. I’ve noticed that .NET Core tends to be slightly faster than Mono in handling arrays, because it’s much better optimized.

Q: I measured. GetType() is 171 ms vs. typeof() at 6 ms in a test of a million iterations.

A: That’s because typeof() is processed at compile-time, whereas GetType() is processed at runtime.
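A compact illustration of that difference (a sketch, not the webinar’s benchmark):

```csharp
using System;

// typeof is resolved at compile time; GetType is a runtime call on the instance.
class TypeOfDemo
{
    static void Main()
    {
        string s = "hello";
        Type t1 = typeof(string); // baked into the IL as a metadata token
        Type t2 = s.GetType();    // resolved at runtime from the object instance
        Console.WriteLine(t1 == t2); // True
    }
}
```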

Q: How do you keep yourself up to date on the latest and greatest technology?

A: I read lots of technical blogs, and when I’m preparing for a new course or webinar, I do a lot of research and write small test programs to experiment. And I probably have a talent for learning new stuff very quickly.

Q: Would you use some form of multi-dimensional converter to convert a single dimensional array back to a multi-dimension array or would you take another approach?

A: It depends on the use case. I usually just wrap a 1-dimensional array so from the outside it looks like the original multi-dimensional array. The disadvantage of converting the other way is that you’re slowing the code down again, so I am a bit hesitant to use any kind of converter.

Q: Do you have any advice for Parallel.ForEach?

A: Yeah, use it! Parallel.ForEach is great for parallelizing regular for or foreach loops. It is my first step in parallelizing code, and quite often it’s all I need to do.

Two years ago, I wrote an app that processes SharePoint documents. I had a for-loop in my code that would process each document individually. I parallelized the code simply by replacing my for-loop with a Parallel.ForEach. This drop-in replacement to make code multi-threaded is really nice.

Q: Have you tried these performance tests on .NET Core?

A: Yes. Everything I show in the webinar is running on .NET Core 1.1.

Q: Do foreach loops have a performance advantage over for loops in cases where the collection is an enumeration or a function that uses yield return?

A: No. Enumerations and methods with yield return cannot be indexed and don’t have a well-defined upper limit, so there’s no benefit to using a for loop with them. If you did use a for loop, you’d have to call MoveNext() and Current manually, which is exactly the code the compiler produces when you use foreach, as the sketch below shows.
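The sketch (with a hypothetical Numbers iterator) — this is what "foreach (int x in Numbers())" lowers to:

```csharp
using System.Collections.Generic;

class ForeachLowering
{
    static IEnumerable<int> Numbers()
    {
        yield return 1;
        yield return 2;
    }

    static void Main()
    {
        // Manually driving the enumerator: the same code the compiler
        // generates for a foreach loop, so a for loop buys nothing here.
        using (IEnumerator<int> e = Numbers().GetEnumerator())
        {
            while (e.MoveNext())
            {
                int x = e.Current;
                System.Console.WriteLine(x);
            }
        }
    }
}
```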

Q: Is there any significant difference between pre- and post-increment operations? In C++ I am accustomed to always preferring ++i over i++, but I rarely see this done by C# developers.

A: It works exactly the same as in C++; the difference between the two is the return value: i before the increment or i after it.

Q: Do the performance benefits you described for structs vs. classes get lost when you compare passing classes vs. structs to other functions (excluding cases where structs are passed by reference)?

A: Passing structs to functions will slow down your code, because structs are copied by value. For every method call the entire struct will be cloned in memory. When you’re using classes, only the reference to the object instance is copied into the method.

So yes, for large structs with lots of fields you’ll see a measurable slowdown when doing lots of method calls with struct parameters.
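A sketch of that copy cost (LargeStruct is hypothetical):

```csharp
using System;

// A 64-byte struct is cloned on every by-value call; passing by ref
// copies only a reference-sized address.
struct LargeStruct
{
    public long A, B, C, D, E, F, G, H; // 8 × 8 bytes = 64 bytes per copy
}

class StructCopyDemo
{
    static long SumByValue(LargeStruct s) { return s.A + s.H; } // copies all 64 bytes
    static long SumByRef(ref LargeStruct s) { return s.A + s.H; } // copies nothing

    static void Main()
    {
        LargeStruct data = new LargeStruct { A = 1, H = 2 };
        Console.WriteLine(SumByValue(data));
        Console.WriteLine(SumByRef(ref data));
    }
}
```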

Q: What is the difference between the heap and the stack?

A: The stack is a highly optimized block of memory intended for data with a very short lifetime, just the duration of a single method call. Stack memory is claimed when you enter a method and reclaimed when you exit it. The stack is also fairly small: the default stack size in .NET is about 1 MB per thread. It’s optimized for a manageable number of small objects (thousands, not millions) with a very short lifetime.

The heap is a very large block of memory (multiple GB) optimized for long-term storage. You can easily put millions of objects on the heap, small or large. The heap promotes long-lived data into older generations, and a separate process called the Garbage Collector cleans up objects that are no longer in use.

As a rule of thumb, the stack is slightly faster than the heap. It can also initialize new data very quickly by writing zeroes directly to memory (whereas the heap calls the constructor of each object individually). The disadvantages of the stack are that it’s relatively small and that it assumes your data is short-lived. The stack can also slow down if you have a very deep chain of nested method calls.

Q: Why and when do we use reflection?

A: We use reflection when we want to dynamically access object fields or call object methods. By ‘dynamically’ I mean based on data that is not known at compile time, for example when we store database configuration data in a configuration file. The configuration file might say we need an OracleConnection or a SQLiteConnection. With reflection, we can read this configuration field and then dynamically instantiate the correct object.

Basically, any time an object type, property, field or method appears somewhere in text format, we’re going to need reflection to perform instantiation, access fields and properties, or execute a method call.
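A minimal sketch of that configuration-driven pattern. The assembly-qualified type name would normally come from a configuration file; the hard-coded string here stands in for that entry and assumes .NET Framework assembly names:

```csharp
using System;
using System.Data;

class ReflectionDemo
{
    static void Main()
    {
        // e.g. read from configuration instead of hard-coding:
        string typeName = "System.Data.SqlClient.SqlConnection, System.Data";

        Type connectionType = Type.GetType(typeName, true); // throws if not found
        IDbConnection connection =
            (IDbConnection)Activator.CreateInstance(connectionType);

        Console.WriteLine(connection.GetType().Name); // SqlConnection
    }
}
```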

Q: What does emit mean?

A: Emit means injecting a single CIL instruction into a dynamic method.
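For example, here is a sketch that emits a three-instruction field getter into a DynamicMethod and turns it into a fast delegate (Person and GetName are hypothetical):

```csharp
using System;
using System.Reflection;
using System.Reflection.Emit;

public class Person { public string Name = "Ada"; }

public static class EmitDemo
{
    public static void Main()
    {
        // Build a delegate equivalent to: (Person p) => p.Name
        FieldInfo field = typeof(Person).GetField("Name");
        DynamicMethod dm = new DynamicMethod(
            "GetName", typeof(string), new[] { typeof(Person) });

        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);      // load the Person argument
        il.Emit(OpCodes.Ldfld, field); // load its Name field
        il.Emit(OpCodes.Ret);          // return it

        Func<Person, string> getter =
            (Func<Person, string>)dm.CreateDelegate(typeof(Func<Person, string>));
        Console.WriteLine(getter(new Person())); // "Ada" — no reflection cost per call
    }
}
```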

Q: What do you mean by baseline test?

A: A baseline test is a performance test of un-optimized code, to get a baseline performance value.

 

About the speaker, Mark Farragher

Mark Farragher

Mark Farragher is a blogger, investor, serial entrepreneur, and the author of 10 successful IT courses in the Udemy marketplace. His IT career spans 2 decades and he has worn many different hats over the years.

Mark started using C# and the .NET framework 15 years ago, and creates online courses that make complex C# programming topics easy to understand and accessible to anyone.

 

.NET Core is here! You've probably heard that it is lightweight in nature and that you can use the tools that make you happy. Most of us are going to let Visual Studio do the heavy lifting, and that's fine, but you can learn much about how things work under the hood if you put the IDE aside and work with .NET Core more directly.

And why stop there? .NET Core is cross platform, so in this webinar, our guest speaker Chris Gomez will do all of the development and testing on Ubuntu Linux.

This session is perfect for .NET veterans who are brand new to .NET Core and want to see what the brave new world looks and feels like. It's okay if you're unfamiliar with Linux but are interested in having options available to you. We'll even learn how Microsoft Azure can make the heavy lifting of getting to production much easier.

Watch the webinar and learn:

  • What the brave new world coming with .NET Core looks like
  • The acquisition and use of .NET Core on a Linux VM untouched by a Visual Studio installation
  • How things work under the hood if you work with .NET Core more directly
  • Development tools such as Visual Studio Code and how you can contribute to the .NET Core and tools ecosystem

Who needs Visual Studio? A look at using .NET Core on Linux on Vimeo.

Download slides.

Q & A

Q: Is Visual Studio code open source?

A: Visual Studio Code is open source.  You can find the project here: https://github.com/Microsoft/vscode.

Q: How did "dotnet restore" know which packages to restore?

A: When you install the dotnet tool following the instructions on the .NET Core download site (https://www.microsoft.com/net/download/core), a default NuGet Config file is created with a default feed.  You can find this in Ubuntu in the ~/.nuget/NuGet folder.  This can be overridden in your projects if you include a NuGet.Config file.  For more information, read about dotnet restore in the documentation at: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-restore

Q: Does ASP.NET Core run on ARM?

A: I haven’t personally investigated ARM yet.  However, you can see the daily builds of .NET Core for various platforms here: https://github.com/dotnet/core-setup#daily-builds. There you will find builds for ARM versions of Windows and Linux.

An interesting source for information is a recent podcast with Scott Hanselman and his guest Adi Avivi.  In the show, they discuss developing RavenDB on .NET Core for the Raspberry PI: https://www.hanselminutes.com/579/ravendb-the-open-source-nosql-database-for-net-with-adi-avivi

Q: Is there a different NuGet website for .NET Core, or is it all in the same place as .NET Framework packages?

A: NuGet as a product has evolved to support the needs of .NET Framework and .NET Core.  If you use dotnet new or Visual Studio 2017 to create a new project today, the feed location is https://api.nuget.org/v3/index.json for both. 

Q: Can you provide us the commands that you used in this presentation?

A: Unfortunately, it would take a few posts to recap everything used here to download and install .NET Core and to use the dotnet tool for its various features.  We also quickly published a Docker image and I used one published previously to Docker Hub for the Azure App Service on Linux.  Some great resources to start are:

Step by step instructions to install the .NET Core SDK on Ubuntu Linux: https://www.microsoft.com/net/core#linuxubuntu

dotnet command (https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet) - This documents the various features of the dotnet command line tool.

An overview of the process to create your own Docker images with your application: https://hajekj.net/2016/12/25/building-custom-docker-images-for-use-in-app-service-on-linux/

Using your docker image with Azure App Service for Linux: https://docs.microsoft.com/en-us/azure/app-service-web/app-service-linux-using-custom-docker-image

Q: Does it make sense to use IIS to host an ASP.NET Core application?

A: If you are going to run your ASP.NET Core application on a Windows Server, it absolutely makes sense.  In fact, today it is required to run a full-featured web server as a reverse proxy.  .NET Core ships with a lightweight server named Kestrel.  Kestrel has been tuned as a high-performance web server built with .NET.  However, it has not yet been hardened to be a public-facing server.

In the Linux world, this had already become common.  The idea was that programming stacks would ship with small, lightweight, fast servers, and that you would put a full web server in front of them to guard them and control access from the outside world.

Please carefully read the section called Set Up A Reverse Proxy in the following documentation discussing how to host ASP.NET Core applications today: https://docs.microsoft.com/en-us/aspnet/core/publishing/

You should also read When to use Kestrel with a reverse proxy in the documentation: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel#when-to-use-kestrel-with-a-reverse-proxy.

Finally, to host on IIS, you will need to learn about the ASP.NET Core Module on IIS: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/aspnet-core-module
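As a sketch of the hosting model being described (ASP.NET Core 1.x style; the Startup class is assumed to exist in your application):

```csharp
using Microsoft.AspNetCore.Hosting;

// Kestrel serves the application; a reverse proxy (IIS, nginx, Apache)
// sits in front of it as the public-facing server.
public class Program
{
    public static void Main(string[] args)
    {
        IWebHost host = new WebHostBuilder()
            .UseKestrel()            // the lightweight in-process web server
            .UseIISIntegration()     // enables the IIS reverse-proxy handshake; no-op elsewhere
            .UseStartup<Startup>()   // Startup is your application's configuration class
            .Build();

        host.Run();
    }
}
```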

Q: I noticed that Dependency Injection and IoC are also only minimally supported.

A: ASP.NET Core supports a minimal dependency injection model without any external dependencies. Some developers prefer minimizing dependencies and don’t need more than this minimal model. However, the system is not closed, and other dependency injection systems may be used.

The documentation discusses the built-in system at length and provides an example of using Autofac to replace it in the document called Introduction to Dependency Injection in ASP.NET Core: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection
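A minimal sketch of the built-in container used outside the ASP.NET plumbing; IGreeter/Greeter are hypothetical, while ServiceCollection and AddTransient are the real API:

```csharp
using Microsoft.Extensions.DependencyInjection;

public interface IGreeter { string Greet(string name); }

public class Greeter : IGreeter
{
    public string Greet(string name) { return "Hello, " + name + "!"; }
}

public class Demo
{
    public static void Main()
    {
        ServiceCollection services = new ServiceCollection();
        services.AddTransient<IGreeter, Greeter>(); // new instance per resolution

        ServiceProvider provider = services.BuildServiceProvider();
        System.Console.WriteLine(
            provider.GetRequiredService<IGreeter>().Greet("world"));
    }
}
```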

Q: Did you also try VS for Mac? Can I also use it to build apps with .NET Core?

A: I do not have a Mac and have not tried Visual Studio for Mac.  Visual Studio for Mac was made generally available during the BUILD conference.  Among other features, you can begin creating .NET Core and ASP.NET Core applications with the new IDE.

Q: I have some existing C# libraries that I would like to try on .NET Core for possible use on Linux. Any advice?

A: Research .NET Standard.  There is some common confusion about the difference between .NET Standard and .NET Core.  .NET Standard is a specification.  It defines the APIs that are available at a certain release level.

.NET Standard accomplishes this by defining the intersection of APIs available on older platforms (some of which existed before anyone had the idea for .NET Standard) and paves the way for newer .NET platforms (including newer versions of .NET Core, .NET Framework, and perhaps new ".NETs" no one has thought of yet) to embrace common sets of APIs.

If you’ve written .NET libraries intended to work on multiple platforms, you may be familiar with Portable Class Libraries.  .NET Standard intends to remedy the problems with PCLs and is the present and future of .NET library compatibility. 

This post on the .NET Blog introduces .NET Standard and links to some valuable living information such as a FAQ and the compatibility matrix:  https://blogs.msdn.microsoft.com/dotnet/2016/09/26/introducing-net-standard/

For documentation on .NET Standard, visit: https://docs.microsoft.com/en-us/dotnet/standard/library

Q: All the things we've done so far are already available in .NET Framework in a very elegant and easy manner. Apart from the Cross platform, why should we go for .NET Core?

A: You are asking a very good question that your team must answer for itself.  If you are satisfied with your solution, .NET Framework and its support on Windows Server are not ending.  In fact, I would not expect any kind of announcement that .NET Framework will be superseded.  .NET Framework is the desktop platform for .NET on Windows client and server.  When new Windows features ship, a new .NET Framework often follows.

You may want to research .NET Standard and watch the evolution of that space.  Over time, you could responsibly make sure your internal library code supports a .NET Standard version.  At some point, you could consider a trial run of .NET Core and reuse your internal library investment because .NET Standard enables compatibility across .NET platforms (in this example, between .NET Framework and .NET Core).

The other major feature you may want to keep an eye on and test for yourself is performance.  Besides cross-platform support, ASP.NET Core aims to be a high-performance framework.  If you prove that significant and necessary performance gains would come from switching, that could be a good reason to do so.

For each case I have discussed, there should be no immediate urgency on your part.  Your question implies you are very satisfied with your solution.  I would just keep an eye on .NET Standard and see if it makes sense to eventually consider making your libraries implement the standard for future flexibility.

Q: Do we have NuGet package support on Linux?

A: When you use .NET Core or ASP.NET Core, you are retrieving packages from NuGet even for base class library items such as .NET Core itself.  You can create NuGet packages and either post them on internal feeds or the public NuGet feed, and target them in your Linux projects.  For example, commonly used NuGet packages such as JSON.NET implement .NET Standard and are part of the ASP.NET Core templates for a new project that you might create on Linux.

.NET Core is made up of NuGet packages.  This is a departure from .NET Framework, which was obtained and installed separately.

For more information on NuGet packages and their use in .NET Core, see the article Packages, Metapackages, and Frameworks in the documentation:
https://docs.microsoft.com/en-us/dotnet/core/packages

Q: Is there CMake support for C#? I think I read something about that a while ago...

A: I’m sorry.  I’m pretty unfamiliar with using CMake.  I am familiar with the tool, but have no practical experience with it.  However, I can tell you that CMake is used on .NET Core itself.  For example, the CoreCLR for .NET Core has CMake as a build prerequisite, so if you wanted to contribute to this repository, you would be using CMake:  https://github.com/dotnet/coreclr

Q: Is there a way to execute all the TESTS on the solution using VS Code?

A: The best answer I have for this is to learn about Tasks in Visual Studio Code.  Tasks allow you to set up command-line tools to be executed within Visual Studio Code.  You can learn more about Tasks here:  https://code.visualstudio.com/Docs/editor/tasks

Next, you would combine Visual Studio Code tasks with dotnet test.  The dotnet test command will execute a test runner against a compiled .dll.  It is like “dotnet run” but for tests.  MSTest, NUnit, and xUnit are all supported test frameworks.  You can learn more about dotnet test here:  https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-test

But don’t stop there.  dotnet-watch is a command extension for the dotnet command; watch is not included by default.  You add a reference to your project, and then “dotnet watch” will run a command of your choosing whenever files in the project change.  One thing you could do with all of this together is automate unit testing every time a file changes.

Scott Hanselman demonstrated bringing this all together in the following blog post: https://www.hanselman.com/blog/UsingDotnetWatchTestForContinuousTestingWithNETCoreAndXUnitnet.aspx
You can learn more at this documentation article called Developing ASP.NET Core apps using dotnet watch:  https://docs.microsoft.com/en-us/aspnet/core/tutorials/dotnet-watch

Q: Can we run Package Manager Console for the Nuget packages?

A: You can continue to use the Package Manager Console in Visual Studio on Windows.  On Linux, you will use command-line tools such as “dotnet add package” to add a package to your project.  You may also edit the project’s .csproj file directly; the new format is so streamlined that it will not take long to understand.

This article discusses the changeover from .xproj to .csproj as .NET Core has matured: https://docs.microsoft.com/en-us/dotnet/core/tools/project-json-to-csproj

Q: Does .NET Core have the same libraries that already existed in ASP.NET?  For example: does System.Web.Security exist in .NET Core?

A: ASP.NET Core does not implement everything you found in ASP.NET, just as .NET Core does not attempt to implement everything found in .NET Framework.  Examples of items left out are those that were very Windows-specific, that customers weren’t using, or that would benefit from a redesign.

For example, consider ASP.NET MVC: in ASP.NET Core it is called ASP.NET Core MVC and is a “concept-compatible” framework.  You cannot simply lift your ASP.NET MVC 5 code and use it immediately, but the idea was that the code in ASP.NET Core MVC would be very familiar to an ASP.NET MVC developer, who would have no problem transitioning to the new framework.

For the record, there is no System.Web.Security namespace today in ASP.NET Core.  The security concepts are presented in this article in the documentation: https://docs.microsoft.com/en-us/aspnet/core/security/

Q: Is it a good idea to invest in containers as a pattern?

A: Containerization was well beyond the scope of this talk.  However, I wanted to point out that .NET Core and ASP.NET Core are “container-ready”.

As a contrast, there was one way to host ASP.NET MVC in ASP.NET 4.6, and that was with IIS.  That automatically means Windows Server.

ASP.NET Core presents many options.  Some are great for your current datacenter, and require little or no change.  Other options, like Docker containers, are good options to explore for the future, especially a move to cloud based container services.

The reasons for using containers are a big topic, but one example of a benefit is that a container represents a complete image of an application.  The bits you run on your development machine once the container is built are identical to the bits running in your datacenter.  You can reduce or eliminate setup instructions and be assured that there isn’t a rogue configuration setting somewhere that makes it work for you but not in production.

 

 

About the speaker, Chris Gomez

Chris Gomez

Chris Gomez has been developing software professionally since 1993, but his love of coding began in grade school when he developed his first simple games on an IBM PC. His day jobs have included creating entertainment kiosks for theme parks and music retailers, commercial loan analytics, and clinical data exchange systems. Chris is recognized as a Microsoft MVP for Visual Studio Development Tools and Technologies. Today he is focused on delivering distributed systems with .NET and other platforms, but he still finds time to teach kids of all ages to make their first games and ignite their interest in coding.

 


It's time again to have a look at PostSharp4EF, a project of the excellent Ruurd Boeke. PostSharp4EF turns your plain C# or VB classes into full-featured domain objects for client-server applications backed by a SQL database. Once you understand what that means, you understand it is actually very cool.

What is PostSharp4EF?

So what is it all about?

On the server side, PostSharp4EF binds your domain objects to the database. It uses the ADO.NET Entity Framework for that job, but automatically implements all the plumbing code during the project build. The result: you can use clean and lean C# or VB classes as your first-class asset. And since the plumbing code is generated at compile time, you still benefit from excellent performance.

On the client side, the challenge is different. We don't need domain objects to be bound to the database, but to user interface components. And GUI components place two requirements on domain objects: they should be editable (the IEditableObject interface makes it possible to accept or reject changes), and they should be observable (the INotifyPropertyChanged interface). Additionally, we want the domain objects to remember that they have been modified, so we can send them back to the server when the 'Save' button is pressed. So we are actually in a disconnected, stateless model.

Well, if you have to implement IEditableObject and INotifyPropertyChanged by hand, it's again a lot of plumbing code!
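For a sense of scale, here is roughly the kind of hand-written plumbing we are talking about, shown for a single property of a hypothetical Customer class (a sketch; PostSharp4EF generates the equivalent at build time):

```csharp
using System.ComponentModel;

public class Customer : INotifyPropertyChanged, IEditableObject
{
    private string name;
    private string nameBackup;
    private bool editing;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return name; }
        set
        {
            if (name == value) return;
            name = value;
            OnPropertyChanged("Name"); // observable: notify bound controls
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }

    // Editable: snapshot and restore state so edits can be accepted or rejected.
    public void BeginEdit()  { if (!editing) { nameBackup = name; editing = true; } }
    public void CancelEdit() { if (editing)  { Name = nameBackup; editing = false; } }
    public void EndEdit()    { editing = false; }
}
```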

Of course -- as you've guessed -- PostSharp4EF rescues us by implementing these interfaces automatically.

How does it work?

We've seen that the server and the client place different requirements on the domain objects. So PostSharp simply adds different aspects to the server-side assembly (persistence aspects) than to the client-side assembly (GUI aspects). That is, two different assemblies are created from the same source code.

Clean C# code as first-class artifacts

What I like about PostSharp4EF is that it shows how aspect-oriented programming can keep your domain objects really clean. If all the binding code is "outsourced" to an aspect, what's left? Well, simply: the real domain object definition and the real business logic! So C# code can be a first-class artifact, even for domain objects.

The Economics of AOP

Another thing this project illustrates is how you can actually save money in your company with aspects. If you don't agree that it costs less to have tools generate the plumbing code, stop reading here. But if you agree with me so far, go on.

Of course, AOP has some costs. One of these is the cost of developing the aspect. Ruurd has done great work, but I know it was demanding. A software shop with all the overhead of design documentation, meetings, and reports might take a month to implement that feature, so it's not exactly cheap. But what was produced, exactly? Ruurd has produced a great production asset: a tool that makes the process of software production less expensive and of higher quality. It's just like when a company invests in a new machine: yes, it costs money to acquire; yes, it takes time to learn; yes, there are risks. But how much money will the machine save you, over the whole lifetime of your product family, by reducing development effort and improving quality? The job of a software engineer is to improve the process of software production, and aspects are one of the tools they should consider!

With PostSharp4EF, you have that production asset for free: both PostSharp and PostSharp4EF are open source.

But even if this particular project does not fit your needs, think for one minute about what kind of code production in your own shop could appropriately be replaced by an aspect-oriented tool.

Happy PostSharping!

Gael

I'm finally back from AOSD 2008, the principal conference of the Aspect-Oriented Software Development (AOSD) community. This is an academic conference, which means that most attendees are either academic researchers or researchers in the R&D departments of big companies. So I was something of a stranger there, but this time I did not suffer too much from it.

I delivered my talk on Wednesday, April 2nd, in the industry track. Slides are available for download. Note that the talk was designed for an audience comfortable with AspectJ, which I guess is not the case for the majority of readers of this blog. You can also download the paper, the most systematic and complete description of PostSharp Laos so far. It was published in the proceedings of the conference. If you need to convince someone that PostSharp is serious, you can put the paper on their desk!

The conference was not only about PostSharp, of course. I mostly attended the industry track and some demonstrations. Here are the topics I found most interesting:

  • Daniel Wiese, Software Architect at Siemens Medical Solutions, presented how AOP was used in a huge project (SORIAN, a hospital information system) with 400 developers spread across 4 countries. AOP was used principally to handle the "non-functionals", i.e. logging, auditing and so on. What's interesting about that? Well, the scale, of course! It shows that AOP has already grown out of the youth phase when it was used only in prototypes. Daniel cited some practical problems they met with AOP, principally the increased build time. The solution they selected was to weave the code only in nightly builds, so developers do not lose time waiting for the weaver in local builds. Siemens also wanted to enable and disable aspects after deployment (for instance, to enable monitoring in one part of the solution while it is in production). Therefore, they used load-time weaving in WebSphere, a Java application server. Since the built-in weaving solution of WebSphere was not satisfactory (it was an old version of AspectJ with poor support for load-time weaving), they developed their own weaver: FastAOP. Thank you, Daniel, for this great show, and let's hope it will convince companies to adopt this technology!
  • SAP integrated AOP technologies into the new version of ABAP, SAP's programming language. True, it is not AOP in the sense we usually mean, but if SAP is getting interested in AOP, what are all the others waiting for? Another proof that it's useful!
  • ASML, a producer of semiconductor manufacturing equipment, developed their own solution to weave hundreds of thousands of lines of C code.
  • Hitachi developed an AO extension to COBOL. Yes, you read that right: good old COBOL.
  • The Siemens team presented an amusing demo of Aspect Manager 2010, an Eclipse plug-in allowing easy discovery of aspects and their injection into projects. An excellent source of inspiration for PostSharp! Indeed, it convinced me that we should design a standard packaging for aspect libraries to streamline the installation process.
  • M4JPDD, a tool for expressing complex pointcuts graphically. Very interesting, although IMHO not backed by sufficient demand.

The conclusion for me is that there is real interest in AOP in industry, and AOP has already proved itself in huge projects. Why would the .NET world be an exception? Experience shows that people are currently mostly interested in the "easy" aspects like caching, auditing, security, and so on. But that is precisely what PostSharp Laos is good at! So it seems I am on the right track. On the other hand, there is a lot more to AOP than what PostSharp is able to do. If you follow the blog and the work of Ruurd Boeke, you will find that PostSharp does not perform as well on complex tasks. Of course, you can do everything you want (which gives an amazing feeling), but it results in much more aspect code than would be the case with AspectJ.

In other words, the conference was a great opportunity to validate the design choice of PostSharp Laos: make simple things easy, knowing that 80% of the market's current interest is in easy things.

Happy PostSharping!

Gael

PostSharp Blog | The official blog of PostSharp: annoucements, tips & tricks

Archive

We’re excited to announce that PostSharp 5.0 RTM is out and ready for download on Visual Studio Gallery and NuGet. The long awaited new version adds support for .NET Core 1.1, Visual Studio 2017 and C# 7.0. It also introduces brand new features like the OnInvokeAsync advise, the [Cache] aspect, or the [Command] and [DependencyProperty] aspects for XAML applications. And the Logging feature has been completely revamped, now fully customizable and faster than ever.

As in any major version, PostSharp 5.0 is the opportunity for us to do some clean up at the cost of a few breaking changes. We’re also updating our product line, renaming products and regrouping features differently. The most disruptive change will affect PostSharp Express users.

New Platforms

Visual Studio 2017 – We support the new MSBuild project format, side-by-side installations of VS, lightweight solution loads, and achieved significant performance improvements.

C# 7.0 – We tested and fixed all aspects with the new features of C# 7.0, including value-typed tasks and multiple return values.

.NET Core 1.1 – You can now build applications that run on .NET Core 1.1, but you can still only build and debug them on a Windows machine running Visual Studio. Support for .NET Core is a long-term project and you will see gradual improvements in future versions.

.NET Standard 1.3 – Support for .NET Core is achieved through .NET Standard, so you can use PostSharp in your own .NET Standard class libraries.

New Features

Async support in aspects – We’ve closed the gaps in the support for async methods in aspects OnMethodBoundary, so ReturnValue and FlowBehavior are now properly supported. In MethodInterceptionAspect, we’ve added an advice method OnInvokeAsync to handle async methods.

Caching – We've added a brand new ready-made caching framework, which includes not only a caching aspect, but also a cache invalidation aspect. PostSharp Caching 5.0 comes with support for MemoryCache and Redis. See Caching reference documentation for details.

Logging – That's a complete rewrite! The new PostSharp Logging is fully customizable and faster than ever. See Logging reference documentation for details.

XAML – If you're writing XAML applications, you probably wrote a lot of boilerplate code for commands and dependency properties. We've created new aspects to automate that. See XAML reference documentation for details.

Code Contracts – It is now possible to add code contracts to return values and out or ref parameters. The values are validated when the method succeeds.

Architecture Framework – We’re adding NamingConventionAttribute, ParameterValueConstraint  and  ReferenceConstraint.

Deprecated Platforms

Windows Phone, WinRT, Silverlight – These platforms have never got any traction among PostSharp users and we will no longer support them.

Portable Class Libraries – An evil that’s no longer necessary. We’re glad to deprecate them too.

Xamarin – We still believe in Xamarin but had to make choices to reach the 5.0 finish line. We chose to suspend support for Xamarin. Our intention is to get back to work on this platform, but to support it through .NET Standard.

Changes in the Product Line and Licensing

In PostSharp 5.0, we’re reshaping our product line:

  • PostSharp Professional becomes PostSharp Framework and now includes everything you need to automate the implementation or validation of your own patterns, including the Architecture Framework which used to be a part of PostSharp Ultimate. However, PostSharp Diagnostics is removed. PostSharp Professional customers will be offered a free subscription to PostSharp Diagnostics for the whole duration of their PostSharp Professional subscription that has already been paid for. Please contact our sales team if you’re interested. Support for the license server is also removed. Please contact us if you’re impacted.
  • PostSharp Ultimate now has a big brother named PostSharp Enterprise. PostSharp Ultimate will still be an “all you can eat” version: the difference is that PostSharp Enterprise will address the typical non-technical requirements of large companies, namely custom license agreement, on-premises license server, and blueprint source code license. Please contact us if you have a PostSharp Ultimate license and are using the license server.
  • PostSharp Model becomes PostSharp XAML, the must-have companion to your XAML development. Besides NotifyPropertyChanged, undo/redo and code contracts, we’re adding command and dependency property aspects.
  • PostSharp Diagnostics now has a free edition named PostSharp Diagnostics Developer Edition and no longer has any project size limitation. It means you can now add logging to your whole solution for free. There is however a time limitation: your applications will stop logging one day after they have been built. If you need logging, you have to rebuild them. That’s why we call it the Developer Edition.
  • PostSharp Express is renamed PostSharp Essentials. PostSharp Essentials is a free but limited edition of PostSharp. You can use all the features of PostSharp Ultimate, but the number of enhanced classes is limited to 10 per project or 50 per solution as in PostSharp 4.3. Additionally, it includes the time-limited PostSharp Diagnostics Developer Edition. We have removed the licensing mode that enabled for backward compatibility with PostSharp 2.0-4.2.

The next table summarizes the licensing changes:

New Product

Previous Product

Changes

PostSharp Enterprise

PostSharp Ultimate

Tiered licensing, min. 50 licenses.

More enterprise licensing options.

Source code blueprint license added.

PostSharp Ultimate

PostSharp Ultimate

Support for the license server removed.

PostSharp Framework

PostSharp Professional

PostSharp Diagnostics removed.

PostSharp Achitecture Framework added.

PostSharp Essentials

PostSharp Express

Backward-compatibility mode with PostSharp 4.2 licensing removed.

PostSharp Diagnostics Developer Edition added.

PostSharp Diagnostics

PostSharp Diagnostics

Tiered licensing.

Totally revamped product.

Code contracts added.

PostSharp XAML

PostSharp Model

Command and dependency properties added.

PostSharp Threading

PostSharp Threading

Code contracts added.

 

So you’re now asking money for a feature that used to be free?

We hate Orwellian language just as you do. Yes. We’re removing free features from PostSharp Express. We have decided to move from a licensing concept based on feature limitations to a concept based on scale limitations. We have made a first step in August 2016 with PostSharp 4.3, but since it was a minor release, we did not want to break backward compatibility. Therefore, we still included (but did not document) a backward-compatible licensing mode in PostSharp 4.3. We’re now removing this mode. PostSharp 5.0 works exactly as PostSharp 4.3 was advertised to work, minus the backward compatibility with PostSharp 2.0-4.2.

What if you’ve been using PostSharp Express for a long time and you don’t fit within the limitations of PostSharp Essentials? I understand you wish to continue the same features for free in PostSharp 5.0 and may feel pushed into the corner by the new licensing model. You have several options:

  1. Do not upgrade to PostSharp 5.0. Remember that all PostSharp licenses, except evaluation licenses, are perpetual. We are not withdrawing your right to use any prior version of PostSharp. Staying with PostSharp 4.3 may be a perfectly viable option, but remember we will not implement support for new versions of frameworks, languages, or Visual Studio.
  2. Remove PostSharp from your project: use a competitor product or rewrite the boilerplate manually.
  3. Purchase a commercial edition of PostSharp 5.0.

I’m sure there is going to be some emotions out there, and we’re likely to see some angry reactions on social media. But I’m also convinced the best service we can render to the community of PostSharp users is to build a healthy, forward-looking, prosperous company, which implies to discontinue business models that have proved unsuccessful. Our decision will perhaps be unpopular, but this is a healthy, data-based one.

Summary

PostSharp 5.0 is a major release, adding support for .NET Core, Visual Studio 2017, C# 7.0, and introducing exciting new features such as a fully new logging framework, much improved support for async methods, a caching aspect, command and dependency property aspects, and much more.

We couldn’t have implemented all these new functionalities without doing a few breaking changes, which I suggest you double check before you upgrade.

PostSharp 5.0 is also the opportunity for us to reshape our product line. We’ve renamed our products, sharpened their positioning, and moved the boundaries between them. Most commercial customers will not be affected, but if you think you are losing functionalities because of these changes, please contact us to find a solution.

Happy PostSharping!

-gael

Do you know how to write very fast C# code? Here's a sobering fact: many schools and universities only teach how to write valid C# code, and not how to write fast and efficient code.

Did you know that adding strings together inefficiently can slow down your code by a factor of more than two hundred? And ’swallowing’ exceptions will make your code run a thousand times slower than normal.

Slow C# code is a big problem. Slow code on the web will not scale to thousands of users. Slow code will make your Unity game unplayable. Slow code will have your mobile apps catching dust in the app store.

In this session, our guest speaker Mark Farragher will show you many common performance bottlenecks and how to fix them. We’ll introduce each problem, write a small test program to measure the baseline performance, and then learn how you can radically speed up the code.

Watch the webinar and learn:

  • The low-hanging fruit: basic optimizations
  • How to read compiled MSIL code
  • The struct versus class debate
  • Optimize for the garbage collector
  • Writing directly into memory with unsafe pointers
  • Use dynamic delegates to dramatically speed up reflection

 

How to Write Very Fast C# Code on Vimeo.

For source code of the examples, please email Mark at mark@mdfarragher.com

Q & A

Q: Why 9% are exceptions?

A: Several viewers have pointed out that the 9% number I mention in the webinar is incorrect. Here is the correct calculation:

I’m building numbers from individual digits. There are 11 digits, 0-9 and the letter ‘X’. So, the chance of a single digit being invalid is 1/11. A number consists of 5 digits, so the chance of a single number being invalid is (1/11) * 5 = 45%. The loop in my code will fail 45% of the time and throw an Exception.

Q: How to get mastery in reflection and dynamic code?

A: By practicing a lot. Write lots of code that uses reflection and dynamic emitting. Experiment, measure performance, see how far you can go optimizing your code. Play around and discover what works and what doesn’t. Plus: read lots of blog posts and articles.

Q: Why would it not be beneficial to use structs for all simple business objects? Is there a point of degradation or some limitation over a class? Is a struct usable with Entity Framework to represent database objects?

A: The .NET Runtime makes certain assumptions about structs and classes, specifically that structs will be very small (in terms of memory space) and have a short lifetime, and classes will either be small or large and have a long lifetime. Simply replacing all classes with structs in your code is dangerous because you will go against these assumptions. For example - if you change a long-living object to a struct, it will get boxed on the heap and your code will be even slower than when using classes. A struct also get copied during each method call, so passing a very large struct to many different methods will slow down your code a lot.

The rule of thumb here is to always start with classes, and only use structs when it makes sense to do so.

The Entity Framework does not support structs.

Q: Can we use DynamicMethod trick on AOT platforms (via Mono)?

A: Nope. The ILGenerator class is missing, so you can’t emit your own CIL code into the dynamic method. Makes sense, right? It couldn’t possibly work with AOT.

Q: CIL stuff is really interesting. Perhaps worth mentioning that string interpolation and string.Format use StringBuilder, so you don't always need to use StringBuilder explicitly. Also, StringBuilder has a little overhead, so for <4 strings something like str1 + str2 + str3 is faster – I think.

A: Correct! String interpolation ($"yadda {yadda}") compiles to a String.Format call, so it’s exactly the same thing. I always use interpolation because it’s so much easier to type.

You’re also spot-on with the string versus StringBuilder comment. A StringBuilder has some initialization overhead, so it is actually slower for a small number of additions. The cutoff point is at 3 additions: for zero to three additions plain strings are faster, and for four or more the StringBuilder is faster. As the number of additions grows, the two diverge very quickly.

In my logging and diagnostic code, I always use strings (string interpolation) because I usually stay below the 3-addition limit, and it makes my code so much easier to read.
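To make the comparison concrete, here is a rough sketch of the kind of baseline measurement discussed in the webinar (exact numbers will vary by machine and runtime):

using System;
using System.Diagnostics;
using System.Text;

class Program
{
    static void Main()
    {
        const int iterations = 100000;

        var sw = Stopwatch.StartNew();
        string s = string.Empty;
        for (int i = 0; i < iterations; i++)
            s += "x";                          // allocates a brand new string every time
        sw.Stop();
        Console.WriteLine($"string concat: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        var sb = new StringBuilder();
        for (int i = 0; i < iterations; i++)
            sb.Append("x");                    // appends into a reusable buffer
        string result = sb.ToString();
        sw.Stop();
        Console.WriteLine($"StringBuilder: {sw.ElapsedMilliseconds} ms");
    }
}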

Q: For exceptions, what if TryParse is not available? What needs to be done for user-defined types instead of primitives?

A: You need to do the same thing that TryParse does internally – scan the input data first, and only start parsing if the scan says it’s okay. Also make sure you return a parsing failure as a return value (i.e. a bool) instead of throwing a FormatException.

An easy way to scan is by using a precompiled regular expression to make sure the input data doesn’t contain any invalid characters. Regular expressions are super-fast.
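Here is a hedged sketch of what such a method might look like for a hypothetical five-digit PartNumber type, using a precompiled regular expression for the scan:

using System.Text.RegularExpressions;

public struct PartNumber
{
    private static readonly Regex ValidFormat =
        new Regex("^[0-9]{5}$", RegexOptions.Compiled);

    public int Value { get; private set; }

    // report failure through the return value instead of throwing a FormatException
    public static bool TryParse(string input, out PartNumber result)
    {
        result = default(PartNumber);
        if (input == null || !ValidFormat.IsMatch(input))
            return false;                      // scan failed: no exception thrown
        result = new PartNumber { Value = int.Parse(input) };
        return true;
    }
}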

Q: Any comment about the differences between copying arrays, lists, C# hashtables, etc. on the heap?

A: In terms of memory layout, there’s not that much difference between an array, a list, or a hashtable. All three use arrays internally to hold the data. A hashtable is optimized for key/value lookup, whereas list and array are intended for indexed access.

They all have a CopyTo method that attempts to block-copy all data in one go. If you’re storing value types, you will see great performance for all three.

Q: Are you going to review LINQ / Parallel performance someday?

A: That’s a great idea! Thanks for the suggestion. I have an existing course already that scratches the surface of LINQ versus PLINQ performance, but I’d love to go deeper.

Q: Nice talk. BTW, StringBuilder may not be the fastest – it depends on the size, etc. You also have to account for the GC allocations. The best tool for that is BenchmarkDotNet with the memory diagnoser on Windows! It is a fantastic tool. General rule of thumb: whatever you do, you have to measure in order to see performance benefits.

A: Thanks for the suggestion. I’ll check out BenchmarkDotNet. And you’re right about the rule of thumb – you always have to do actual measurements, you can’t rely on just theoretical knowledge to optimize your code.

Q: Just a question on array.CopyTo(...), where Mark said that the memory copy was done out of process by the OS (in C libs, guessing "memcpy"). In the profiling application during the webcast, array.CopyTo(...) executed in 32ms, whereas the copy via index and loops took >300ms – in other words, array.CopyTo is an order of magnitude faster with OSX as the OS. Is the 10-fold difference about the same with .NET on Windows? Different OS, different ratio?

A: Yes, the ratio is roughly the same. The speed of a memory copy is more or less the same for all operating systems, whereas you might see small differences in 1-dimensional array performance. I’ve noticed that .NET Core tends to be slightly faster than Mono in handling arrays, because it’s much better optimized.
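A minimal sketch of such a measurement (timings are illustrative and machine-dependent):

using System;
using System.Diagnostics;

class Program
{
    static void Main()
    {
        var source = new int[10000000];
        var target = new int[10000000];

        var sw = Stopwatch.StartNew();
        source.CopyTo(target, 0);              // block copy of the underlying memory
        sw.Stop();
        Console.WriteLine($"CopyTo: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        for (int i = 0; i < source.Length; i++)
            target[i] = source[i];             // element-by-element copy
        sw.Stop();
        Console.WriteLine($"loop:   {sw.ElapsedMilliseconds} ms");
    }
}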

Q: I measured. GetType() is 171 ms vs. typeof() at 6 ms in a test of a million iterations.

A: That’s because typeof() is processed at compile-time, whereas GetType() is processed at runtime.
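A tiny sketch of the difference:

using System;

class Program
{
    static void Main()
    {
        Type t1 = typeof(string);      // resolved at compile time to a metadata token
        Type t2 = "hello".GetType();   // virtual call resolved at run time
        Console.WriteLine(t1 == t2);   // True: both refer to System.String
    }
}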

Q: How do you keep yourself up to date on the latest and greatest technology?

A: I read lots of technical blogs, and when I’m preparing for a new course or webinar, I do a lot of research and write small test programs to experiment. And I probably have a talent for learning new stuff very quickly.

Q: Would you use some form of multi-dimensional converter to convert a single dimensional array back to a multi-dimension array or would you take another approach?

A: It depends on the use case. I usually just wrap a 1-dimensional array so from the outside it looks like the original multi-dimensional array. The disadvantage of converting the other way is that you’re slowing the code down again, so I am a bit hesitant to use any kind of converter.

Q: Do you have any advice for Parallel.ForEach?

A: Yeah, use it! Parallel.ForEach is great for parallelizing regular for or foreach loops. It is my first step in parallelizing code, and quite often it’s all I need to do.

Two years ago, I wrote an app that processes SharePoint documents. I had a for-loop in my code that would process each document individually. I parallelized the code simply by replacing my for-loop with a Parallel.ForEach. This drop-in replacement to make code multi-threaded is really nice.
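A minimal sketch of that drop-in replacement (ProcessDocument is a hypothetical placeholder):

using System.Collections.Generic;
using System.Threading.Tasks;

class DocumentProcessor
{
    public void ProcessAll(IEnumerable<string> documents)
    {
        // before: foreach (var document in documents) ProcessDocument(document);
        Parallel.ForEach(documents, document => ProcessDocument(document));
    }

    private void ProcessDocument(string document)
    {
        // work on a single document here
    }
}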

Q: Have you tried these performance tests on .NET Core?

A: Yes. Everything I show you in the webinar is running on .NET Core 1.1.

Q: Do foreach loops have a performance optimization over for loops in cases where the collection is already an enumeration or a function that yield returns?

A: No. Enumerations and methods with yield return cannot be indexed and don’t have a well-defined upper limit, so there’s no benefit to using a for loop with them. If you did try, you’d have to call MoveNext() and Current manually – the exact same code the compiler produces when you use foreach.
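Roughly what the compiler produces for a foreach over an enumerator (a sketch):

using System;
using System.Collections.Generic;

class Program
{
    static IEnumerable<int> Numbers()
    {
        yield return 1;
        yield return 2;
    }

    static void Main()
    {
        using (IEnumerator<int> e = Numbers().GetEnumerator())
        {
            while (e.MoveNext())       // the same calls foreach makes for you
            {
                int current = e.Current;
                Console.WriteLine(current);
            }
        }
    }
}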

Q: Is there any significant difference between pre- and post-increment operations? In C++ I am accustomed to always doing ++i in preference to i++, but I rarely see this being done by C# developers.

A: It works exactly the same as in C++; the difference between the two is the return value: i before the increment or i after the increment.
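A tiny sketch of that difference in return value:

using System;

class Program
{
    static void Main()
    {
        int i = 5;
        int a = i++;   // a = 5: post-increment returns the value before incrementing
        int b = ++i;   // b = 7: pre-increment increments first, then returns the value
        Console.WriteLine($"{a} {b} {i}");  // prints "5 7 7"
    }
}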

Q: Do the performance benefits you described for structs vs. classes get lost when comparing the performance of passing classes vs. structs to other functions (excluding cases where structs are passed by reference)?

A: Passing structs to functions will slow down your code, because structs are copied by value. For every method call the entire struct will be cloned in memory. When you’re using classes, only the reference to the object instance is copied into the method.

So yes, for large structs with lots of fields you’ll see a measurable slowdown when doing lots of method calls with struct parameters.
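A minimal sketch of the difference (hypothetical types):

struct BigStruct { public long A, B, C, D, E, F, G, H; }   // 64 bytes of data
class BigClass   { public long A, B, C, D, E, F, G, H; }

class Demo
{
    static long Sum(BigStruct s) { return s.A + s.H; }      // the whole 64-byte struct is copied in
    static long Sum(BigClass c) { return c.A + c.H; }       // only an 8-byte reference is copied in
}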

Q: What is the difference between the heap and the stack?

A: The stack is a highly optimized block of memory intended for data with a very short lifetime, just for the duration of a single method call. Stack memory is created when you enter a method and gets cleaned up when you exit out of a method. The stack is also fairly small – by default around 1MB per thread in .NET – and it’s optimized for a manageable number of small objects (thousands, not millions) with a very short lifetime.

The heap is a very large block of memory (multiple GBs) optimized for long-term storage. You can easily put millions of objects on the heap, and they can be either small or large. The heap has a special internal process for archiving long-lived data, and there’s a separate process called the Garbage Collector that cleans up objects that are no longer in use.

As a rule of thumb, the stack is slightly faster than the heap. It can also very quickly initialize new data by writing zeroes directly to memory (the heap calls the constructor of each object individually). The disadvantage of the stack is that it’s relatively small, and it assumes your data will be short-lived. The stack can also slow down if you have a very deep chain of nested method calls.

Q: Why and when we use reflection?

A: We use reflection when we want to dynamically access object fields or call object methods. By ‘dynamically’ I mean based on data that is not known at compile time. For example, when we store database configuration data in a configuration file. The configuration file might say we need an OracleConnection or a SQLiteConnection. With reflection, we can read this configuration field and then dynamically instantiate the correct object.

Basically, any time an object type, property, field or method appears somewhere in text format, we’re going to need reflection to perform instantiation, access fields and properties, or execute a method call.
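A hedged sketch of the configuration-driven instantiation described above (the type name in the comment is just an example):

using System;
using System.Data;

class ConnectionFactory
{
    // typeNameFromConfig would come from a configuration file, e.g. an
    // assembly-qualified name such as "System.Data.SqlClient.SqlConnection, System.Data"
    public static IDbConnection CreateConnection(string typeNameFromConfig)
    {
        Type type = Type.GetType(typeNameFromConfig);           // resolve the type from a string
        return (IDbConnection)Activator.CreateInstance(type);   // instantiate it dynamically
    }
}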

Q: What does emit mean?

A: Emit means injecting a single CIL instruction into a dynamic method.
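A minimal sketch of emitting CIL into a DynamicMethod (this one returns its int argument plus one):

using System;
using System.Reflection.Emit;

class Program
{
    static void Main()
    {
        var method = new DynamicMethod("AddOne", typeof(int), new[] { typeof(int) });
        ILGenerator il = method.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);    // push the argument
        il.Emit(OpCodes.Ldc_I4_1);   // push the constant 1
        il.Emit(OpCodes.Add);        // add them
        il.Emit(OpCodes.Ret);        // return the result

        var addOne = (Func<int, int>)method.CreateDelegate(typeof(Func<int, int>));
        Console.WriteLine(addOne(41));  // 42
    }
}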

Q: What do you mean by baseline test?

A: A baseline test is a performance test of un-optimized code, to get a baseline performance value.


About the speaker, Mark Farragher


Mark Farragher is a blogger, investor, serial entrepreneur, and the author of 10 successful IT courses in the Udemy marketplace. His IT career spans 2 decades and he has worn many different hats over the years.

Mark started using C# and the .NET framework 15 years ago, and creates online courses that make complex C# programming topics easy to understand and accessible to anyone.


.NET Core is here! You've probably heard that it is lightweight in nature and that you can use the tools that make you happy. Most of us are going to let Visual Studio do the heavy lifting, and that's fine, but you can learn much about how things work under the hood if you put the IDE aside and work with .NET Core more directly.

And why stop there? .NET Core is cross platform, so in this webinar, our guest speaker Chris Gomez will do all of the development and testing on Ubuntu Linux.

This session is perfect for .NET veterans who are brand new to .NET Core and want to see what the brave new world looks and feels like. It's okay if you're unfamiliar with Linux but are interested in having options available to you. We'll even learn how Microsoft Azure can make the heavy lifting of getting to production much easier.

Watch the webinar and learn:

  • What the brave new world coming with .NET Core looks like
  • How to acquire and use .NET Core on a Linux VM untouched by a Visual Studio installation
  • How things work under the hood when you work with .NET Core more directly
  • Development tools such as Visual Studio Code, and how you can contribute to the .NET Core and tools ecosystem

Who needs Visual Studio? A look at using .NET Core on Linux on Vimeo.

Download slides.

Q & A

Q: Is Visual Studio code open source?

A: Visual Studio Code is open source.  You can find the project here: https://github.com/Microsoft/vscode.

Q: How did "dotnet restore" know which packages to restore?

A: When you install the dotnet tool following the instructions on the .NET Core download site (https://www.microsoft.com/net/download/core), a default NuGet Config file is created with a default feed.  You can find this in Ubuntu in the ~/.nuget/NuGet folder.  This can be overridden in your projects if you include a NuGet.Config file.  For more information, read about dotnet restore in the documentation at: https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-restore

Q: Does ASP.NET Core run on ARM?

A: I haven’t personally investigated ARM, yet.  However, you can see the daily builds for .NET core on various platforms here: https://github.com/dotnet/core-setup#daily-builds. Here you will find builds for ARM versions of Windows and Linux.

An interesting source for information is a recent podcast with Scott Hanselman and his guest Adi Avivi.  In the show, they discuss developing RavenDB on .NET Core for the Raspberry PI: https://www.hanselminutes.com/579/ravendb-the-open-source-nosql-database-for-net-with-adi-avivi

Q: Is there a different NuGet website for Core or it's all in the same place with .net packages?

A: NuGet as a product has evolved to support the needs of .NET Framework and .NET Core.  If you use dotnet new or Visual Studio 2017 to create a new project today, the feed location is https://api.nuget.org/v3/index.json for both. 

Q: Can you provide us the commands that you used in this presentation?

A: Unfortunately, it would take a few posts to recap everything used here to download and install .NET Core and to use the dotnet tool for its various features.  We also quickly published a Docker image and I used one published previously to Docker Hub for the Azure App Service on Linux.  Some great resources to start are:

Step by step instructions to install the .NET Core SDK on Ubuntu Linux: https://www.microsoft.com/net/core#linuxubuntu

dotnet command (https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet) - This documents the various features of the dotnet command line tool.

An overview of the process to create your own Docker images with your application: https://hajekj.net/2016/12/25/building-custom-docker-images-for-use-in-app-service-on-linux/

Using your docker image with Azure App Service for Linux: https://docs.microsoft.com/en-us/azure/app-service-web/app-service-linux-using-custom-docker-image

Q: Does it make sense to use IIS to host an ASP.NET Core application?

A: If you are going to run your ASP.NET Core application on a Windows Server, it absolutely makes sense.  In fact, today it is required to run a full-featured web server as a reverse proxy.  .NET Core ships with a lightweight server named Kestrel.  Kestrel has been tuned as a high-performance web server built with .NET.  However, it has not at this point been hardened to be the public-facing server.

In the Linux world, this had already become more common.  The idea was that programming stacks would ship with small, lightweight, and fast servers, but that you would use an application server to guard them and configure access to them from the outside world.

Please carefully read the section called Set Up A Reverse Proxy in the following documentation discussing how to host ASP.NET Core applications today: https://docs.microsoft.com/en-us/aspnet/core/publishing/

You should also read When to use Kestrel with a reverse proxy in the documentation: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/kestrel#when-to-use-kestrel-with-a-reverse-proxy.

Finally, to host on IIS, you will need to learn about the ASP.NET Core Module on IIS: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/servers/aspnet-core-module

Q: I noticed that Dependency Injection and IoC are also only minimally supported.

A: ASP.NET Core supports a minimal dependency injection model without any external dependencies. Some developers prefer minimizing dependencies and don’t need more than this minimal model. However, the system is not closed, and other dependency injection systems may be used.

The documentation discusses the built-in system at length and provides an example of using Autofac to replace it in the document called Introduction to Dependency Injection in ASP.NET Core: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection
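For illustration, a minimal sketch of registering a hypothetical service with the built-in container:

using Microsoft.Extensions.DependencyInjection;

public interface IGreeter
{
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) { return $"Hello, {name}!"; }
}

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // register with the built-in container; constructor injection resolves IGreeter
        services.AddTransient<IGreeter, Greeter>();
    }
}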

Q: Did you also try VS for Mac? Can I also use it to build apps with .NET Core?

A: I do not have a Mac and have not tried Visual Studio for Mac.  Visual Studio for Mac was made generally available during the BUILD conference.  Among other features, you can begin creating .NET Core and ASP.NET Core applications with the new IDE.

Q: I have some existing C# libraries that I would like to try on .NET Core for possible use on Linux. Any advice?

A: Research .NET Standard.  There is some common confusion about the difference between .NET Standard and .NET Core.  .NET Standard is a specification.  It defines the APIs that are available at a certain release level.

.NET Standard accomplishes this by defining the intersection of APIs available in older platforms (some of which existed before anyone had the idea for .NET Standard) and paves the way for newer .NET platforms (which include newer versions of .NET Core, .NET Framework, and perhaps new “.NET’s” no one has thought of yet) to embrace common sets of APIs.

If you’ve written .NET libraries intended to work on multiple platforms, you may be familiar with Portable Class Libraries.  .NET Standard intends to remedy the problems with PCLs and is the present and future of .NET library compatibility. 

This post on the .NET Blog introduces .NET Standard and links to some valuable living information such as a FAQ and the compatibility matrix:  https://blogs.msdn.microsoft.com/dotnet/2016/09/26/introducing-net-standard/

For documentation on .NET Standard, visit: https://docs.microsoft.com/en-us/dotnet/standard/library

Q: All the things we've done so far are already available in .NET Framework in a very elegant and easy manner. Apart from the Cross platform, why should we go for .NET Core?

A: You are asking a very good question that your team must answer for itself.  If you are satisfied with your solution, .NET Framework and its support on Windows Server are not ending.  In fact, I would not expect any kind of announcement that .NET Framework will be superseded.  .NET Framework is the desktop platform for .NET on Windows client and server.  When new Windows features ship, a new .NET Framework often follows.

You may want to research .NET Standard and watch the evolution of that space.  Over time, you could responsibly make sure your internal library code supports a .NET Standard version.  At some point, you could consider a trial run of .NET Core and reuse your internal library investment because .NET Standard enables compatibility across .NET platforms (in this example, between .NET Framework and .NET Core).

The other major feature you may want to keep an eye on and test for yourself is performance.  Besides cross-platform support, ASP.NET Core aims to be a high-performance framework.  If you proved that significant and necessary performance gains could come from switching, that could be a good consideration for doing so.

For each case I have discussed, there should be no immediate urgency on your part.  Your question implies you are very satisfied with your solution.  I would just keep an eye on .NET Standard and see if it makes sense to eventually consider making your libraries implement the standard for future flexibility.

Q: Do we have NuGet package support in Linux?

A: When you use .NET Core or ASP.NET Core, you are retrieving packages from NuGet even for base class library items such as .NET Core itself.  You can create NuGet packages and either post them on internal feeds or the public NuGet feed, and target them in your Linux projects.  For example, commonly used NuGet packages such as JSON.NET implement the .NET Standard and are part of the ASP.NET templates for a new project that you might create in Linux.

.NET Core is made up of NuGet packages.  This is a departure from .NET Framework, which was obtained and installed separately.

For more information on NuGet packages and their use in .NET Core, see the article Packages, Metapackages, and Frameworks in the documentation:
https://docs.microsoft.com/en-us/dotnet/core/packages

Q: Is there CMake support for C#? I think I read something about that a while ago...

A: I’m sorry – I have no practical experience with CMake.  I am familiar with the tool, but haven’t used it myself.  However, I can tell you that CMake is used on .NET Core itself.  For example, the CoreCLR repository for .NET Core has CMake as a build prerequisite, so if you wanted to contribute to this repository, you would be using CMake:  https://github.com/dotnet/coreclr

Q: Is there a way to execute all the TESTS on the solution using VS Code?

A: The best answer I have for this is to learn about Tasks in Visual Studio Code.  Tasks allow you to setup command line tools to be executed within Visual Studio Code.  You can learn more about Tasks here:  https://code.visualstudio.com/Docs/editor/tasks

Next, you would combine Visual Studio Code tasks with dotnet test.  The dotnet test command will execute a test runner against a compiled .dll.  It is like “dotnet run” but for tests.  MSTest, NUnit, and xUnit are all supported test frameworks.  You can learn more about dotnet test here:  https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-test

But don’t stop there.  dotnet-watch is a command extension for the dotnet command; watch is not included by default.  You add a reference to your project, and then the command “dotnet watch” will run a command of your choosing whenever files change in the project.  Putting all of this together, one thing you could do is automate unit testing every time a file changes.

Scott Hanselman demonstrated bringing this all together in the following blog post: https://www.hanselman.com/blog/UsingDotnetWatchTestForContinuousTestingWithNETCoreAndXUnitnet.aspx
You can learn more at this documentation article called Developing ASP.NET Core apps using dotnet watch:  https://docs.microsoft.com/en-us/aspnet/core/tutorials/dotnet-watch

Q: Can we run Package Manager Console for the Nuget packages?

A: You can continue to use Package Manager Console in Visual Studio in Windows.  On Linux you will be using the command line tools such as “dotnet add package” to add a package to your project from the command line.  You may also edit the project’s .csproj file directly.  The new format is so streamlined it will not take long to understand.

This article discusses the changeover from .xproj to .csproj as .NET Core has matured: https://docs.microsoft.com/en-us/dotnet/core/tools/project-json-to-csproj

Q: Does .NET Core have the same libraries that already existed in ASP.NET?  For example: does System.Web.Security exist in .NET Core?

A: ASP.NET Core does not implement everything that you found in ASP.NET just as .NET Core does not attempt to implement everything found in .NET Framework.  Examples of items left out were some that were very Windows specific in nature, items that customers weren’t using, or items that would benefit from some redesign.

For example, when considering ASP.NET MVC, you will find that in ASP.NET Core it is called ASP.NET Core MVC and is a “concept-compatible” framework.  You cannot simply lift your ASP.NET MVC 5 code and use it immediately, but the idea was that the code in ASP.NET Core MVC would be very familiar to an ASP.NET MVC developer, and they would have no problem transitioning to the new framework.

For the record, there is no System.Web.Security namespace today in ASP.NET Core.  The security concepts are presented in this article in the documentation: https://docs.microsoft.com/en-us/aspnet/core/security/

Q: Is it a good idea to invest in containers as a pattern?

A: Containerization was way beyond the scope of this talk.  However, I wanted to point out that .NET Core and ASP.NET Core are “container-ready”.

As a contrast, there was one way to host ASP.NET MVC in ASP.NET 4.6, and that was with IIS.  That automatically means Windows Server.

ASP.NET Core presents many options.  Some are great for your current datacenter, and require little or no change.  Other options, like Docker containers, are good options to explore for the future, especially a move to cloud based container services.

The reasons for using containers are a big topic, but one example of a benefit is that a container represents a complete image of an application.  The bits you run on your development machine once the container is built are identical to the bits running in your datacenter.  You can reduce or eliminate setup instructions and be assured that there isn’t a rogue configuration setting somewhere that makes it work for you but not in production.


About the speaker, Chris Gomez


Chris Gomez has been developing software professionally since 1993, but the love of coding began in grade school when he developed his first simple games on an IBM PC. His day jobs have included creating entertainment kiosks for theme parks and music retailers, commercial loan analytics, and clinical data exchange systems. Chris is recognized as a Microsoft MVP for Visual Studio Development Tools and Technologies. Today he is focused on delivering distributed systems with .NET and other platforms, but he still finds time to teach kids of all ages to make their first games to ignite their interest in coding.


Announcing PostSharp 4.3 RTM: Faster builds, better debugging, easier deployment, and more


Today we’re excited to announce the release of PostSharp 4.3 RTM, available for download on Visual Studio Gallery and NuGet.  PostSharp 4.3 addresses the most important concerns of current customers.  It focuses on improving the current experience without adding brand new features.

In a nutshell, PostSharp 4.3 brings you:

  • Improved build-time performance: up to 3x faster
  • Improved debugging experience
  • Alternative to NuGet-based deployment
  • Command-line tool
  • Improvements in the NotifyPropertyChanged aspect
  • Some licensing goodness for everybody

You can watch the recording of the webinar What's New in PostSharp 4.3 that shows the updates of the new version.

Let’s look at these improvements one by one:

Improved build-time performance

PostSharp 4.3 is up to 3 times faster than PostSharp 4.2. The real figures will depend on the number and size of your projects, how many aspects are being used, and whether you enabled the solution-wide build optimizations. This option can be found under the PostSharp tab of the Solution properties in Visual Studio.

What it does is try to build the whole solution in a single AppDomain (or as few as possible), so the per-project overhead is much smaller.

Improved debugging experience

Debugging an application enhanced with aspects is now even easier thanks to the following improvements:

  • Full support for Just My Code.

  • During Step Into, aspect code is now stepped over by default.

  • The call stack no longer contains PostSharp implementation details by default.

To learn more about these features, see the blog post New in PostSharp 4.3 Preview: Improved Debugging Experience.

Alternative to NuGet-based deployment

Between versions 3.0 and 4.2, the PostSharp compiler and libraries were only distributed as NuGet packages. Let’s face it: some companies did not like at all that we forced them to use NuGet. Starting with version 4.3, we are re-introducing the good old zip file and integrating it better with PostSharp Tools for Visual Studio.

For more information, see the blog post New in PostSharp 4.3 Preview – An alternative to NuGet.

Command-line tool

Using PostSharp as a command-line tool is now a supported and documented scenario. It means you can now even instrument assemblies that you are not building yourself – whether or not you have their source code.

You can see how it works in the blog post New in PostSharp 4.3 Preview: Command-Line Interface.

Some licensing goodness for everybody

We hate licensing just as much as you do, but we’ve been working on it in PostSharp 4.3 and we’re glad to deliver two major improvements:

  • You no longer need a license to build source code that you just checked out from Git or TFS but did not create/edit yourself.
  • The limitations of PostSharp Express for new users are now much simpler: you get everything in PostSharp Ultimate, but you can add aspects to at most 10 classes per project.

Improvements in the NotifyPropertyChanged aspect

Due to popular demand, we’ve added support for Caliburn.Micro and MVVM Light.

We’ve also added an option to suppress false positives. To enable the option (which has some runtime overhead and is disabled by default), set the PreventFalsePositives option when constructing the NotifyPropertyChanged aspect:

[NotifyPropertyChanged(PreventFalsePositives = true)]
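For illustration, a minimal sketch of applying the option to a hypothetical view-model class:

using PostSharp.Patterns.Model;

[NotifyPropertyChanged(PreventFalsePositives = true)]
public class CustomerViewModel
{
    public string FirstName { get; set; }
    public string LastName { get; set; }

    // with PreventFalsePositives, a change notification for FullName is raised
    // only when its value actually changes
    public string FullName
    {
        get { return FirstName + " " + LastName; }
    }
}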

Summary

With PostSharp 4.3, we’ve been addressing the top concerns of existing customers. PostSharp users will spend less time waiting for the build and will be much more productive while debugging. Plus, there’s a ton of improvements to make the deployment and the licensing of PostSharp easier.

PostSharp 4.3 is fully backward compatible with PostSharp 4.2, so it’s safe to update today.

Happy PostSharping!

-gael


Comments (5)

Ivo
8/5/2016 9:41:47 AM #

Is it possible to still get the Express license for older PostSharp (i.e. pre 4.3) please?  What is the process?

It would be deadly for teams that already use PostSharp with the Express licenses otherwise as they would not be able to get anyone new to their team.

Gael Fraiteur
8/5/2016 10:40:48 AM #

Yes, you can still get a license of PostSharp 4.2. Just download this version and follow the license wizard.

Ivo
8/5/2016 10:04:46 AM #

Why did you delete my comment?

I guess I need to ask on another social media - facebook, twitter, visualstudio gallery, linked in.

Gael Fraiteur
8/5/2016 10:41:31 AM #

Your comment was pending for moderation.

Ivo
8/5/2016 7:57:32 PM #

Thank you and apologies for the misunderstanding with the comment.
