
Saturday, October 6, 2018

Yesterday I had the great privilege of speaking at the first ever Techorama Netherlands event (which was brilliant, by the way - I highly recommend you visit if you get the chance). There were loads of great speakers covering the latest patterns, practices and tools for .NET, Azure and more.

But my talk actually covered a relatively old technology - LINQ, which was introduced in 2007 and has only had minor updates since then. However, it remains very relevant to day-to-day development in .NET, which is why I think it's still worth spending time educating developers in how to get the best out of it.

In the talk I went through several recommendations and best practices, many of which are featured in my More Effective LINQ Pluralsight Course.

But I also introduced a new term - "LINQ stinks" (or should that be 'stinqs') - which are code smells or anti-patterns in LINQ code.

A code smell is when there is nothing wrong with the functionality of the code, but there is an issue with the maintainability. The way we've written the code makes it hard to understand or extend. There are also several anti-patterns relating to poor performance whether you're using LINQ to objects, or using an ORM like Entity Framework to query the database with LINQ.

LINQ Stink #1 - Complex Pipelines

LINQ has the power to make your code much more expressive and succinct, and can enable you to solve tricky problems (such as my Lunchtime LINQ challenges) with a single pipeline of chained LINQ extension methods.

However, that is not an excuse to write unreadable code. Once a pipeline reaches more than 3 or 4 chained methods it becomes hard for someone reading the code to fully comprehend what the pipeline does.

In my talk I discussed a number of ways of solving this.

First of all, just because lambda expressions are an easy way of passing method calls into LINQ methods, doesn't mean you always need to use them. If a lambda expression starts to become a bit complicated, there is no reason why you can't refactor it into a well-named method that more clearly expresses the intent.

For example, this LINQ pipeline is very straightforward to understand, and we can always dive into the details of how the customers are getting filtered and projected by navigating into the methods.

customers
    .Where(IsEligibleForDiscount)
    .Select(GetDiscountPercentage)

(Important note - you can't use this technique with Entity Framework as it won't know how to translate your custom methods into SQL).
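For illustration, the named methods behind that pipeline might look something like this (the Customer type and its properties are hypothetical - the point is simply that the business rule now has a home and a name):

```csharp
// Hypothetical Customer type and discount rules, purely for illustration
static bool IsEligibleForDiscount(Customer c) =>
    c.IsLoyaltyMember && c.YearsAsCustomer > 2;

static decimal GetDiscountPercentage(Customer c) =>
    c.YearsAsCustomer > 5 ? 15m : 10m; // long-standing customers get more
```

Because these are ordinary static methods, they can also be unit tested in isolation, independently of any pipeline that uses them.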

Another way in which LINQ pipelines can get over-complicated is when LINQ is missing a method you need. For example, LINQ doesn't have a Batch operator, but this rather hacky technique achieves batching with the standard operators. Although it works, it obfuscates our intent.

orders
    .Select((item, index) => new { item, index })
    .GroupBy(x => x.index / 10)
    .Select(g => g.Select(x => x.item).ToArray())

Instead, use a library like MoreLINQ, or even create your own LINQ extension methods to enable you to write more declarative code that expresses your intent more clearly. With the MoreLINQ Batch method, we can simply write the following:

orders.Batch(10)  
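If you'd rather not take a dependency on MoreLINQ, a minimal Batch extension method of your own might look like this (a simplified sketch, not MoreLINQ's actual implementation):

```csharp
public static class BatchExtensions
{
    public static IEnumerable<T[]> Batch<T>(this IEnumerable<T> source, int size)
    {
        var batch = new List<T>(size);
        foreach (var item in source)
        {
            batch.Add(item);
            if (batch.Count == size)
            {
                yield return batch.ToArray();
                batch = new List<T>(size); // start a fresh batch
            }
        }
        if (batch.Count > 0)
            yield return batch.ToArray(); // emit the final partial batch
    }
}
```

Thanks to yield return this stays lazy, so it composes nicely with the rest of a LINQ pipeline.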

LINQ Stink #2 - Reading too much

The next problem we discussed was reading more than we need from an IEnumerable<T>. This is especially important if that sequence is lazily evaluated, and may need to do a non-trivial amount of work to produce the items in the sequence.

Consider the following example.

var blobs = GetBlobs("MyContainer").ToList();
var blob = blobs.First(b => b.Name.EndsWith(".txt"));
Console.WriteLine(blob.Name);

IEnumerable<Blob> GetBlobs(string containerName)
{
    // ???
}

We're calling the GetBlobs method which goes off to an Azure blob storage container and downloads information about all the blobs in the container. There could be thousands of these, but notice that the code that uses them only requires the first blob with a .txt extension.

However, because we've used ToList on the return from GetBlobs, we will always download information about all the blobs in the container, even if the very first one was a text file.

So in this case, we should remove ToList as it may cause us to do more work than we need. However, there are times we do want to use ToList, which brings us onto our next LINQ stink...
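To make the laziness concrete, here's one way GetBlobs could be implemented with yield return, so that work only happens as the consumer pulls items (FetchPage here is a hypothetical stand-in for whichever paging call your storage SDK provides):

```csharp
// A sketch of a lazily evaluated source; FetchPage is a hypothetical
// stand-in for your storage SDK's "list one page of blobs" call.
IEnumerable<Blob> GetBlobs(string containerName)
{
    string continuationToken = null;
    do
    {
        var page = FetchPage(containerName, continuationToken);
        foreach (var blob in page.Blobs)
        {
            // if the caller stops enumerating here (e.g. First found a
            // match), no further pages are ever requested
            yield return blob;
        }
        continuationToken = page.ContinuationToken;
    } while (continuationToken != null);
}
```

With this shape, `GetBlobs("MyContainer").First(b => b.Name.EndsWith(".txt"))` downloads at most one page if the first text file appears early.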

LINQ Stink #3 - Multiple Enumeration

A related problem when dealing with lazily evaluated IEnumerable<T> sequences is that if we enumerate them more than once, we end up doing the work to produce all the items in that sequence more than once, which is wasted time.

Consider the following example where the GetBlobs method is once again a lazily evaluated sequence:

var blobs = GetBlobs("MyContainer");
var biggest = blobs.MaxBy(b => b.Size);
var smallest = blobs.MinBy(b => b.Size);

We're using the MoreLINQ MaxBy and MinBy extension methods to determine which are the largest and smallest blobs in the container. This does require us to download information about all blobs in the container, but this implementation will cause us to download them twice.

In this case, it would be better to call ToList on the sequence returned from GetBlobs, storing all the blobs in memory and removing the performance penalty of enumerating twice.

Note: Tools like ReSharper are very good at warning you when you enumerate an IEnumerable<T> more than once, and will allow you to perform a "quick fix" by adding a ToList. This may be the appropriate solution, but often, the code that produced the IEnumerable<T> is your own code, and so if you know for sure that you are always going to pass an in-memory collection like a list or an array, then it might just be better to pass an ICollection<T> instead, making it clear that it is safe to enumerate through again.
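For example, if you know GetBlobs always materializes its results anyway, changing its return type documents that (this is an illustrative sketch of the signature change, not a real SDK method):

```csharp
// Returning ICollection<Blob> signals to callers that the results are
// already in memory and are safe to enumerate as many times as they like
ICollection<Blob> GetBlobs(string containerName)
{
    var results = new List<Blob>();
    // ... populate results from storage ...
    return results;
}
```

Now the MaxBy/MinBy example above needs no defensive ToList, because the type system tells the reader that repeated enumeration is cheap.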

LINQ Stink #4 - Inefficient SQL

ORMs with LINQ Providers like Entity Framework generally do a great job of converting your LINQ expressions into SQL, often producing better SQL than you might write yourself.

For example, using the MVC Music Store database, I can perform the following query which includes a join, sort, take and projection to anonymous object...

Albums
    .Where(a => a.Artist.Name.StartsWith("A"))
    .OrderBy(a => a.Artist.Name)
    .Select(a => new { Artist = a.Artist.Name, Album = a.Title })
    .Take(5)

...and the SQL we see generated is perfectly reasonable:

DECLARE @p0 NVarChar(1000) = 'A%'

SELECT TOP (5) [t1].[Name] AS [Artist], [t0].[Title] AS [Album]
FROM [Album] AS [t0]
INNER JOIN [Artist] AS [t1] ON [t1].[ArtistId] = [t0].[ArtistId]
WHERE [t1].[Name] LIKE @p0
ORDER BY [t1].[Name]

However, occasionally a programmer will put something into a LINQ expression that the LINQ provider doesn't know how to turn into SQL. In this case, we get a run-time exception.

The right thing to do in this situation is to rework the query so that as much of the work as possible is done in SQL (maybe even by creating a stored procedure), taking care to be as efficient as possible and to minimise the data downloaded, and only then do anything that can only be performed client side.

Unfortunately, inexperienced developers often "fix" the exception by inserting a ToList or AsEnumerable, which appears at first glance to work, but produces horribly inefficient SQL.

This next code example will not only download the entire Albums table to memory, but for every album in the entire table will perform an additional query to get hold of the artist name, resulting in what's known as a "Select N+1" anti-pattern.

Albums
    .ToList()
    .Where(a => a.Artist.Name.Count(c => c == ' ') > 2)
    .OrderBy(a => a.Artist.Name)
    .Select(a => new { Artist = a.Artist.Name, Album = a.Title })
    .Take(5)

What's the solution to this issue? Well, obviously a bit of experience helps - knowing what sort of SQL a given query is likely to turn into. But you don't have to guess.

Both Entity Framework and Entity Framework Core allow you to inject a logger that will emit the raw SQL statements that are being executed. And of course you can always use a profiler to view the SQL your application is generating.
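For example, in Entity Framework 6 a single line is enough to see every generated SQL statement (the MusicStoreContext DbContext here is hypothetical; EF Core uses the standard ILoggerFactory infrastructure instead):

```csharp
using (var db = new MusicStoreContext()) // hypothetical DbContext
{
    // EF6: route every generated SQL statement to the console
    db.Database.Log = Console.WriteLine;

    var albums = db.Albums
        .Where(a => a.Artist.Name.StartsWith("A"))
        .ToList(); // the SQL for this query is printed as it executes
}
```

Running queries with logging like this switched on during development makes a "Select N+1" problem immediately obvious: you'll see one query per row scrolling past.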

So I think as part of any code review of LINQ to database code, you should be asking the questions "what SQL does this query get turned into?" and "is this the most efficient query to retrieve the necessary data?"

LINQ Stink #5 - Side Effects

The final LINQ stink I discussed at Techorama was introducing side effects into your pipelines. One of the great things about LINQ is that it brings many of the ideas of functional programming into C#.

It helps you write more declarative code, use pipelines and higher-order functions, and even encourages "immutability", because the Select method doesn't take an Action, so you are encouraged to return new objects as they flow through your pipeline rather than mutating existing ones.

But another key functional programming concept is to always prefer "pure" functions wherever possible rather than functions that produce "side-effects". A pure function depends only on its inputs and the same inputs always produce the same output. The function is not allowed to perform any IO, such as making a network call or talking to the database.

Pure functions are inherently testable, make your code much easier to reason about, and are thread safe.

Now, I'm not saying there is no place for side-effects or non-pure methods in a LINQ pipeline, but you should certainly be aware of the pitfalls that await if you do use them.

In my talk I showed this particularly egregious example of unhelpful use of side-effects.

var client = new HttpClient();
var urls = new [] { "https://www.alvinashcraft.com/",
                    "http://blog.cwa.me.uk/",
                    "https://codeopinion.com/" }; // imagine this list has 100s of URLs
var regex = @"<a\s+href=(?:""([^""]+)""|'([^']+)').*?>(.*?)</a>";
urls.Select(u => client.GetStringAsync(u).Result)
    .SelectMany(h => Regex.Matches(h, regex).Cast<Match>())
    .Where(m => m.Value.Contains("markheath.net"))
    .Select(m => m.Value)
    .ToList()
    .ForEach(l => Console.WriteLine(l));

This LINQ pipeline is attempting to see who is linking to me on their blog. It does so by downloading the HTML for several websites, then using regex to find the links, and then filtering them down to just the ones of interest to me.

At the end of the pipeline, because I want to print the results to the console, and LINQ doesn't have a built-in ForEach that operates on an IEnumerable<T> (although MoreLINQ does), ToList has been called first. I see developers doing this a lot, as it allows them to perform side effects using a fluent syntax rather than the foreach keyword.

There are numerous problems with the above code sample, not least the ugly way we're trying to call an asynchronous method in the middle of a pipeline with the .Result property, which is a recipe for deadlocks (sadly there is no standard 'asynchronous' version of LINQ at the moment, although there are promising signs that we might see one in C# 8).

But another big problem here is that downloading web pages is something that is particularly susceptible to transient errors. Suppose my urls array contained 100 websites, and they all successfully downloaded except the last one. With the pipeline shown above, no output whatsoever would be written to the console, as one failure will cause the ToList call to fail before we even get to the ForEach. Yet for the purposes of this code, we probably would like to see the results from the sites we could download, even if some had failed.

Another issue with this code is that downloading web pages is inherently parallelizable - it would make sense to download more than one at a time, but it is hard to force that behaviour into a LINQ pipeline like this.

So this is an example of a LINQ pipeline that would probably be better implemented without leaning so heavily on LINQ. LINQ is great, but it is not a one-size-fits-all solution.
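For example, a plain foreach-based rewrite (reusing the same client, urls and regex variables from above, inside an async method) avoids the .Result deadlock risk, reports failures per site, and carries on after an error:

```csharp
// inside an async method, using the client, urls and regex from above
foreach (var url in urls)
{
    try
    {
        var html = await client.GetStringAsync(url); // no blocking .Result
        var links = Regex.Matches(html, regex).Cast<Match>()
            .Select(m => m.Value)
            .Where(v => v.Contains("markheath.net"));
        foreach (var link in links)
        {
            Console.WriteLine(link);
        }
    }
    catch (HttpRequestException ex)
    {
        // one failing site no longer hides the results from all the others
        Console.WriteLine($"Failed to download {url}: {ex.Message}");
    }
}
```

Notice that LINQ is still doing the matching and filtering within each iteration - we've just moved the side effects (network IO and console output) out into ordinary control flow, where error handling and parallelisation are much easier to add.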

Anyway, I hope you found this brief tour of "LINQ stinks" helpful. The slides from my Techorama talk are available here, and I'd be happy to bring this talk to your local user group (assuming I can get there) or conference if you'd like to hear more about this.


Thursday, October 4, 2018

Azure is a big cloud with lots of services, and for even the most experienced user it can be intimidating to know which service will best meet your needs. This blog post is intended to provide a short overview of the most common concepts and services .NET developers need to get started, and provide resources to help you learn more.

Key Concepts

Azure Account: Your Azure account is the credentials that you sign into Azure with (e.g. what you would use to log into the Azure Portal). If you do not yet have an Azure account, you can create one for free.

Azure Subscription: A subscription is the billing plan that Azure resources are created inside. These can either be individual or managed by your company. Your account can be associated with multiple subscriptions. If this is the case, you need to make sure you are selecting the correct one when creating resources. For more info see understanding Azure accounts, subscriptions and billing. Did you know, if you have a Visual Studio Subscription you have monthly Azure credits just waiting to be activated?

Resource Group: Resource groups are one of the most fundamental primitives you'll deal with in Azure. At a high level, you can think of a resource group like a folder on your computer. Any resources or service you create in Azure will be stored in a resource group (just like when you save a file on your computer you choose where on disk it is saved).

Hosting: When you want code you've written to run in Azure, it needs to be hosted in a service that supports executing user provided code.

Managed Services: Azure provides many services where you provide data or information to Azure, and Azure's implementation takes the appropriate action. A common example of this is Azure Blob Storage, where you provide files, and Azure handles reading, writing, and persisting them for you.

Choosing a Hosting Option

Hosting in Azure can be divided into three main categories:

  • Infrastructure-as-a-Service (IaaS): With IaaS, you provision the VMs that you need, along with associated network and storage components. Then you deploy whatever software and applications you want onto those VMs. This model is the closest to a traditional on-premises environment, except that Microsoft manages the infrastructure. You still manage the individual VMs (e.g. deciding what operating system you want, installing custom software, applying security updates, etc.).
  • Platform-as-a-Service (PaaS): Provides a managed hosting environment, where you can deploy your application without needing to manage VMs or networking resources. For example, instead of creating individual VMs, you specify an instance count, and the service will provision, configure, and manage the necessary resources. Azure App Service is an example of a PaaS service.
  • Functions-as-a-Service (FaaS): Often called "Serverless" computing, FaaS goes even further than PaaS in removing the need to worry about the hosting environment. Instead of creating compute instances and deploying code to those instances, you simply deploy your code, and the service automatically runs it. You don't need to administer the compute resources; the platform seamlessly scales your code up or down to whatever level necessary to handle the traffic, and you pay only when your code is running. Azure Functions is a FaaS service.

In general, the further towards Serverless you can host your application, the more benefits you'll see from running in the cloud. Below is a short cheat sheet for getting started with three common hosting choices in Azure and when to choose them (for a more complete list, see Overview of Azure compute options).

  • Azure App Service: If you are looking to host a web application or service we recommend you look at App Service first. To get started with App Service, see the ASP.NET Quickstart (instructions are applicable to ASP.NET, WCF, and ASP.NET Core apps).
  • Azure Functions: Azure Functions are great for event driven workflows. Examples include responding to Webhooks, processing items placed into Queues or Blob Storage, Timers, etc. To get started with Azure Functions see the Create your first function quickstart.
  • Azure Virtual Machines: If App Service doesn't meet your needs for hosting an existing application (e.g. you need to install custom software on the machine, or access operating system APIs that are not available in App Service's environment), Virtual Machines will be the easiest place to start. To get started with Virtual Machines, see our Deploy an ASP.NET app to an Azure virtual machine tutorial (applies equally to WCF).

If you need more help deciding on which hosting/compute option is best for you, see the Decision tree for Azure compute services.

Choosing a Data Storage Service

Azure offers many services for storing your data depending on your needs. The most common data services for .NET developers are:

  • Azure SQL Database: If you are looking to migrate an application that is already using SQL Server to the cloud, then Azure SQL Database is a natural place to start. To get started see the Build an ASP.NET app in Azure with SQL Database tutorial.
  • Azure Cosmos DB: A modern database designed for the cloud. If you are looking to start a new application that doesn't yet have a specific database dependency, you should look at Azure Cosmos DB as a starting point. Cosmos DB is a good choice for new web, mobile, gaming, and IoT applications where automatic scale, predictable performance, fast response times, and the ability to query over schema-free data are important (common use cases for Azure Cosmos DB). To get started, see the build a .NET web app with Azure Cosmos DB quickstart.
  • Blob Storage: Optimized for storing and retrieving large binary objects (images, files, video and audio streams, large application data objects and documents, etc.). Object stores enable the management of extremely large amounts of unstructured data. To get started, see the upload, download, and list blobs using .NET quickstart.

For more help deciding on which data storage service will best meet your needs, see choose the right data store.

Diagnosing Problems in the Cloud

Once you deploy your application to Azure, you'll likely run into cases where it worked locally but doesn't in Azure. Below are two good places to start:

  • Remote debug from Visual Studio: Most of the Azure compute services (including all three covered above) support remote debugging with Visual Studio and acquiring logs. To explore Visual Studio's capabilities for your application type, open the Cloud Explorer tool window (type "Cloud Explorer" into Visual Studio's Quick Launch toolbar in the top right corner), and locate your application in the tree. For details see diagnosing errors in your cloud apps.
  • Enable Application Insights: Application Insights is a complete application performance monitoring (APM) solution that captures diagnostic data, telemetry, and performance data from the application automatically. To get started collecting diagnostic data for your app, see start monitoring your ASP.NET Web Application with Azure Application Insights.

Conclusion and Resources

The above is a short list of what to look at when you're first starting with Azure. If you are interested in learning more here are a few resources:

As always, we want to see you successful, so if you run into any issues, let me know in the comment section below, or via Twitter and I'll do my best to get your question answered.

Blazor 0.6.0 is now available! This release includes new features for authoring templated components and enables using server-side Blazor with the Azure SignalR Service. We're also excited to announce our plans to ship the server-side Blazor model as Razor Components in .NET Core 3.0!

Here's what's new in the Blazor 0.6.0 release:

  • Templated components
    • Define components with one or more template parameters
    • Specify template arguments using child elements
    • Generic typed components with type inference
    • Razor templates
  • Refactored server-side Blazor startup code to support the Azure SignalR Service

A full list of the changes in this release can be found in the Blazor 0.6.0 release notes.

Get Blazor 0.6.0

Install the following:

  1. .NET Core 2.1 SDK (2.1.402 or later).
  2. Visual Studio 2017 (15.8 or later) with the ASP.NET and web development workload selected.
  3. The latest Blazor Language Services extension from the Visual Studio Marketplace.
  4. The Blazor templates on the command-line:

    dotnet new -i Microsoft.AspNetCore.Blazor.Templates  

You can find getting started instructions, docs, and tutorials for Blazor at https://blazor.net.

Upgrade an existing project to Blazor 0.6.0

To upgrade a Blazor 0.5.x project to 0.6.0:

  • Install the prerequisites listed above.
  • Update the Blazor package and .NET CLI tool references to 0.6.0. The upgraded Blazor project file should look like this:

    <Project Sdk="Microsoft.NET.Sdk.Web">

      <PropertyGroup>
        <TargetFramework>netstandard2.0</TargetFramework>
        <RunCommand>dotnet</RunCommand>
        <RunArguments>blazor serve</RunArguments>
        <LangVersion>7.3</LangVersion>
      </PropertyGroup>

      <ItemGroup>
        <PackageReference Include="Microsoft.AspNetCore.Blazor.Browser" Version="0.6.0" />
        <PackageReference Include="Microsoft.AspNetCore.Blazor.Build" Version="0.6.0" />
        <DotNetCliToolReference Include="Microsoft.AspNetCore.Blazor.Cli" Version="0.6.0" />
      </ItemGroup>

    </Project>
  • If your project or solution has a global.json file from an earlier Blazor project template, we recommend removing it.

That's it! You're now ready to try out the latest Blazor features.

Templated components

Blazor 0.6.0 adds support for templated components. Templated components are components that accept one or more UI templates as parameters, which can then be used as part of the component's rendering logic. Templated components allow you to author higher-level components that are more reusable than what was possible before. For example, a list view component could allow the user to specify a template for rendering items in the list, or a grid component could allow the user to specify templates for the grid header and for each row.

Template parameters

A templated component is defined by specifying one or more component parameters of type RenderFragment or RenderFragment<T>. A render fragment represents a segment of UI that is rendered by the component. A render fragment can optionally take a parameter that is specified when the render fragment is invoked.

TemplatedTable.cshtml

@typeparam TItem

<table>
    <thead>
        <tr>@TableHeader</tr>
    </thead>
    <tbody>
    @foreach (var item in Items)
    {
        <tr>@RowTemplate(item)</tr>
    }
    </tbody>
    <tfoot>
        <tr>@TableFooter</tr>
    </tfoot>
</table>

@functions {
    [Parameter] RenderFragment TableHeader { get; set; }
    [Parameter] RenderFragment<TItem> RowTemplate { get; set; }
    [Parameter] RenderFragment TableFooter { get; set; }
    [Parameter] IReadOnlyList<TItem> Items { get; set; }
}

When using a templated component, the template parameters can be specified using child elements that match the names of the parameters.

<TemplatedTable Items="@pets">
    <TableHeader>
        <th>ID</th>
        <th>Name</th>
        <th>Species</th>
    </TableHeader>
    <RowTemplate>
        <td>@context.PetId</td>
        <td>@context.Name</td>
        <td>@context.Species</td>
    </RowTemplate>
</TemplatedTable>

Template context parameters

Component arguments of type RenderFragment<T> passed as elements have an implicit parameter named context, but you can change the parameter name using the Context attribute on the child element.

<TemplatedTable Items="@pets">
    <TableHeader>
        <th>ID</th>
        <th>Name</th>
        <th>Species</th>
    </TableHeader>
    <RowTemplate Context="pet">
        <td>@pet.PetId</td>
        <td>@pet.Name</td>
        <td>@pet.Species</td>
    </RowTemplate>
</TemplatedTable>

Alternatively, you can specify the Context attribute on the component element (e.g., <TemplatedTable Context="pet">). The specified Context attribute applies to all specified template parameters. This can be useful when you want to specify the content parameter name for implicit child content (without any wrapping child element).

Generic-typed components

Templated components are often generically typed. For example, a generic ListView component could be used to render IEnumerable<T> values. To define a generic component, use the new @typeparam directive to specify type parameters.

GenericComponent.cshtml

@typeparam TItem

@foreach (var item in Items)
{
    @ItemTemplate(item)
}

@functions {
    [Parameter] RenderFragment<TItem> ItemTemplate { get; set; }
    [Parameter] IReadOnlyList<TItem> Items { get; set; }
}

When using generic-typed components the type parameter will be inferred if possible. Otherwise, it must be explicitly specified using an attribute that matches the name of the type parameter:

<GenericComponent Items="@pets" TItem="Pet">      ...  </GenericComponent>  

Razor templates

Render fragments can be defined using Razor template syntax. Razor templates are a way to define a UI snippet. They look like the following:

@<tag>...</tag>

You can now use Razor templates to define RenderFragment and RenderFragment<T> values like this:

@{
    RenderFragment template = @<p>The time is @DateTime.Now.</p>;
    RenderFragment<Pet> petTemplate = (pet) => @<p>Your pet's name is @pet.Name.</p>;
}

Render fragments defined using Razor templates can be passed as arguments to templated components or rendered directly. For example, you can render the previous templates directly like this:

@template

@petTemplate(new Pet { Name = "Fido" })

Use server-side Blazor with the Azure SignalR Service

In the previous Blazor release we added support for running Blazor on the server where UI interactions and DOM updates are handled over a SignalR connection. In this release we refactored the server-side Blazor support to enable using server-side Blazor with the Azure SignalR Service. The Azure SignalR Service handles connection scale out for SignalR based apps, scaling up to handle thousands of persistent connections so that you don't have to.

To use the Azure SignalR Service with a server-side Blazor application:

  1. Create a new server-side Blazor app.

    dotnet new blazorserverside -o BlazorServerSideApp1  
  2. Add the Azure SignalR Server SDK to the server project.

    dotnet add BlazorServerSideApp1/BlazorServerSideApp1.Server package Microsoft.Azure.SignalR  
  3. Create an Azure SignalR Service resource for your app and copy the primary connection string.

  4. Add a UserSecretsId property to the BlazorServerSideApp1.Server.csproj project file.

    <PropertyGroup>
      <UserSecretsId>BlazorServerSideApp1.Server.UserSecretsId</UserSecretsId>
    </PropertyGroup>
  5. Configure the connection string as a user secret for your app.

    dotnet user-secrets -p BlazorServerSideApp1/BlazorServerSideApp1.Server set Azure:SignalR:ConnectionString <Your-Connection-String>

    NOTE: When deploying the app you'll need to configure the Azure SignalR Service connection string in the target environment. For example, in Azure App Service configure the connection string using an app setting.

  6. In the Startup class for the server project, replace the call to app.UseServerSideBlazor<App.Startup>() with the following code:

    app.UseAzureSignalR(route => route.MapHub<BlazorHub>(BlazorHub.DefaultPath));
    app.UseBlazor<App.Startup>();
  7. Run the app.

    If you look at the network trace for the app in the browser dev tools, you'll see that the SignalR traffic is now being routed through the Azure SignalR Service. Congratulations!

Razor Components to ship with ASP.NET Core in .NET Core 3.0

We announced last month at .NET Conf that we've decided to move forward with shipping the Blazor server-side model as part of ASP.NET Core in .NET Core 3.0. About half of Blazor users have indicated they would use the Blazor server-side model, and shipping it in .NET Core 3.0 will make it available for production use. As part of integrating the Blazor component model into ASP.NET Core, we've decided to give it a new name to differentiate it from the ability to run .NET in the browser: Razor Components. We are now working towards shipping Razor Components and the editing experience in .NET Core 3.0. This includes integrating Razor Components into ASP.NET Core so that they can be used from MVC. We expect to have a preview of this support early next year after the ASP.NET Core 2.2 release has wrapped up.

Our primary goal remains to ship support for running Blazor client-side in the browser. Work on running Blazor client-side on WebAssembly will continue in parallel with the Razor Components work, although it will remain experimental for a while longer while we work through the issues of running .NET on WebAssembly. We will, however, keep the component model the same regardless of whether you are running on the server or the client. You can switch your Blazor app to run on the client or the server by changing a single line of code. See the Blazor .NET Conf talk to see this in action and to learn more about our plans for Razor Components:
