
Book Review: Developing Windows Store Apps with HTML5 and JavaScript


Although I've done my share of Windows Store app development, I've always used C# and the .NET framework, since these are the technologies I have the most experience with. When I was offered a free copy of Developing Windows Store Apps with HTML5 and JavaScript by Rami Sarieddine, I gladly accepted it to learn more about the alternative approach.

Strangely enough, the book starts out with a pretty thorough overview of HTML5 and CSS. For existing web developers who are interested in learning how to use their skills for developing Windows Store apps this probably won't be of much use, but I still think it's a welcome addition to the book. It certainly made it easier for me to follow the remaining chapters, and I'm sure it will for other readers with a similar (lack of) background as well.

The rest of the book focuses on Windows Store app development, as expected. Having already gone through the process of learning it, I can say that in spite of its brevity, it manages to cover all the important topics. Readers should definitely be able to write and publish their own first application once they're done with it. More importantly, it provides a good basis for further learning about the topic, which I would certainly recommend.

The last chapter of the book does feel a bit out of place, though. In only a couple of pages it tries to compare the HTML5 / JavaScript approach to Windows Store app development to the .NET / C# one. I'm not sure it's really useful for anyone. For those without prior knowledge of C#, I think it is too brief to give any real value. On the other hand, those with previous knowledge will want to learn more and will probably look for a different source of information.

Still, this chapter doesn't really take away any value from all the other chapters. You can always skip it if you're not interested. All in all, I would recommend this book to anyone with no or minimal knowledge of Windows Store apps who's interested in developing them using HTML5 and JavaScript, even without already being proficient in those technologies.

The book is available on Amazon and sold directly by the publisher.


Using Abstract Override in C#


Recently the following language construct has been brought to my attention:

public abstract override string ToString();

Knowing about both keywords used in the above example, there isn’t much doubt what it means to the compiler:

  • Abstract methods don’t have any implementation and require non-abstract derived classes to implement them. Of course they are allowed only in abstract classes.
  • Override allows a method to override a virtual or abstract method from its base class.

This means that the combination of both can be used in the following two cases:

  • In a class deriving from an abstract base class, to override an abstract method and mark it as abstract again. In this case the construct is completely redundant: the behavior would be the same if the abstract method from the base class were not overridden at all, since it would still have to be overridden in any non-abstract derived class. Its presence acknowledges the programmer's awareness of the abstract method in the base class and the intent not to implement it, but does nothing else. The construct is not even present in the compiled assembly. (A minimal example of this case follows at the end of the post.)
  • In a class deriving from any base class, to override a virtual method and break its inheritance chain. In this case any class deriving from this intermediate class will have to override the method, even though it has an implementation in the original base class. The construct prevents the derived class from accessing that implementation and forces it to come up with its own. The introductory sample comes from such a scenario:
public abstract class BaseClass
{
    // remember that each class inherits from Object
    // which implements virtual ToString() method
    public abstract override string ToString();
}

public class DerivedClass : BaseClass
{
    // BaseClass forces DerivedClass to implement ToString()
    public override string ToString()
    {
        // base.ToString() doesn't compile
        // since method is not implemented in BaseClass
        // DerivedClass MUST come up with its own implementation
        return String.Empty;
    }
}

When would doing this be a good idea? I’m not really sure, and I couldn’t come up with a really good example. Still, it’s good to know that it can be done in C# and the CLR.
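For completeness, here's also a minimal sketch of the first, redundant case (all type names are mine):

public abstract class Shape
{
    public abstract double Area();
}

public abstract class FancyShape : Shape
{
    // completely redundant: any non-abstract class deriving from
    // FancyShape would have to override Area() even without this
    // declaration; it only documents the intent and doesn't appear
    // in the compiled assembly at all
    public abstract override double Area();
}

public class Square : FancyShape
{
    public double Side { get; set; }

    public override double Area()
    {
        return Side * Side;
    }
}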

Book Review: .NET 4.5 Parallel Extensions Cookbook


Recently I received a free review copy of another .NET related book published by Packt. This time it was a pretty niche title: .NET 4.5 Parallel Extensions Cookbook by Bryan Freeman. I had quite a tight deadline for the review, but fortunately the book was so interesting that I didn't have any problems working through it at a faster pace.

Even though it might not be obvious from the title, the table of contents makes it clear that the author decided to give a thorough overview of the current state of parallel development in .NET. As it turns out, he managed to do it pretty well. The book is definitely the best single source of information on this topic I've come across, suitable both for readers with previous experience in the subject and those without any.

The first two chapters introduce tasks and continuations as the basics the rest of the book builds upon. The remaining chapters could probably be read in any order, although there are some cross references between them, making it a better idea to read the book from cover to cover at least once. The recipe-based approach still makes sense, though, as I'm pretty sure most readers will be returning to individual recipes at a later time when looking for a particular solution.

There are a couple of recipes in the book that could be further improved. The sample code occasionally includes some bad practices that could easily be avoided without making it any longer or more difficult to understand. Also, some recipes could have used a more suitable scenario for demonstration or a couple of additional pages for a better explanation. None of them really miss the point, but a couple of them do stand out a little, probably because most of them are that good.

Still, I have learned quite a few new tricks while reading the book, and I would have learned even more if it weren't for my previous hands-on experience with many of the topics covered. If you are considering or already developing parallel or asynchronous code, I strongly recommend reading this book. Actually, I recommend reading it even if you're still managing without such approaches. It will prepare you for the future or perhaps convince you that you should already be using them. In spite of a few rough edges, it is still one of the most useful books I've read recently.

It is available on Amazon and sold directly by the publisher.

Data-Driven or Parameterized Tests in Windows Store apps


Since I usually write my unit tests in NUnit, I got into the habit of using parameterized tests when testing methods for which I need to check the result for many different input values. Instead of having to write many tests for different sets of input values, all of them containing the same core code or calling the same inner method, I can write only a single test, specifying the input values and the expected result:

[TestCase(1, 1, Result = 2)]
[TestCase(1, 3, Result = 4)]
[TestCase(2, 4, Result = 6)]
[TestCase(6, 1, Result = 7)]
public int TestAdding(int a, int b)
{
    var calculator = new Calculator();
    return calculator.Add(a, b);
}
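The Calculator class used in these samples could be as trivial as this sketch:

public class Calculator
{
    // the method under test: adds two integers
    public int Add(int a, int b)
    {
        return a + b;
    }
}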

When you run such a test in the ReSharper or Visual Studio test runner with the NUnit Test Adapter, you even get all the test cases individually listed, so you can quickly see which one of them failed:

[Screenshots: test cases listed individually in the Visual Studio and ReSharper test runners]

Unfortunately, using NUnit is not an option with Windows Store apps. Only MSTest is supported, therefore I had to learn what this testing framework has to offer in such cases. The answer is data-driven unit tests. Similarly to NUnit, they allow specifying different test cases for a single test, but the values need to be stored in an external data source instead of in attributes. Having input values for tests in an external database doesn't make much sense to me, so the remaining options are CSV or XML files included in the test project. Still, the test code is less obvious, since a lookup in another file is required to see the actual values:

[TestMethod]
[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
    "TestCases.csv", 
    "TestCases#csv", 
    DataAccessMethod.Sequential)]
[DeploymentItem("TestCases.csv")]
public void TestAdding()
{
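    // note: TestContext.DataRow requires the test class to declare a
    // public TestContext property, which the test runner sets automatically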
    var a = Convert.ToInt32(TestContext.DataRow["a"]);
    var b = Convert.ToInt32(TestContext.DataRow["b"]);
    var result = Convert.ToInt32(TestContext.DataRow["result"]);
    var calculator = new Calculator();
    Assert.AreEqual(result, calculator.Add(a, b));
}
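A matching TestCases.csv file might look like this (a sketch mirroring the NUnit test cases above; the column names must match the DataRow lookups):

a,b,result
1,1,2
1,3,4
2,4,6
6,1,7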

The test runners also don’t handle these tests in the same way. They list only a single test and display individual test outcomes in the output:

[Screenshots: a single test listed in the Visual Studio and ReSharper test runners]

Not all that surprisingly, it turned out that MSTest for Windows Store apps doesn't support such data-driven tests. Instead, since Visual Studio 2012 Update 1, an approach very similar to parameterized NUnit tests is available:

[DataTestMethod]
[DataRow(1, 1, 2)]
[DataRow(1, 3, 4)]
[DataRow(2, 4, 6)]
[DataRow(6, 1, 7)]
public void TestAdding(int a, int b, int result)
{
    var calculator = new Calculator();
    Assert.AreEqual(result, calculator.Add(a, b));
}

For obvious reasons I like this syntax much better. Unfortunately, it is only available for Windows Store app and Windows Phone 8 test projects, not on other platforms. If you're like me and would like to have it supported everywhere, consider voting for the suggestion on User Voice.

Test runners handle these tests similarly to other MSTest data-driven tests, i.e. only a single test is shown in the list. I'd like to give another word of caution at this point: ReSharper, as of version 8.0.2, has a bug which causes some of the test cases to be skipped, thus not detecting a failed test case. You either need to run the tests in debug mode or use the Visual Studio test runner instead. I've reported the bug and the fix seems to be coming in the next release.

Writing Modular NAnt Build Scripts


I've been quite happy with NAnt as the tool for writing build scripts. Though, once these scripts reach a certain size, they become harder and harder to maintain. The main reason for that is the lack of modularity. A common example where this becomes an issue is a target which performs its task based on an input property:

<target name="hello"><property name="hello.name" value="World" 
            unless="${property::exists('hello.name')}" /><echo message="Hello, ${hello.name}." /></target>

The above target behaves differently based on the current value of the hello.name property. It even provides a default value to be used when the property hasn't been defined before calling the target. This looks like a nice pattern, but since all properties in NAnt are global, setting the input property for this target sets it for every other target depending on it and for all future calls as well, unless you explicitly set it to a different value once the call returns.

<target name="wrapper"><call target="hello" /><property name="hello.name" value="Europe" /><call target="hello" /><call target="hello" /><property name="hello.name" value="Slovenia" /><call target="hello" /><call target="hello" /></target>

The above target will output the following messages:

Hello, World.
Hello, Europe.
Hello, Europe.
Hello, Slovenia.
Hello, Slovenia.

You can't even clear the property once it has been set. In larger scripts this behavior can make maintenance unnecessarily difficult, since you can never really be sure your change won't affect something else unless you thoroughly inspect and test the complete script. Being very strict with naming and coding conventions will help, but it is still a burden, even more so if multiple developers are involved.

While reading through the NAnt documentation before our latest build script refactoring, we found a better way to implement calling such independent targets: using the nant task instead of call:

<target name="wrapper"><nant buildfile="${project::get-buildfile-path()}" target="hello" /><nant buildfile="${project::get-buildfile-path()}" target="hello"><properties><property name="hello.name" value="Europe" /></properties></nant><nant buildfile="${project::get-buildfile-path()}" target="hello" /><nant buildfile="${project::get-buildfile-path()}" target="hello"><properties><property name="hello.name" value="Slovenia" /></properties></nant><nant buildfile="${project::get-buildfile-path()}" target="hello" /></target><target name="hello"><property name="hello.name" value="World" 
            unless="${property::exists('hello.name')}" /><echo message="Hello, ${hello.name}." /></target>

The above target still calls hello from within the same file, but it launches it in a separate context, isolating its properties from the global ones. If we look at the output, the default value is now used every time it is not explicitly passed to the called target:

Hello, World.
Hello, Europe.
Hello, World.
Hello, Slovenia.
Hello, World.

Of course, the called target can now even be moved to a separate file and easily reused from multiple scripts, making it even more modular. The only real downside of this approach is its verbosity. Although each nant task is effectively just a "method call", it requires much more typing than we are used to from any real programming language.

Using NDepend to Analyze Your Code


Recently I got my hands on the full version of NDepend and decided to take advantage of that by trying it out on a couple of projects I am working on, both personally and professionally. It turned out that NDepend isn't all that easy to use if you want to make the most out of it. In this post I'll go over the steps I took to set everything up and reconfigure it in a way that made the results more meaningful to me.

It starts out really easy. Once you have the add-in installed, you just need to attach a new NDepend project to the solution that you have already opened in Visual Studio. Once the initial analysis is completed, an HTML report is generated and opened in your browser, while a popup suggests a couple of next steps for you.

[Screenshot: NDepend's "What to do Now?" dialog]

Since I'm a great proponent of unit testing, it immediately stood out to me that one application metric was missing: there was no info about test code coverage.

NDepend relies on the results of other code coverage tools to calculate this metric. If you have the Premium or Ultimate edition of Visual Studio, you can use the results of its code coverage analysis; otherwise you can choose between two alternatives: NCover and JetBrains dotCover. Neither of them is free, but both have a trial available that you can evaluate. I used Visual Studio, and the process is really simple:

  • First, run Analyze Code Coverage on All Tests from the Test menu.
  • Then open the Code Coverage Results window from the Test > Windows menu. The first icon in its toolbar allows exporting the results into an XML-based file which can be consumed by NDepend.
  • Open the NDepend project properties on the Analysis page and open Code Coverage Settings at the bottom of that page. Here you can add the exported file to the list. Don't forget to save the changes.

Now the Dashboard should refresh and show the code coverage metric value based on the data in the imported file.

The next logical step would be fixing the code rule violations, starting with the critical ones. NDepend includes approximately 200 code rules which analyze your code and issue warnings about its quality. As soon as your code base grows in size, it is bound to violate at least some of the rules in the default set. Based on your own judgment and the requirements of the project, you can decide how to address them:

  • Preferably you fix the code so that the rules are not violated any more.
  • You can decide to disable specific rules at the NDepend project level, i.e. at the Visual Studio solution level in my case. The default set of rules mostly encourages good development practices, therefore some thought should be given to each one of them before deciding to actually disable it.

When a rule doesn't fully match your situation, it might make sense to modify it a bit instead of completely disabling it. This works best when you want to ignore some violations but still keep the check for the rest of the code. Minor modifications of the rules are actually quite easy if you know LINQ, since all of the rules are just LINQ queries, or more precisely CQLinq queries, allowing you to query the results of NDepend's static code analysis.

For example, I decided to modify the "Potentially dead Methods" query to make it ignore methods from specific auto-generated classes. Double clicking the rule in the Queries and Rules Explorer brings you to the Queries and Rules Edit window which contains the query used by the rule. To modify the rule as required, I just had to append an additional condition to the existing checks in the canMethodBeConsideredAsDeadProc range variable:

let canMethodBeConsideredAsDeadProc = new Func<IMethod, bool>(
    m => !m.IsPubliclyVisible &&
         !m.IsEntryPoint &&
         !m.IsExplicitInterfaceImpl &&
         !m.IsClassConstructor &&
         !m.IsFinalizer &&
         !m.IsVirtual &&
         !(m.IsConstructor &&
           m.IsProtected) &&
         !m.IsEventAdder &&
         !m.IsEventRemover &&
         !m.IsGeneratedByCompiler &&
         !m.ParentType.IsDelegate &&

         !m.HasAttribute("System.Runtime.Serialization.OnSerializingAttribute".AllowNoMatch()) &&
         !m.HasAttribute("System.Runtime.Serialization.OnDeserializedAttribute".AllowNoMatch()) &&

         !m.HasAttribute("NDepend.Attributes.IsNotDeadCodeAttribute".AllowNoMatch()) &&
         !m.ParentType.FullName.StartsWith("My.Namespace.For.Generated.Classes."))

The editor provides IntelliSense and runs the query continuously, showing you the results or the compilation errors if there are any. Even so, it is quite challenging to come up with rules of your own or to significantly change the existing ones, mainly because there is no debugging available and only a small set of types can be output as a result to see the actual values being manipulated. Of course, any changing or writing of rules requires a project with representative data to test them on at all.

Nevertheless, I think the tool can be very useful when working with a large code base, and even more so when there are many developers working on it, requiring additional checks to be performed on every build to ensure that none of the agreed-upon conventions are being violated. The large set of default rules is a nice bonus, making the tool easier to start with. Based on my experience so far, I'm still not really sure how difficult it would be to write a new custom rule from scratch and what the limits for such checks are. It would be interesting to compare its capabilities in this field to the Roslyn CTP that was released last year. This might be an interesting topic for another blog post.

My Book NuGet 2 Essentials is Now Available


I've been pretty busy since late spring this year, working on many extracurricular projects besides my full-time job. Most of that time was definitely spent on the book I've been writing together with my coworker Dejan Dakić.

NuGet 2 Essentials is a comprehensive guide on NuGet, covering as many features as possible both for consumers and for publishers of packages. It starts off with basic instructions on setting up NuGet in different ways, followed by in-depth coverage of package consumption and related features, including source control and build server integration for readers working in larger development teams. The second part is focused on creating packages, again starting with the basics of creating the very first package, but then guiding the reader through most of the package features available in NuGet 2.7, accompanying each one of them with a working example demonstrating how it can be used. The book concludes with a listing of different options for hosting your own NuGet package source or server, followed up by instructions on how to actually set it up.

To sum it up, the book is an up-to-date, complete guide to NuGet, written in a very concise and practical manner with many hands-on examples to learn from and to see the features in action. Since NuGet is now included with every edition of Visual Studio and serves as the release vehicle for many of Microsoft's libraries, not only open source and third party ones, its importance for .NET development will only grow. The recent announcement of support for Xamarin platforms only reinforces this. As one of the book's authors I'm of course biased, but I think it is currently the best available resource for learning about NuGet.

The book is available on Amazon and sold directly by the publisher.

Unit Testing Asynchronous UI Code in WinRT


Writing unit tests for code that needs to run on the UI thread can be quite a challenge in WinRT. On top of that, one can quickly stumble upon classes required to run on the UI thread even when not expecting them to. In a project I worked on a while ago, WriteableBitmap was one such class.

It all started when I tried to write my first unit test with that class (I removed most of the actual code for clarity):

[TestMethod]
public void WriteableBitmapTest()
{
    var bitmap = new WriteableBitmap(100, 100);
    bitmap.DrawEllipseCentered(50, 50, 30, 30, Colors.Blue);
}

Fortunately the error message returned by the Visual Studio 2013 test runner is quite clear about the problem: The application called an interface that was marshalled for a different thread. (Exception from HRESULT: 0x8001010E (RPC_E_WRONG_THREAD)). If you are using UI objects in test consider using [UITestMethod] attribute instead of [TestMethod] to execute test in UI thread. Hence my next attempt:

[UITestMethod]
public void UiThreadTest()
{
    var bitmap = new WriteableBitmap(100, 100);
    bitmap.DrawEllipseCentered(50, 50, 30, 30, Colors.Blue);
}

This works; at least until you try to add an asynchronous method call to the test:

[UITestMethod]
public async Task AsyncTestOnUiThread()
{
    var bitmap = new WriteableBitmap(100, 100);
    bitmap.DrawEllipseCentered(50, 50, 30, 30, Colors.Blue);

    await SaveToFileAsync(bitmap);
}

The test runner responds to such an attempt with a different error message: async TestMethod with UITestMethodAttribute are not supported. Either remove async or use TestMethodAttribute. This leaves us with little choice:

  • TestMethodAttribute supports asynchronous test methods but doesn’t run them on UI thread.
  • UITestMethodAttribute runs test methods on UI thread but doesn’t support asynchronous methods.

That's where I got stuck for a while, deciding not to write a couple of tests that I had planned. A Stack Overflow answer by chue x inspired me to give it another try. This time I stuck with TestMethodAttribute and took a different approach to both getting the code executed on the UI thread and calling asynchronous methods:

[TestMethod]
public async Task TestAsyncCodeOnUiThread()
{
    var taskSource = new TaskCompletionSource<object>();
    await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
        CoreDispatcherPriority.Normal, async () =>
    {
        try
        {
            var bitmap = new WriteableBitmap(100, 100);
            bitmap.DrawEllipseCentered(50, 50, 30, 30, Colors.Blue);

            await SaveToFileAsync(bitmap);
            taskSource.SetResult(null);
        }
        catch (Exception e)
        {
            taskSource.SetException(e);
        }
    });
    await taskSource.Task;
}

Using TestMethodAttribute allowed me to make the test method asynchronous. CoreDispatcher.RunAsync now takes care of running my code on the UI thread. Finally the test works as I wanted it to.

There is another very important detail in my test I would like to bring attention to: the use of TaskCompletionSource<T>. It is required to properly synchronize the code in the lambda passed to RunAsync with the remaining code in the test method. Although it might not appear so, the lambda returns void (as declared by DispatchedHandler) and is therefore not guaranteed to complete before the test method body continues executing, since it isn't awaited. By awaiting TaskCompletionSource<T>.Task as shown above, the test method execution will only continue after SetResult is called at the end of the lambda. The try-catch statement inside the lambda makes sure that any exceptions thrown in its body are properly caught by the test runner instead of crashing it.
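Since this pattern repeats itself in every such test, it can be extracted into a helper method. Here's a minimal sketch of one (the class and method names are mine, not part of any framework):

public static class UiThreadHelper
{
    public static async Task RunOnUiThreadAsync(Func<Task> asyncBody)
    {
        var taskSource = new TaskCompletionSource<object>();
        await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
            CoreDispatcherPriority.Normal, async () =>
        {
            try
            {
                // run the test body on the UI thread
                await asyncBody();
                taskSource.SetResult(null);
            }
            catch (Exception e)
            {
                // propagate any exception to the awaiting test method
                taskSource.SetException(e);
            }
        });
        await taskSource.Task;
    }
}

A test can then simply call await UiThreadHelper.RunOnUiThreadAsync(async () => { /* test body */ });.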


Troubleshooting Web Applications Accepting Client Certificates


Configuring web applications which need to accept client certificates is one of those tasks I do just rarely enough to forget about the issues I had to resolve to make everything work the previous time. Well, last week I had another opportunity to refresh my knowledge, and as expected, not everything went smoothly.

I started out by configuring the server side:

  • In Internet Information Services (IIS) Manager I added a binding for https to my website. The selected certificate's subject name must match the hostname that will be used to access the server, and its issuer must be trusted on the client machine. I encountered my first issue right here: after I configured the binding, the web site didn't start any more. It turned out the port was already in use by another application: Skype. It didn't take me too long to solve this, thanks to an answer I found on Stack Overflow.
  • At the web application level I changed the SSL Settings in IIS Manager: I switched the Client certificates option from Ignore to Accept.

Here’s my IHttpModule code handling AuthenticateRequest on the server:

var context = HttpContext.Current;
var request = context.Request;
var certificate = request.ClientCertificate;

if (certificate.IsPresent)
{
    var identity = new GenericIdentity(certificate.Subject);
    var principal = new GenericPrincipal(identity, new string[0]);
    context.User = principal;
}
else
{
    var response = context.Response;
    response.StatusCode = (int)HttpStatusCode.Unauthorized;
    response.StatusDescription = "No valid certificate";
    response.Flush();
    response.Close();
}

Followed by the sample client code (comments contain some useful information):

var store = new X509Store(StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
// the selected certificate needs to be trusted on the server
var certificate = store.Certificates.Find(X509FindType.FindBySubjectName, "Subject", true)[0];

var handler = new WebRequestHandler();
handler.ClientCertificateOptions = ClientCertificateOption.Manual;
handler.ClientCertificates.Add(certificate);

var httpClient = new HttpClient(handler);
// https is required and hostname must match server certificate subject name
var body = await httpClient.GetStringAsync("https://localhost/WebApplication/WebPage.cshtml");

Still, for some reason the IsPresent property on HttpClientCertificate in the above authentication code kept returning false. After going through all the configuration steps in detail once again, I determined that the reason for this behavior was that the client certificate was not trusted on the server. I hadn't noticed that before, because the certificate showed as trusted when I looked at it in my certificate store.

How could that happen? Well, I generated the client certificate myself and had to put it in the Trusted Root Certification Authorities store as well to establish trust. I did that, but I used the current user store instead of the local computer store, therefore the certificate was still not trusted by the server, which was running under the application pool identity. Once I fixed that, everything started working as expected.

Configuring Common NuGet Repository for Multiple Solutions


When using NuGet, you typically don't need to worry about its repository location. In simple everyday scenarios it "just works" without you even paying attention to it. If you've ever used the package restore functionality, you might have noticed that the packages are actually downloaded to the packages folder alongside the solution file, because you had to exclude it from source control, but that's probably the most thought you have ever given to it.

Once you have more than a single solution file for your product, this default approach starts to break down. To be more precise, the problems occur when a single project file is included in multiple solution files which are not placed in the same folder. Since by default the repository is placed in the same folder as the solution file, there will be two repositories in this case, one for each solution file. Depending on the opened solution, the packages for the shared project will be placed in either one of the repositories when added to a project. Let's demonstrate this with a simple example:

  • Root
    • Common
      • ProjectA
        • ProjectA.csproj
    • Solution1
      • Solution1.sln
      • ProjectB
        • ProjectB.csproj
    • Solution2
      • Solution2.sln
      • ProjectC
        • ProjectC.csproj

Let’s assume that ProjectA is included in both Solution1 and Solution2. Its references will by default point to:

  • “..\..\Solution1\packages” when added from Solution1, or
  • “..\..\Solution2\packages” when added from Solution2.

At first everything will work fine, but as soon as a different developer, who doesn't yet have the packages on disk, retrieves the code from source control, the build will fail even after restoring the packages. The package restore process will put all the packages in the repository of the currently opened solution, but the reference paths will point to the other repository if they were added from the second solution. A possible workaround would be to restore the packages for both solutions before building, but there is a cleaner way to solve the problem.

The default repository location can be changed using a configuration file named NuGet.Config. Because of the way NuGet searches for this file and reads its contents, it's easy to create a single configuration file for all the solutions in a product and share it with other developers through source control: after checking the machine-specific and user-specific locations, NuGet will traverse the folders from the drive root up to its working directory and use the setting value from the last configuration file it encounters, i.e. the one closest to the working directory.

The issue from the above example can therefore be solved by creating a NuGet.Config file with the following contents in the Root folder and including it in source control:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <config>
    <add key="repositorypath" value=".\packages" />
  </config>
</configuration>

Now both solutions will use the same repository (i.e. "Root\packages") and the problem of missing packages after package restore will be gone. Of course, you need to set this up before adding any packages to your projects; otherwise you'll need to:

  • either remove all of the packages and add them again (the recommended way),
  • or manually fix the paths of the referenced assemblies from the packages in the project files (make sure you know exactly what you’re doing).

You can change the repositorypath value in the configuration file to your liking, just make sure it is defined relative to the file's location. Here's why I think other approaches don't work well, even if they seem like a good idea at first:

  • Absolute paths will force all developers to have the repository at the same location, which is not only impractical but can also be impossible when they don't have a disk with that letter, or don't have enough available space or the required permissions on it. Also, you really don't want the referenced assemblies in your projects to point to a fixed path.
  • Leaving the path up to the developers (by setting it in a machine- or user-specific configuration file) won't work for the same reason the default location doesn't work: project files contain fixed paths relative to the project file location, and these are the same for all developers.

You should also make sure you use the automatic package restore that has been available since NuGet 2.7 instead of its older alternative, MSBuild-integrated package restore. For it to work and be used, all developers need to have at least NuGet 2.7 installed and you must not execute the "Enable NuGet Package Restore" command from Visual Studio. Otherwise your project files will include a fixed reference to the NuGet.targets file placed below the current solution file, causing a problem similar to the original one.

Automatic package restore works without NuGet.exe and NuGet.targets included in source control and solves other issues of MSBuild-integrated package restore as well (e.g. projects failing to load or building incorrectly before the packages are restored if they reference a targets file from a package). From Visual Studio this new approach to package restore works without having to take any additional actions. If you want to build your projects with a build script as well (a common scenario on a build server), make sure you have an updated version of NuGet.exe in the path and call the following command for each solution before building it:

nuget.exe restore MySolution.sln

If you've found this blog post useful, you might also be interested in the NuGet book that I coauthored and that has just recently been released: NuGet 2 Essentials. Until January 3rd you can buy it directly from the publisher in eBook format for only 5 USD.

How to Return Additional Info with HTTP Error Status Codes


The HTTP protocol defines status codes as the means of informing the client about different error conditions encountered on the server during request processing. Sometimes it can be beneficial to include additional information about the error that occurred. The protocol allows two different approaches in such cases:

  • The status code can be extended with an optional reason phrase, which is intended for the user and not parsed by clients. There are some limitations to it, though: it does not support different encodings and it can't span multiple lines. While the latter can usually be avoided, the former makes it impossible to return localized messages. Also, as I will describe in detail, depending on the API used it might be impossible to retrieve the reason phrase from code.
  • Error pages, on the other hand, can have content which doesn't suffer from either of the above limitations: both different encodings and multi-line content are supported. Unfortunately, depending on the API used, it's again not always possible to retrieve the page content from code.

Let’s take a more detailed look at different scenarios, starting with two generic client classes: HttpWebRequest and the newer HttpClient. On the server side I’ll use Web API to return the desired response:

public class TestController : ApiController
{
    public string Get()
    {
        var response = new HttpResponseMessage(HttpStatusCode.InternalServerError);
        response.ReasonPhrase = "Database not available";
        response.Content = new StringContent("Database not available");
        throw new HttpResponseException(response);
    }
}

This will result in the following response:

HTTP/1.1 500 Database not available
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 22
Content-Type: text/plain; charset=utf-8
Expires: -1
Server: Microsoft-IIS/8.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sun, 29 Dec 2013 12:35:53 GMT

Database not available

Using HttpWebRequest the typical client code would look like:

var request = WebRequest.Create(uri);
var response = request.GetResponse();
using (var reader = new StreamReader(response.GetResponseStream()))
{
    var content = reader.ReadToEnd();
}

For error status codes request.GetResponse will throw a WebException which can be inspected as follows:

catch (WebException e)
{
    var message = e.Message;
    using (var reader = new StreamReader(e.Response.GetResponseStream()))
    {
        var content = reader.ReadToEnd();
    }
}

The Message property will contain a generic, status-dependent error message ("The remote server returned an error: (500) Internal Server Error."), while Response.GetResponseStream() returns a stream with the page content ("Database not available" in my case). The reason phrase can't be accessed in this case.

Using HttpClient, this would be the typical client code:

var httpClient = new HttpClient();
var result = await httpClient.GetStringAsync(uri);

For error status codes httpClient.GetStringAsync will throw an HttpRequestException with the Message property containing "Response status code does not indicate success: 500 (Database not available).", i.e. both the status code and the reason phrase. To retrieve the body of error responses, the client code needs to be modified:

var httpClient = new HttpClient();
var response = await httpClient.GetAsync(uri);
var content = await response.Content.ReadAsStringAsync();
response.EnsureSuccessStatusCode();

In this case content will contain the page content for both non-error and error responses. The StatusCode and ReasonPhrase properties of response can be used to get the corresponding response values. EnsureSuccessStatusCode will throw the same exception as GetStringAsync in the first sample.

Web services should usually throw exceptions from web method code so that they are returned to the client inside the SOAP response. Still, sometimes the client needs to inspect other HTTP server responses as well. The simplest way to simulate this situation is to construct such a response from an IHttpModule:

public class ExceptionModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.AuthenticateRequest += ContextOnAuthenticateRequest;
    }

    private void ContextOnAuthenticateRequest(object sender, EventArgs eventArgs)
    {
        var response = HttpContext.Current.Response;
        response.StatusCode = (int)HttpStatusCode.InternalServerError;
        response.StatusDescription = "Database not available";
        response.Write("Database not available");
        response.Flush();
        response.Close();
    }

    public void Dispose()
    { }
}

Once the module is registered in web.config, the above response will be returned for any web service in the web application:

<configuration>
  <system.webServer>
    <modules>
      <add name="ExceptionModule" type="WebAPI.ExceptionModule" />
    </modules>
  </system.webServer>
</configuration>

After adding the web service to the client project as a service reference, the following (WCF) client code can be used:

var proxy = new WebServiceClient();
var result = proxy.WebMethod();

In case of an error status code the WebMethod will throw a MessageSecurityException, containing a WebException as InnerException. This WebException can be inspected just like when using HttpWebRequest.
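In code, that inspection might look something like this (a sketch, reusing the WebServiceClient proxy from above):

try
{
    var proxy = new WebServiceClient();
    var result = proxy.WebMethod();
}
catch (MessageSecurityException e)
{
    // the HTTP details are wrapped in the inner WebException
    var webException = e.InnerException as WebException;
    if (webException != null)
    {
        using (var reader = new StreamReader(
            webException.Response.GetResponseStream()))
        {
            var content = reader.ReadToEnd();
        }
    }
}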

Of course web services can still be added to the client project as an old school web reference. The client code stays similar:

var proxy = new WebService();
var result = proxy.WebMethod();

The behavior changes, though: WebMethod will throw an unwrapped WebException. Not only that, it will also close the response stream returned by Response.GetResponseStream() before returning, making it impossible to get the response body from code. Not so long ago I spent too much time wondering why the stream was claimed to be disposed when I accessed it. Actually, this was the main reason I started writing this blog post in the first place. Before concluding, I should also mention that, as a kind of consolation, the Message property in this case includes the reason phrase: "The request failed with HTTP status 403: Database not available."

So, what is the best approach to returning additional information with error status codes? Obviously, it depends on the scenario. As a general rule, I would suggest always including a short non-localized reason phrase. To make sure it is accessible in all cases, it's a good idea to include it in the page content as well. If the message needs to be expanded further or localized, include that in the page content instead.

Mocking EF Context for Unit Testing WCF Services


Unit tests are all about testing a specific unit of code without any external dependencies. This makes the tests faster and less fragile, since there are no out-of-process calls and all dependencies are under the test's control. Of course, it's not always easy to remove all external dependencies. One such example is a WCF service using Entity Framework for database access in its operations.

It would be easy to create a test calling such a web service through a proxy (i.e. a service reference) while it is connected to an appropriate sample database. Though, that would be an integration test, not a unit test. Such tests are slow, it's difficult to set up the environment for them to run correctly, and they tend to break easily because something happened to the database or the hosted service. Wouldn't it be nice to be able to test a service without the database and without having to host it at all? Let's see how this can be done.

Our sample DbContext will have only a single entity:

public class ServiceContext : DbContext
{
    public DbSet<ErrorReportEntity> ErrorReports { get; set; }
}

public class ErrorReportEntity
{
    public int Id { get; set; }
    public string ExceptionDetails { get; set; }
    public DateTime OccuredAt { get; set; }
    public DateTime ReportedAt { get; set; }
}

The sample service will have only a single method:

[ServiceContract]
public interface IService
{
    [OperationContract]
    void ReportError(ErrorReport report);
}

public class Service : IService
{
    public void ReportError(ErrorReport report)
    {
        using (var context = new ServiceContext())
        {
            var reportEntity = new ErrorReportEntity
            {
                ExceptionDetails = report.ExceptionDetails,
                OccuredAt = report.OccuredAt,
                ReportedAt = DateTime.Now,
            };
            context.ErrorReports.Add(reportEntity);
            context.SaveChanges();
        }
    }
}

We first need an alternative implementation of ServiceContext for testing which won’t require a database. This could be its interface:

public interface IServiceContext : IDisposable
{
    IDbSet<ErrorReportEntity> ErrorReports { get; set; }
    void SaveChanges();
}

Notice the use of IDbSet instead of DbSet. We also added SaveChanges to the interface since we need to call it from our service. ServiceContext now needs to implement this interface:

public class ServiceContext : DbContext, IServiceContext
{
    public IDbSet<ErrorReportEntity> ErrorReports { get; set; }

    public new void SaveChanges()
    {
        base.SaveChanges();
    }
}

For the tests we will of course have a different implementation.

public class MockServiceContext : IServiceContext
{
    public IDbSet<ErrorReportEntity> ErrorReports { get; set; }

    public MockServiceContext()
    {
        ErrorReports = new InMemoryDbSet<ErrorReportEntity>();
    }

    public void SaveChanges()
    { }

    public void Dispose()
    { }
}

You might wonder where InMemoryDbSet came from. It's an in-memory implementation of IDbSet which you can get by installing the FakeDbSet NuGet package.

Having two different implementations of IServiceContext, we need a way to inject the desired one into our service for each case: MockServiceContext when testing and ServiceContext when actually hosting the service in IIS. We’ll use Ninject as the dependency injection framework with the constructor injection pattern. This would be the naïve attempt at changing the service implementation:

public class Service : IService
{
    private readonly IServiceContext _context;

    public Service(IServiceContext context)
    {
        _context = context;
    }

    public void ReportError(ErrorReport report)
    {
         var reportEntity = new ErrorReportEntity
         {
             ExceptionDetails = report.ExceptionDetails,
             OccuredAt = report.OccuredAt,
             ReportedAt = DateTime.Now,
         };
         _context.ErrorReports.Add(reportEntity);
         _context.SaveChanges();
    }
}

The downside of this approach is that we have changed the behavior. Instead of creating a new DbContext for each method call, we now use the same instance for the complete lifetime of the service. We’ll see how to fix that later. First we need to make sure that we always pass the correct IServiceContext implementation to the constructor. In the test we’ll do it manually:

[TestMethod]
public void ValidReport()
{
    var context = new MockServiceContext();
    var service = new Service(context);

    var error = new ErrorReport { /* initialize values */ };

    service.ReportError(error);

    var errorFromDb = context.ErrorReports
        .Single(e => e.OccuredAt == error.OccuredAt);
    // assert property values
}

For hosting in IIS we'll take advantage of the WCF extensions for Ninject. Among other things, the NuGet package installation adds a NinjectWebCommon.cs file to the App_Start folder. We need to open it and add the following line of code to the RegisterServices method inside it, to register the correct IServiceContext implementation with the Ninject kernel:

kernel.Bind<IServiceContext>().To<ServiceContext>();

We only need to add the Ninject factory to the service declaration in Service.svc file, and the Service class will be correctly created – Ninject will pass it an instance of ServiceContext:

<%@ ServiceHost Language="C#" 
    Service="WebService.Service" 
    CodeBehind="Service.svc.cs" 
    Factory="Ninject.Extensions.Wcf.NinjectServiceHostFactory" %>

Now it's time to address the already mentioned issue of not instantiating a new ServiceContext for each method call. The Ninject.Extensions.Factory NuGet package can help us with that. It will allow us to pass a ServiceContext factory to the service instead of passing it an already created ServiceContext. We first need a factory interface:

public interface IServiceContextFactory
{
    IServiceContext CreateContext();
}

Now we can change DbContext handling in Service back to the way it originally was:

public class Service : IService
{
    private readonly IServiceContextFactory _contextFactory;

    public Service(IServiceContextFactory contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public void ReportError(ErrorReport report)
    {
        using (var context = _contextFactory.CreateContext())
        {
            var reportEntity = new ErrorReportEntity
            {
                ExceptionDetails = report.ExceptionDetails,
                OccuredAt = report.OccuredAt,
                ReportedAt = DateTime.Now,
            };
            context.ErrorReports.Add(reportEntity);
            context.SaveChanges();
        }
    }
}

For this to work when the service is hosted in IIS, we need to additionally register the factory in RegisterServices:

kernel.Bind<IServiceContextFactory>().ToFactory();

For the test we need to implement the factory ourselves – it only takes a couple of lines:

public class MockServiceContextFactory : IServiceContextFactory
{
    public IServiceContext Context { get; private set; }

    public MockServiceContextFactory()
    {
        Context = new MockServiceContext();
    }

    public IServiceContext CreateContext()
    {
        return Context;
    }
}

At the beginning of the test we now create a factory instead of the context directly:

var contextFactory = new MockServiceContextFactory();
var service = new Service(contextFactory);
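The assertions at the end of the test can then read the data through the factory's exposed Context property, along these lines:

var errorFromDb = contextFactory.Context.ErrorReports
    .Single(e => e.OccuredAt == error.OccuredAt);
// assert property values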

That’s it. With some pretty simple refactoring we managed to get rid of all hard-coded external dependencies in our service. Writing tests is now much simpler and there is almost no code overhead when additional members are added to DbContext or the service contract.

Connecting to Local IIS Express Server from WP8 Emulator


If you're developing a Windows Phone 8 application which doesn't only connect to public web services to get its data, but also communicates with your own custom web service, you'll want to be able to connect to it from the Windows Phone Emulator with as little hassle as possible. Usually that means you'll want to connect to your local IIS Express server, to avoid deploying the web service to the full IIS server on your local machine, or even worse, on a different machine.

While setting this up on my development machine, two existing web pages proved very helpful, but to get everything working the way I wanted, I had to combine information from both of them and even configure some things on my own. In this blog post I'll describe the process I used, as a reference for future use. Hopefully someone else will find it helpful as well.

The key to setting everything up is putting the right web service host IP in your Windows Phone application. Since the application is actually running on a different virtual machine in Hyper-V, you can't just use localhost – you'll need to use the IP of your development machine. By default the emulator is connected to all of Hyper-V's virtual switches, so your WP8 application can connect to the web service through any one of them, but I suggest you use the internal switch, because the corresponding IP of your development machine is always the same, even when it's not on a network, which might happen to your notebook if you're developing while travelling.

To figure out the IP to use, open a command prompt and type

ipconfig

The command will return information on multiple adapters; you’re interested in the one matching the name of Hyper-V’s internal switch. In my case this was the correct output:

Ethernet adapter vEthernet (Internal Ethernet Port Windows Phone Emulator Internal Switch):

   Connection-specific DNS Suffix  . :
   Link-local IPv6 Address . . . . . : fe80::c0fc:ebf2:755d:4a03%14
   IPv4 Address. . . . . . . . . . . : 169.254.80.80
   Subnet Mask . . . . . . . . . . . : 255.255.0.0
   Default Gateway . . . . . . . . . :

It seems 169.254.80.80 is the IP I'm going to use. Now check the port on which your web service is running. You can find it in the Properties of its Visual Studio project, on the Web tab:

[Screenshot: web service port on the project's Web tab]

Based on the information gathered, I can now set the host part of any web service URL in my WP application to 169.254.80.80:3276. The next step is to bind the web service in IIS Express to this address. This can be done by editing the applicationhost.config file in the IISExpress\config subfolder of your personal Documents folder. Open the file and search for the name of your web service project. You should find a corresponding site element:

<site name="WebServiceProjectName" id="3">
  <application path="/" applicationPool="Clr4IntegratedAppPool">
    <virtualDirectory path="/" physicalPath="Path\To\WebService\Folder" />
  </application>
  <bindings>
    <binding protocol="http" bindingInformation="*:3276:localhost" />
  </bindings>
</site>

You need to add the following binding to the bindings element:

<binding protocol="http" bindingInformation="*:3276:169.254.80.80" />

The unfortunate side effect of this change is that you will need to run Visual Studio as administrator for it to work: without administrative privileges, IIS Express can only listen on localhost. After applying the change, make sure IIS Express is not running and reopen the solution as administrator. I suggest you temporarily disable the firewall to check that IIS Express is configured correctly. Your WP8 application should now successfully connect to the web service.

Now let's re-enable the firewall and add a rule to it, so that everything keeps working with the firewall turned on. Open Windows Firewall with Advanced Security and select Inbound Rules in the tree view on the left. Select New Rule from the menu on the right. Once the wizard opens, select Port on the first page and click Next. On the second page select TCP and enter your port, i.e. 3276 in my case.

[Screenshot: firewall inbound rule configuration]

On the third page keep Allow the connection selected. On the fourth page select the connection profiles you want the rule to apply to. The best idea would be to uncheck Public, but at least I wasn't able to change the location of the connection corresponding to the internal switch to anything other than public, because it is marked as unidentified. I could make all unidentified connections private through group policy, but that wouldn't make much of a difference. You can check this for your machine in the Network and Sharing Center window. On the last page of the wizard choose a suitable name for the rule, so you'll be able to recognize it in the future; e.g. "IIS Express My Web Service" would be a good name.

With the firewall enabled, test your WP8 application one last time to make sure it still works, and you're done.

Unit Testing log4net Logging Code


There's usually no need to unit test logging code. If you just want to ignore it in your tests, there's nothing you need to do when using log4net: the emitted information will not be logged anywhere by default. For production use you will of course configure the appenders in the config file as required.

What if you still want to make sure you’re going to log the right information? It turns out you can use unit tests for this. The key to it is configuring an appender directly from your unit test code. The best candidate for it is MemoryAppender. Here’s the required code:

var appender = new MemoryAppender();
BasicConfigurator.Configure(appender);

Make sure you keep a reference to the appender. You’ll need it at the end of the test to check the logged information:

var logEntries = appender.GetEvents();

The call returns an array of LoggingEvents which you can then inspect using asserts, making sure the correct number of events was logged and that each one of them contains the expected data.
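Putting it all together, a complete test might look something like this (a minimal sketch in MSTest syntax, using the log4net, log4net.Appender, log4net.Config and log4net.Core namespaces; the direct logger call and the LoggingTests class name are mine, standing in for the real code under test):

[TestMethod]
public void LogsExpectedWarning()
{
    var appender = new MemoryAppender();
    BasicConfigurator.Configure(appender);

    // in a real test, this logging would happen inside the code under test
    var log = LogManager.GetLogger(typeof(LoggingTests));
    log.Warn("Something looks wrong");

    var logEntries = appender.GetEvents();
    Assert.AreEqual(1, logEntries.Length);
    Assert.AreEqual(Level.Warn, logEntries[0].Level);
    Assert.AreEqual("Something looks wrong", logEntries[0].RenderedMessage);
}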

You’re probably not going to use this often, but it can come in handy when you need it.

Unit Testing Windows Phone Applications


If you're used to depending on unit testing during development, or even practice test driven development, working on Windows Phone applications can be a challenge. The fact that the code must always run on the phone (or an emulator) poses some limitations on the execution of the tests.

For quite some time the only available option for unit testing Windows Phone applications was the Windows Phone Toolkit Test Framework. It required you to create an additional Windows Phone application containing the unit tests. To run them, you had to actually run this application either on the emulator or on the phone and start the tests manually through the UI. Obviously the approach was far from perfect. Because of the lengthy manual process for running the tests, you would usually avoid running them too often, diminishing their value a lot. Still, having the possibility to write and run unit tests was better than not having that option at all.

[Screenshots: running WP unit tests and viewing the results on the phone]

Visual Studio 2012 Update 2 brought built-in support for unit testing Windows Phone 8 applications, making the above described approach obsolete, unless you’re working on a Windows Phone 7 application. Now there is a Windows Phone Unit Test app template available in Visual Studio. The unit tests that you put in a project based on this template can be run directly from the Visual Studio test runner, as well as the ReSharper test runner which I personally prefer. The test runner will take care of starting the emulator, deploying the app and running the selected tests on it. The results will be displayed directly in the test runner’s interface just like they are for unit tests on other platforms.

Although this makes the situation much better, there is still a minor issue interfering with my usual development workflow. I got used to NCrunch running the unit tests continuously for me and showing the results directly inside the code editor, in the gutter of the lines covered by the tests. Unfortunately NCrunch doesn't yet properly support the Windows Phone platform and can't run the tests on the emulator.

Still, there are ways to use it for at least some of the tests. You will need to use portable class libraries to achieve that, though. Putting your app's business logic in a portable class library allows you to reference this library from a regular .NET framework Unit Test Project. NCrunch is able to run those flawlessly. Not only that: since a portable class library can be used on any supported platform, it will be much easier to port the app to Windows Store and Windows desktop, and with the help of Xamarin tools even to Android, iOS and MacOS.

Of course, not all the code can be put inside a portable class library. As soon as it uses any platform-specific APIs, it will need to remain in the native Windows Phone part of the app. For most such code this shouldn't be a problem, since you can always encapsulate it and hide it behind an interface which you'll want to mock in unit tests anyway, as sketched below. The only exception is the UI-specific code: view models and converters. With most MVVM frameworks their base classes depend on platform-specific classes and interfaces, making them unsuitable for portable class libraries. If you really want to put those in a portable class library as well, you should take a look at MvvmCross. It's the only MVVM framework I know of which allows portable view models and converters. Of course, it doesn't make any sense to port an existing project from a different framework to MvvmCross just to be able to use NCrunch; but if you're starting a new project or planning to support multiple platforms, it is definitely an option worth considering.
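
As a minimal sketch of that encapsulation idea (all names here are hypothetical), platform-specific storage access could be hidden behind an interface living in the portable class library, with only the implementation staying in the native Windows Phone project:

// In the portable class library: the abstraction the rest of the code depends on.
// In unit tests this interface is simply replaced by a mock.
public interface ISettingsStore
{
    string GetValue(string key);
    void SetValue(string key, string value);
}

// In the native Windows Phone project: the platform-specific implementation.
public class PhoneSettingsStore : ISettingsStore
{
    public string GetValue(string key)
    {
        var settings = System.IO.IsolatedStorage.IsolatedStorageSettings.ApplicationSettings;
        return settings.Contains(key) ? (string)settings[key] : null;
    }

    public void SetValue(string key, string value)
    {
        var settings = System.IO.IsolatedStorage.IsolatedStorageSettings.ApplicationSettings;
        settings[key] = value;
        settings.Save();
    }
}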


Migrating Unit Tests From WP Toolkit Test Framework To WP Unit Test App


Since Visual Studio 2012 Update 2 there is a project template available for unit testing Windows Phone apps: Windows Phone Unit Test App. Unlike its predecessor, the Windows Phone Toolkit Test Framework, it doesn't require the tests to be manually started from the device or the emulator. They can be started directly from the unit test runner's window in Visual Studio. This feature alone should be a good enough reason for migrating any existing test projects from the old framework to the new template. I've recently done this with one of my projects and decided to document the process in case I need to do it again in the future. I'm posting it here since it might be useful for others as well.

  1. Rename the existing unit test project, e.g. by adding a .Old postfix. First rename it in Visual Studio to rename the .csproj file, then remove it from the solution, rename the folder in the file system and add back the existing project from the new folder.
  2. Add a new project to the solution based on the Windows Phone Unit Test App template, using the original name.
  3. In the file system copy the files with test and any helper classes to the new project folder.
  4. In Visual Studio show all files in Solution Explorer and include all the copied files in the project.
  5. Add all the references (except Windows Phone Toolkit Test Framework and Windows Phone Toolkit NuGet packages) from the old unit test project to the new unit test project (NuGet packages, assembly references and project references within the solution).
  6. In the using directives in all test classes replace the Microsoft.VisualStudio.TestTools.UnitTesting namespace with the Microsoft.VisualStudio.TestPlatform.UnitTestFramework namespace (see the snippet after this list).
  7. Delete any Description attributes on the test methods.
  8. The new unit test project doesn't include an AnyCPU configuration; therefore, if your solution is configured for AnyCPU, you'll need to open Configuration Manager and enable build for the test project in the AnyCPU solution platform. Otherwise the test project won't build and the tests won't run, unless you manually trigger the project build.
  9. If you’re using NCrunch, open its configuration window, select the new unit test project and enable the “Ignore this component completely” option to prevent NCrunch from trying to run the Windows Phone assemblies in its own runtime. It will just cause false failures since it can’t run the tests in the Windows Phone runtime.
  10. Now you’re ready to compile the project and run the tests.
  11. Of course, the last remaining step is to remove the old unit test project from the solution and delete it.
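
The namespace change from step 6 boils down to a simple find and replace in each test class:

// Before (Windows Phone Toolkit Test Framework):
using Microsoft.VisualStudio.TestTools.UnitTesting;

// After (Windows Phone Unit Test App):
using Microsoft.VisualStudio.TestPlatform.UnitTestFramework;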

After I’ve gone through the above described migration process, three test were failing in the new project. After a more detailed investigation, it turned out they were failing in the old project as well; they just didn’t show as such because they were asynchronous (returning async Task). The old framework didn’t really support asynchronous tests. Fortunately, the new one does. Another reason for migration, if you ask me.

Inconsistent Validation of Extensions in WorkflowApplication


While refactoring an application hosting the Workflow Foundation runtime, I stumbled upon inconsistent behavior in the handling and validation of workflow extensions added to the host WorkflowApplication.

For those not familiar with workflow extensions: they are classes that can provide external services to custom activities requiring them. They are added to the WorkflowApplication class hosting the workflow and can therefore serve as an interface enabling the activities to communicate with external resources. An activity can retrieve the instance of the extension when it executes and call its methods. It can also declare that it requires a specific extension; in that case a validation is performed when starting the workflow, ensuring that the activity will receive an instance of this extension at runtime. Otherwise the workflow won't get started at all.

Let’s illustrate it with a sample. First we’ll define a minimal extension class:

public class ChildExtension
{ }

Then we’ll create a custom activity requiring and retrieving an instance of that extension:

using System;
using System.Activities;

public class CustomActivity : NativeActivity
{
    protected override void Execute(NativeActivityContext context)
    {
        // Retrieve the extension instance provided by the host.
        if (context.GetExtension<ChildExtension>() == null)
        {
            throw new NullReferenceException("No ChildExtension!");
        }
    }

    protected override void CacheMetadata(NativeActivityMetadata metadata)
    {
        // Declare that the activity can't run without a ChildExtension.
        metadata.RequireExtension<ChildExtension>();
        base.CacheMetadata(metadata);
    }
}

Now the workflow can be defined and hosted in our application:

var wfActivity = new CustomActivity();
var wfApp = new WorkflowApplication(wfActivity);
wfApp.Extensions.Add(new ChildExtension());
wfApp.Run();

As you can see, an instance of our extension has been added to the host. Otherwise a ValidationException would be thrown when calling Run.

So far so good. The inconsistencies occur when we try to take advantage of the IWorkflowInstanceExtension interface. If an extension implements it, it can serve as a parent extension, providing additional dependent extensions to the host. Let's create such an extension:

using System.Activities.Hosting;
using System.Collections.Generic;

public class RootExtension : IWorkflowInstanceExtension
{
    public IEnumerable<object> GetAdditionalExtensions()
    {
        // Dependent extensions returned here should be added to the host automatically.
        return new[] { new ChildExtension() };
    }

    public void SetInstance(WorkflowInstanceProxy instance)
    { }
}

Although there’s not much documentation available about the interface, this should allow us to only add the parent extension to the host, while having the additional extensions added to the host automatically:

var wfActivity = new CustomActivity();
var wfApp = new WorkflowApplication(wfActivity);
wfApp.Extensions.Add(new RootExtension());
wfApp.Run();

If we do that, a ValidationException is thrown when Run is called. Although this surprised me, I was even more surprised when I removed the call to RequireExtension from CustomActivity and everything worked just fine. The call to GetExtension returned the ChildExtension instance, although before the change an exception was thrown, stating: An extension of type 'Workflow.ChildExtension' must be configured in order to run this workflow.

This behavior seems inconsistent to me. I'd expect the exception not to be thrown in this case, or at least GetExtension not to return the extension if it isn't required. That's why I reported an issue to Microsoft Connect, hoping for an official explanation or fix. I wrote this blog post as a warning to anyone else trying to take advantage of the above functionality, and of course to attract additional attention to the Connect issue. Feel free to vote for it.

Creating Converters in MvvmCross


MvvmCross is an MVVM framework for XAML platforms, similar to Caliburn Micro and MvvmLight. Unlike its competition, it strongly focuses on portability and code reuse across all supported XAML platforms (WPF, Windows Phone and Windows Store), as well as the Xamarin platforms (Xamarin.iOS, Xamarin.Android and Xamarin.Mac). Therefore it has its own approach to creating converters, allowing them to be implemented in a portable class library and reused on all supported platforms.

The main reason preventing that even on the Microsoft platforms is the different native IValueConverter interface on each supported platform, which makes the interface unavailable in portable class libraries. MvvmCross resolves this issue by introducing its own converter interface, IMvxValueConverter, as well as a strongly typed generic abstract base class, MvxValueConverter<TFrom, TTo>. By implementing the former or deriving from the latter, it is possible to create a portable converter which can be used on any platform:

using System;
using System.Globalization;
using Cirrious.CrossCore.Converters; // assuming the MvvmCross v3.x namespace for the converter base classes

public class OnOffConverter : MvxValueConverter<bool, string>
{
    protected override string Convert(bool value, Type targetType, object parameter, CultureInfo culture)
    {
        return value ? "On" : "Off";
    }
}

Although such converters can only be bound directly using MvvmCross's Tibet binding attached properties, there are native wrappers available for each supported platform, allowing them to be used with traditional XAML bindings with next to no additional code:

public class NativeOnOffConverter : MvxNativeValueConverter<OnOffConverter>
{ }

Since MvvmCross v3.1, a portable converter's public properties can be exposed on its native converter wrapper as well:

public class BoolToTextConverter : MvxValueConverter<bool, string>
{
    public string TrueValue { get; set; }
    public string FalseValue { get; set; }

    protected override string Convert(bool value, Type targetType, object parameter, CultureInfo culture)
    {
        return value ? TrueValue : FalseValue;
    }
}

public class NativeBoolToTextConverter : MvxNativeValueConverter<BoolToTextConverter>
{
    public string TrueValue
    {
        get { return Wrapped.TrueValue; }
        set { Wrapped.TrueValue = value; }
    }

    public string FalseValue
    {
        get { return Wrapped.FalseValue; }
        set { Wrapped.FalseValue = value; }
    }
}

This way the converters can be implemented in a more generic manner and only configured with concrete values when they are instantiated in XAML:

<conv:NativeBoolToTextConverter x:Key="OnOffConverter" TrueValue="On" FalseValue="Off" />

The added benefit of this approach to implementing converters is testability. Because the converters are portable, they can be tested independently of the target platform, even on platforms with only limited native unit testing support. If you're interested in that, check out my blog post on unit testing Windows Phone applications.
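
As a quick sketch of such a test (MSTest syntax; the cast works because MvxValueConverter implements the IMvxValueConverter interface, and using System.Globalization is assumed):

[TestMethod]
public void ConvertsBooleanToConfiguredText()
{
    // The converter is plain portable code, so no emulator is needed to test it.
    IMvxValueConverter converter = new BoolToTextConverter
    {
        TrueValue = "On",
        FalseValue = "Off"
    };

    var result = converter.Convert(true, typeof(string), null, CultureInfo.InvariantCulture);

    Assert.AreEqual("On", result);
}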

Implementing Converters Returning Native Types in MvvmCross


Although MvvmCross allows creating portable converters which can be used on multiple platforms, those converters can still only return types which are themselves portable. Of course there are cases when it is desirable for a converter to return a non-portable class, such as Visibility, Brush or BitmapImage on Windows platforms.

The basics are probably already covered by MvvmCross itself: it includes visibility and color converters which can be used on all platforms. Still, there will always be other native classes you need. In that case the best approach is to implement a portable base converter returning a common portable value which can be converted to the specific native value in a simple manner. For a BitmapImage such a common value could be a Uri.

On Windows platforms native converters always need to be wrapped in an MvxNativeValueConverter anyway. That same wrapper can also take care of the simple type conversion mentioned above. Here's an example for BitmapImage:

public class ImageConverter :
    MvxNativeValueConverter<MvxConverters.UriConverter>
{
    public override object Convert(object value, Type targetType,
                                   object parameter, CultureInfo culture)
    {
        // Let the portable converter produce a Uri, then wrap it in a native BitmapImage.
        var uri = base.Convert(value, targetType, parameter, culture) as Uri;
        return uri == null ? null : new BitmapImage(uri);
    }
}

This way all the actual business logic is kept in the portable converter, allowing it to be reused and tested centrally. The native wrappers remain trivial type converters.

Slides And Demos From a Local Windows 8.1 Development Event


Last Tuesday the local Microsoft DPE team organized a free event for developers thinking about taking part in the regional Windows 8.1 Developers Contest. It was planned as a condensed course for developers with no previous experience in Windows Store app development. I was really glad to see that many people turned up, showed great interest in the presented topics, and asked the speakers a lot of questions. I presented two sessions at the event.

My first session was about one of my favorite topics: architectural patterns. I based it on a similar session I gave almost a year and a half ago at Bleeding Edge 2012, taking the opportunity to update the content and demos to the current state of the Windows Store app development ecosystem. As always, I uploaded the slides to SlideShare and made the demos available on BitBucket.

In the second session I covered all aspects of using live tiles in Windows Store apps. I spent most of the time speaking about push notifications and taking advantage of Windows Azure Mobile Services. Again you can download the slides from SlideShare and the demos from BitBucket.

I’m looking forward to many new and interesting Windows Store apps created for the contest.
