
Dynamic Binding of Static and Instance Methods


Take a close look at the following piece of code:

String String = "";

var strTest = "TEST";
Console.Write(String.Format(strTest));

dynamic dynTest = "TEST";
Console.Write(String.Format(dynTest));

Think about it for a minute and try answering the following questions:

  • Is it going to compile without errors?
  • Is it going to run without errors?
  • What will the output be?

Feel free to open Visual Studio and compile the code, but do try answering the questions first.

Here are the correct answers for those of you who believe me without testing it yourselves: the code will compile, but it will throw a runtime exception at the last line: Member 'string.Format(string, params object[])' cannot be accessed with an instance reference; qualify it with a type name instead.

Why does the error happen? Let’s take a look at how .NET Reflector decompiles the above code:

string str = "";
string format = "TEST";
Console.Write(string.Format(format, new object[0]));
object obj2 = "TEST";
Console.Write(str.Format((dynamic) obj2));

That makes it much clearer, doesn’t it?

  • In the first case Format is called on the string type as expected.
  • In the second case Format is called on the local string instance. Since there is no instance method named Format on the string type, and since C# doesn't allow calling static methods through an instance reference, this results in a runtime error.

Obviously, the compiler doesn't bind that second call the way one would naively expect. Even worse, Visual Studio interprets the code differently than the compiler: in the IDE, String in the second call is shown as the type, not the variable.

Of course, the behavior can be explained, no matter how unintuitive it is. The C# language specification comes to the rescue. Let's start with a quote from section 7.2, Static and Dynamic Binding:

However, if an expression is a dynamic expression (i.e. has the type dynamic) this indicates that any binding that it participates in should be based on its run-time type (i.e. the actual type of the object it denotes at run-time) rather than the type it has at compile-time. The binding of such an operation is therefore deferred until the time where the operation is to be executed during the running of the program. This is referred to as dynamic binding.

In our example dynTest is a dynamic expression, which makes the String.Format binding dynamic as well. This explains why the exception is thrown at runtime.

Still, why does it work correctly in the first case? While I’m pretty sure the answer to this question can be found in the language specification as well, it is much better described in Eric Lippert’s blog post:

So the reason that the compiler does not remove static methods when calling through an instance is because the compiler does not necessarily know that you are calling through an instance. Because there are situations where it is ambiguous whether you’re calling through an instance or a type, we defer deciding which you meant until the best method has been selected.

That’s the case for the first call in our example: String is ambiguous, since it could be either a local variable or a type, and the compiler correctly selected the static method as the best candidate function member. In the second case the binding was postponed until runtime, so no best candidate function member was selected at compile time. The compiler therefore decided that String is a local variable name, causing the runtime binding to fail because the best candidate function member is static and can’t be called with the instance reference of the local variable. Mystery solved.
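
If you ever hit this in your own code, the simplest way out is to remove the ambiguity yourself. A quick sketch of a few possible fixes for the original snippet:

String str = "";                               // renamed, no longer shadows the type

dynamic dynTest = "TEST";
Console.Write(String.Format(dynTest));         // String can now only mean the type

// Alternatively, qualify the type so it can't be mistaken for a variable:
Console.Write(System.String.Format(dynTest));
// Or use the C# keyword, which always refers to the type:
Console.Write(string.Format(dynTest));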

If you haven’t done so already, do read the above mentioned blog post by Eric Lippert; it is well worth your time and will give you even better insight into binding.


Refreshing Instance Store Handle in Workflow Foundation


Certain aspects of Workflow Foundation are still poorly documented; the persistence framework being one of them. The following snippet is typically used for setting up the instance store:

var instanceStore = new SqlWorkflowInstanceStore(connectionString);
instanceStore.HostLockRenewalPeriod = TimeSpan.FromSeconds(30);
var instanceHandle = instanceStore.CreateInstanceHandle();
var view = instanceStore.Execute(instanceHandle, new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(10));
instanceStore.DefaultInstanceOwner = view.InstanceOwner;

It’s difficult to find a detailed explanation of what all of this does; and to be honest, usually it’s not necessary. At least not until you start encountering problems, such as InstanceOwnerException: The execution of an InstancePersistenceCommand was interrupted because the instance owner registration for owner ID '9938cd6d-a9cb-49ad-a492-7c087dcc93af' has become invalid. This error indicates that the in-memory copy of all instances locked by this owner have become stale and should be discarded, along with the InstanceHandles. Typically, this error is best handled by restarting the host.

The error is closely related to the HostLockRenewalPeriod property, which defines how long an obtained instance handle is valid without being renewed. If you monitor the database while an instance store with a valid instance handle is instantiated, you will notice [System.Activities.DurableInstancing].[ExtendLock] being called periodically. This stored procedure is responsible for renewing the handle. If for some reason it fails to be called within the specified HostLockRenewalPeriod, the above mentioned exception will be thrown when attempting to persist a workflow. A typical reason for this would be a temporarily inaccessible database due to maintenance or networking problems. It’s not something that happens often, but it’s bound to happen if you have a long-living instance store, e.g. in a constantly running workflow host, such as a Windows service.

Fortunately, it’s not all that difficult to fix the problem once you know its cause. Before using the instance store, always check whether the handle is still valid, and renew it if it’s not:

if (!instanceHandle.IsValid)
{
    instanceHandle = instanceStore.CreateInstanceHandle();
    var view = instanceStore.Execute(instanceHandle, new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(10));
    instanceStore.DefaultInstanceOwner = view.InstanceOwner;
}

It’s definitely less invasive than the restart of the host, suggested by the error message.
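
Since any code using the instance store can be affected, it makes sense to centralize this check instead of copying it around. A minimal sketch of such a helper (the member names are mine, not part of Workflow Foundation):

private SqlWorkflowInstanceStore _instanceStore;
private InstanceHandle _instanceHandle;

private InstanceHandle GetValidInstanceHandle()
{
    // Re-create the handle and re-register the owner only after the old
    // handle has become invalid, e.g. due to a temporary database outage.
    if (!_instanceHandle.IsValid)
    {
        _instanceHandle = _instanceStore.CreateInstanceHandle();
        var view = _instanceStore.Execute(_instanceHandle,
            new CreateWorkflowOwnerCommand(), TimeSpan.FromSeconds(10));
        _instanceStore.DefaultInstanceOwner = view.InstanceOwner;
    }
    return _instanceHandle;
}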

Handling PropertyChanged Event in MvvmCross View Model Unit Tests


A great side effect of view models being implemented in portable class libraries when using MvvmCross is the ability to unit test them on any platform, not necessarily the one you are targeting. So even when developing a mobile application, you can test it on the full .NET Framework and take advantage of all the tooling available there. In my opinion, the NCrunch test runner and the Moq mocking framework are great examples of tools that are not available for mobile platforms but can make testing much more pleasant.

View model unit tests often depend on PropertyChanged events or have to test them. A typical simple view model unit test, checking whether data gets loaded correctly on Start, could look like this:

[TestMethod]
public void LoadsPlayersOnStartAndNotifies()
{
    var expected = new[]
    {
        new Player(),
        new Player(),
        new Player()
    };

    var repositoryServiceMock = new Mock<IRepositoryService>();
    repositoryServiceMock.Setup(mock => mock.GetPlayers()).Returns(expected);
    var viewModel = new PlayersEditorViewModel(repositoryServiceMock.Object);

    var handle = new AutoResetEvent(false);
    viewModel.PropertyChanged += (sender, args) =>
    {
        if (args.PropertyName == "Players")
        {
            handle.Set();
        }
    };
    viewModel.Start();
    Assert.IsTrue(handle.WaitOne(TimeSpan.FromMilliseconds(50)));

    CollectionAssert.AreEqual(expected, viewModel.Players);
}

Contrary to what you would expect, the above test fails because the PropertyChanged event is never raised, even though the view model being tested works correctly.

A look at the MvxNotifyPropertyChanged code makes the behavior easy to explain: RaisePropertyChanged by default marshals PropertyChanged events to the UI thread (see ShouldAlwaysRaiseInpcOnUserInterfaceThread):

public virtual void RaisePropertyChanged(PropertyChangedEventArgs changedArgs)
{
    // check for interception before broadcasting change
    if (InterceptRaisePropertyChanged(changedArgs)
        == MvxInpcInterceptionResult.DoNotRaisePropertyChanged)
        return;

    var raiseAction = new Action(() =>
            {
                var handler = PropertyChanged;

                if (handler != null)
                    handler(this, changedArgs);
            });

    if (ShouldAlwaysRaiseInpcOnUserInterfaceThread())
    {
        // check for subscription before potentially causing a cross-threaded call
        if (PropertyChanged == null)
            return;

        InvokeOnMainThread(raiseAction);
    }
    else
    {
        raiseAction();
    }
}

In unit tests there’s no Dispatcher set, causing InvokeOnMainThread not to raise the event at all:

protected void InvokeOnMainThread(Action action)
{
    if (Dispatcher != null)
        Dispatcher.RequestMainThreadAction(action);
}

Fortunately, the problem is really easy to fix with all the information at hand. There’s no need to marshal the events to the UI thread in unit tests, therefore the corresponding view model setting can be disabled (notice the call to ShouldAlwaysRaiseInpcOnUserInterfaceThread(false)):

[TestMethod]
public void LoadsPlayersOnStartAndNotifies()
{
    var expected = new[]
    {
        new Player(),
        new Player(),
        new Player()
    };

    var repositoryServiceMock = new Mock<IRepositoryService>();
    repositoryServiceMock.Setup(mock => mock.GetPlayers()).Returns(expected);
    var viewModel = new PlayersEditorViewModel(repositoryServiceMock.Object);
    viewModel.ShouldAlwaysRaiseInpcOnUserInterfaceThread(false);

    var handle = new AutoResetEvent(false);
    viewModel.PropertyChanged += (sender, args) =>
    {
        if (args.PropertyName == "Players")
        {
            handle.Set();
        }
    };
    viewModel.Start();
    Assert.IsTrue(handle.WaitOne(TimeSpan.FromMilliseconds(50)));

    CollectionAssert.AreEqual(expected, viewModel.Players);
}

With this minor modification the unit test succeeds as expected.

There’s still quite a lot of code involved in setting up the handling of PropertyChanged event in the unit test. Since a similar pattern is often required in tests, it can be wrapped in an extension method to make the tests more straightforward and less repetitive:

public static Task WaitPropertyChangedAsync(this MvxViewModel viewModel, string propertyName)
{
    viewModel.ShouldAlwaysRaiseInpcOnUserInterfaceThread(false);

    var task = new Task(() => { });
    viewModel.PropertyChanged += (sender, args) =>
    {
        if (args.PropertyName == propertyName)
        {
            task.Start();
        }
    };
    return task;
}
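
As a side note, the same extension method could also be written with a TaskCompletionSource instead of a cold Task; a sketch of that arguably more idiomatic variant:

public static Task WaitPropertyChangedAsync(this MvxViewModel viewModel, string propertyName)
{
    viewModel.ShouldAlwaysRaiseInpcOnUserInterfaceThread(false);

    var taskSource = new TaskCompletionSource<object>();
    viewModel.PropertyChanged += (sender, args) =>
    {
        if (args.PropertyName == propertyName)
        {
            // TrySetResult guards against the event being raised more than once.
            taskSource.TrySetResult(null);
        }
    };
    return taskSource.Task;
}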

Now the unit test can be much simpler and will only contain the code actually relevant to it:

[TestMethod]
public void LoadsPlayersOnStartAndNotifies()
{
    var expected = new[]
    {
        new Player(),
        new Player(),
        new Player()
    };

    var repositoryServiceMock = new Mock<IRepositoryService>();
    repositoryServiceMock.Setup(mock => mock.GetPlayers()).Returns(expected);
    var viewModel = new PlayersEditorViewModel(repositoryServiceMock.Object);
    var propertyChangedTask = viewModel.WaitPropertyChangedAsync("Players");

    viewModel.Start();
    Assert.IsTrue(propertyChangedTask.Wait(50));

    CollectionAssert.AreEqual(expected, viewModel.Players);
}

Ready to Use Dictionary for Objects of Different Types


Recently a colleague of mine mentioned that he had just learned about KeyedByTypeCollection, even though it has been included in the .NET Framework since version 3.0. I’m writing this post because I didn’t know about it until that day, either. This leads me to believe that there must be many more developers out there who aren’t aware of this niche class being readily available.

Unlike other generic collections, it provides a simple way of storing objects of multiple different types in a single collection while still being able to retrieve them in a strongly typed manner. Admittedly, it’s not a common requirement, but there are scenarios in which such functionality is extremely useful, and having a class for it in the .NET Framework is really nice. Let me present two such scenarios that have come up in my daily work lately.
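
Before getting to them, here’s a minimal sketch of the basic API, just to give a feel for the class (it lives in the System.Collections.Generic namespace; an item’s type acts as its key):

var collection = new KeyedByTypeCollection<object>
{
    "some string",
    42
};

// Find returns the single item of the requested type
// (or the type's default value if no such item exists).
string text = collection.Find<string>();
int number = collection.Find<int>();

// Adding a second item of an already present type throws an
// ArgumentException, because the type is the key.
// collection.Add("another string");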

Designing a data access layer usually also means creating a data context or repository service class, which provides access to repositories of different entity types. This could be a typical repository service interface:

public interface IRepositoryService
{
    IRepository<Player> Players { get; }
    IRepository<Game> Games { get; }
    IRepository<Session> Sessions { get; }
    IRepository<Round> Rounds { get; }
}

Instead of having a separate private field for each entity repository, they can all be stored in a KeyedByTypeCollection and retrieved from it in the individual properties:

private KeyedByTypeCollection<object> _repositories;

public IRepository<Player> Players
{
    get
    {
        return _repositories.Find<IRepository<Player>>();
    }
}

This makes it really easy to implement a generic method for accessing repositories based on their entity type, alongside the strongly typed properties or even instead of them:

public IRepository<T> GetRepository<T>()
{
    return _repositories.Find<IRepository<T>>();
}
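
Usage then stays strongly typed without needing a property per entity; a quick illustration (assuming a repositoryService instance of the class above):

var players = repositoryService.GetRepository<Player>();
var rounds = repositoryService.GetRepository<Round>();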

In Workflow Foundation, extensions can be used to provide additional functionality when hosting workflows, either to activities or during the persistence process. The IWorkflowInstanceExtension interface allows individual extensions to return additional extensions they require to function correctly.

The handling of these extensions can again be internally implemented very elegantly using a KeyedByTypeCollection:

private KeyedByTypeCollection<object> _extensions = new KeyedByTypeCollection<object>
{
    new PersistenceExtension(),
    new ActivityExtension()
};

public IEnumerable<object> GetAdditionalExtensions()
{
    return _extensions;
}

public void UseIndividualExtensions()
{
    var persistenceExtension = _extensions.Find<PersistenceExtension>();
    var activityExtension = _extensions.Find<ActivityExtension>();
}

Both of the above examples could of course just as well be implemented without KeyedByTypeCollection, but it wouldn’t be as elegant and it would require more code. As I often say: know the tools you are using - in this case, the .NET Framework base class library.

What's New in Windows 8.1 Update Session at spring NT Conference


On Wednesday I had my only session at the spring NT conference in Bled this year. I was speaking about the new stuff for developers in Windows 8.1 Update. After a short mention of the universal project, I focused on the changes available only to sideloaded enterprise applications.

Given the small amount of time available for preparing the session, I didn’t prepare any examples of my own and used the publicly available ones from Microsoft instead. As always, you can download my slides from SlideShare.

Apart from the links at the end of the slide deck, keep an eye on Harry Pierson’s blog. He had a session on this subject at Build and his blog is currently probably the best source of information about it.

Sqlite Only Executes the First Statement in a Command


Let’s take a look at the following code snippet (for those of you who don’t recognize the API, it’s the SQLite MvvmCross plugin, a slightly modified fork of sqlite-net):

using (var connection = new MvxWpfSqLiteConnectionFactory().Create(_filename))
{
  connection.Execute(
    "CREATE TABLE [MyTable] ([Id] INTEGER NOT NULL PRIMARY KEY);" + 
    "INSERT INTO [MyTable] ([Id]) VALUES (1);");

  var count = connection.ExecuteScalar<int>(
    "SELECT COUNT(*) FROM [MyTable];");

  Console.Write(count);
}

What do you think the result is? 1? Well, actually it’s 0. And no exception is thrown, in case you were wondering. You can try it out yourself if you don’t believe me.

This got me curious, so I took a closer look at the sqlite-net code. This is the SQLiteConnection.Execute implementation:

public int Execute(string query, params object[] args)
{
  var cmd = CreateCommand(query, args);

  if (TimeExecution)
  {
    if (_sw == null)
    {
      _sw = new Stopwatch();
    }
    _sw.Reset();
    _sw.Start();
  }

  var r = cmd.ExecuteNonQuery();

  if (TimeExecution)
  {
    _sw.Stop();
    _elapsedMilliseconds += _sw.ElapsedMilliseconds;
    Debug.WriteLine(string.Format("...");
  }

  return r;
}

The interesting stuff is in SQLiteCommand.ExecuteNonQuery:

public int ExecuteNonQuery()
{
  if (_conn.Trace)
  {
    Debug.WriteLine("Executing: " + this);
  }

  var r = SQLite3.Result.OK;
  var stmt = Prepare();
  r = SQLite3.Step(stmt);
  Finalize(stmt);
  if (r == SQLite3.Result.Done)
  {
    int rowsAffected = SQLite3.Changes(_conn.Handle);
    return rowsAffected;
  }
  else if (r == SQLite3.Result.Error)
  {
    string msg = SQLite3.GetErrmsg(_conn.Handle);
    throw SQLiteException.New(r, msg);
  }
  else
  {
    throw SQLiteException.New(r, r.ToString());
  }
}

And also in SQLiteCommand.Prepare:

Sqlite3Statement Prepare()
{
  var stmt = SQLite3.Prepare2(_conn.Handle, CommandText);
  BindAll(stmt);
  return stmt;
}

All of the code correctly checks the result codes and throws exceptions accordingly.

Although I haven’t posted all the relevant code here, I did review it, and the real origin of this behavior is elsewhere - in the native sqlite3.dll sqlite3_prepare_v2 function. Here’s the relevant part of its documentation: “These routines only compile the first statement in zSql, so *pzTail is left pointing to what remains uncompiled.”

Since sqlite-net doesn’t do anything with the uncompiled tail, only the first statement in the command is actually executed. The remainder is silently ignored. In most cases you won’t notice that when using sqlite-net: you will either use its micro ORM layer or execute individual statements. The only common exception that comes to mind is trying to execute DDL or migration scripts, which are typically multi-statement batches.

To handle such cases, you will need to split the batches into single statements. Here’s a quick sample of the idea:

var statements = script.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
foreach (var statement in statements)
{
  connection.Execute(statement);
}

Don’t use this code in production, though. It will only work as long as you don’t have any semicolons inside your statements. If the batches are completely under your control, you might be able to ensure that. Otherwise keep in mind that creating a bulletproof tokenizer for splitting batches into statements is not trivial, and you’re probably better off fixing sqlite-net to correctly execute all statements in the command. And while you’re at it, create a pull request so that others can benefit from your fix as well.
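
If you nevertheless stick with splitting for batches fully under your control, it’s worth wrapping the individual statements in a transaction so the batch still executes atomically. A sketch of the idea, using sqlite-net’s RunInTransaction:

var statements = script.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
connection.RunInTransaction(() =>
{
  // If any statement fails, the whole batch is rolled back.
  foreach (var statement in statements)
  {
    connection.Execute(statement);
  }
});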

Book Review: Xamarin Mobile Application Development for Android


I’ve been planning to take a closer look at Xamarin products for quite some time now; I just needed something to actually get me started. When I got the offer to review Xamarin Mobile Application Development for Android, written by Mark Reynolds and published by Packt Publishing, I didn’t think twice. It was a great opportunity to actually try out Xamarin.Android.

I really liked how the book started out with an introduction to the Android platform and some technical insight into the architecture of Xamarin.Android and its integration with the platform. It gave me a nice foundation to build upon during the remainder of the book. The author decided to organize the chapters around a single application, building it from start to finish, while gradually incorporating new features. I found this approach engaging and easy to follow by building that same application while reading.

For a book targeting existing C# developers, it started a bit too slowly in my opinion, spending too much time on the basics of the IDE. Even though Xamarin Studio is being used, it is similar enough to Visual Studio and shouldn’t require much attention. Once that was out of the way, the chapters nicely focused on individual Android specifics, such as building the UI, handling navigation and working with sensors. These subjects are what .NET developers really need to transition to the new platform, and the book does a good job at it. The book concludes with some basics about application deployment; again very useful for those not already familiar with the platform.

There were a couple of topics I missed in the book; in particular, more information about testing applications on actual devices and some guidance on how best to take advantage of existing resources for Java Android development when working in Xamarin. It would also be very useful to include some recommended development practices for reusing code between platforms and handling Android specifics.

Nevertheless the book is a great first step into the world of Xamarin.Android for a seasoned .NET C# developer with no previous development experience on Android. It’s definitely enough to get you started and makes it much easier to decide whether this is the right way to build Android applications or not. It certainly convinced me to use Xamarin.Android for my first Android application.

The book is available on Amazon and sold directly by the publisher.

Strong Name Validation Failed Security Exception


A couple of days ago I encountered a FileLoadException with the following message: "Could not load file or assembly 'Microsoft.Web.XmlTransform, Version=2.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. Strong name validation failed." Being used to seeing similar exceptions when assemblies are missing or have mismatched versions, it took me a while to finally get to the bottom of it. Even more so because it was happening inside a WiX custom action, making it difficult to check which assemblies were actually included.

The key to the solution was carefully reading the complete message. The last sentence made it clear that this wasn’t a "common" case of a missing assembly. There was even an inner SecurityException with the same message: "Strong name validation failed." The next logical step was verifying the assembly’s strong name using the Strong Name Tool:

PS> sn.exe -v .\Microsoft.Web.XmlTransform.dll 

Microsoft (R) .NET Framework Strong Name Utility Version 4.0.30319.33440 
Copyright (c) Microsoft Corporation. All rights reserved. 

.\Microsoft.Web.XmlTransform.dll is a delay-signed or test-signed assembly 

This made everything much clearer. Since we’re talking about a third-party library, I couldn’t fix it myself. This was the latest version of the library at the time, so I had to revert to version 1.0.0, and the problem was solved. The next step was contacting the Microsoft ASP.NET team who published the package. They responded in less than an hour, and by the time of writing this post they had already published a new version and unlisted the defective one.
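
As a side note, if you ever need to unblock local development while waiting for a fixed package, the same tool can register an assembly for verification skipping (a development-machine-only workaround, never something to deploy):

PS> sn.exe -Vr .\Microsoft.Web.XmlTransform.dll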


Book Review: Telerik WPF Controls Tutorial


When I got a chance to review Daniel R. Spalding’s book Telerik WPF Controls Tutorial, published by Packt Publishing, I had my doubts about a book focusing solely on controls by a single vendor. As quite a proponent of using third-party controls whenever it makes sense, I finally got curious and decided to go for it and get to know Telerik’s WPF control suite better in the process.

The book started out with a useful discussion in favor of using third-party controls instead of customizing Microsoft’s default ones yourself. Unfortunately this wasn’t followed up with the information I expected. I missed a lot of useful details that would help me decide whether to use these controls or not: their advantages in comparison to standard controls, which properties are bindable and which are not, different options for getting the entered data back from the controls, etc. I would much prefer more complex real-world control usage scenarios instead of the really basic samples.

Although the book is already really short, it still contains too much content hardly related to Telerik controls. Instead of diving into their built-in validation support, the author focused on his own implementation. A lot of attention was given to loading the data for the controls from various sources. I would prefer the sample data to just be there, with its structure explained in more detail, instead of having to check the accompanying code for that. This content also felt very repetitive, appearing in each chapter with only minor differences. The whole section about authentication seemed completely out of place to me as well.

In my opinion this book is a big missed opportunity. What could have been a showcase of designing great UI featuring Telerik’s controls with recommended best practices and usage patterns turned out to be only a shallow overview of a small subset of the available controls, interspersed with random opinionated half-accurate information. I can hardly recommend the book to anyone, except maybe to beginners in WPF with no previous exposure to any third-party controls, and even they could be better served elsewhere.

The book is available on Amazon and sold directly by the publisher.

Testing View Model IoC Resolution in MvvmCross


To be honest, most of the time there’s no need to use an IoC container in unit tests. You actually want to manually provide specific implementations or even mocks of dependencies to class constructors in individual tests. Often that’s even one of the reasons for using dependency injection and IoC frameworks in the first place: being able to provide a testing implementation different from the production one.

Still, testing the IoC container configuration makes sense in an application, even though that’s more of an integration test than a unit test. When using IoC containers, we must be aware that runtime errors will occur when not all required dependencies for the created object have been registered in the application beforehand. Since dependencies can change over time, having tests to make sure all the types can be instantiated will give us additional assurance that our application works as expected. Of course, this doesn’t necessarily mean the right implementations are being used for all dependencies, but that’s not what we want to test.

Using MvvmCross’s built-in IoC in unit tests is not all that simple, though. There’s a helper base class available to make it easier, but unfortunately only for .NET 4.5. In our case this won’t be enough, because we want to test dependency registration in each platform-specific application, and we can’t do that by just testing a portable class library from within a .NET 4.5 test project. I even tried using the code from the base class in a portable or platform-specific base class, but it also turned out to be the wrong approach. The test would only use the registrations from the portable Application class, but there are other locations where dependency registration is performed, such as plugins and the platform-specific Setup class.

This forced me to test the actual platform-specific code by referencing it in the test project and calling its initialization from the test. This was my first attempt (the sample is Windows Store specific, because this is the platform I am targeting with my current project):

[UITestMethod]
public void CanResolveMyViewModel()
{
  MvxSingleton.ClearAllSingletons();
  var ioc = MvxSimpleIoCContainer.Initialize();
  ioc.RegisterSingleton(ioc);
  ioc.RegisterSingleton<IMvxSettings>(new MvxSettings());

  var setup = new Setup(new Frame());
  setup.Initialize();

  var viewModel = Mvx.IocConstruct<MyViewModel>();
  Assert.IsNotNull(viewModel);
}

Notice the call to the Setup constructor. It requires an instance of a Frame, which is a UI object and must be created and used on the UI thread; and tests by default don’t run on it. Providing null instead causes an exception. That’s the reason I had to use the UITestMethod attribute instead of TestMethod. This works with most test runners, but unfortunately my favorite one, NCrunch, doesn’t recognize such methods as tests. I was able to work around this by using a different technique to execute the problematic code on the UI thread:

[ClassInitialize]
public static async Task Init(TestContext context)
{
  var taskSource = new TaskCompletionSource<object>();
  await CoreApplication.MainView.CoreWindow.Dispatcher.RunAsync(
    CoreDispatcherPriority.Normal, () =>
    {
      try
      {
        // Frame must be instantiated and used on UI thread
        var setup = new Setup(new Frame());
        setup.Initialize();

        taskSource.SetResult(null);
      }
      catch (Exception e)
      {
        taskSource.SetException(e);
      }
    });
  await taskSource.Task;
}

[TestMethod]
public void CanResolveMyViewModel()
{
  var viewModel = Mvx.IocConstruct<MyViewModel>();
  Assert.IsNotNull(viewModel);
}

As you can see, I even managed to wrap all my initialization code into a separate method, called by the MSTest framework only once, before all the other tests in the class. This way no initialization code needs to be put in the test methods, making them easy to write and understand.

Ensuring Unique Property Value Using Fluent Validation


Fluent Validation is a small portable validation library with a fluent interface. It makes it really easy to create validators for a specific type of object:

public class PlayerValidator : AbstractValidator<Player>
{
  public PlayerValidator()
  {
    RuleFor(player => player.Name).NotEmpty().WithMessage("Name must not be empty.");
    RuleFor(player => player.Name).Length(0, 100).WithMessage("Name must not exceed 100 characters.");
  }
}

The fluent API makes validators easy to understand. It’s really simple to use them for validating any instance of the supported object:

var validator = new PlayerValidator();
var result = validator.Validate(player);
if (!result.IsValid)
{
  // iterate through result.Errors collection
}

You can find more information about basic usage and more advanced scenarios in the very detailed documentation. I’ll focus on extending the library with your own custom validation instead.

We will create a rule validating that a specific property of an object is unique in a collection of such objects. Such validation can be very useful when editing a list of distinct objects, such as players in our case. Let’s start with a basic validation algorithm:

public bool IsNameUnique(IEnumerable<Player> players, Player editedPlayer, string newValue)
{
  return players.All(player => player.Equals(editedPlayer) || player.Name != newValue);
}

We need to check every item in the collection and make sure it doesn’t have the same property value as the new property value of the edited object. Of course, we skip the check for the object currently being edited. It’s pretty easy to include this specific validation in our validator:

public class PlayerValidator : AbstractValidator<Player>
{
  private readonly IEnumerable<Player> _players;

  public PlayerValidator(IEnumerable<Player> players)
  {
    _players = players;
    RuleFor(player => player.Name).NotEmpty().WithMessage("Name must not be empty.");
    RuleFor(player => player.Name).Length(0, 100).WithMessage("Name must not exceed 100 characters.");
    RuleFor(player => player.Name).Must(IsNameUnique).WithMessage("Name must be unique");
  }

  public bool IsNameUnique(Player editedPlayer, string newValue)
  {
    return _players.All(player => player.Equals(editedPlayer) || player.Name != newValue);
  }
}

In such cases we can use the built-in predicate validator Must and supply it with the predicate. Ours is almost identical to the original validation method above, except that we keep the collection in a class-level field, initialized in the validator constructor, instead of passing it to the predicate every time.

Although this works just fine for isolated cases, you might want to create a more generic solution that can be used for validating any object, thus avoiding the need to reimplement a similar method in each validator. This is where things get a bit more complicated, although the process is still well documented.

It turns out we need to create a custom class derived from PropertyValidator. There’s only a single method to implement: IsValid. It automatically receives information about the object being validated, the property being validated and its new value. The rest of the required information - in our case the collection of items - needs to be provided in a different way. Again, we will inject it into the validator through the constructor. This is what a working property validator could look like:

public class UniqueValidator<T> : PropertyValidator
  where T: class 
{
  private readonly IEnumerable<T> _items;

  public UniqueValidator(IEnumerable<T> items)
    : base("{PropertyName} must be unique")
  {
    _items = items;
  }

  protected override bool IsValid(PropertyValidatorContext context)
  {
    var editedItem = context.Instance as T;
    var newValue = context.PropertyValue as string;
    var property = typeof(T).GetTypeInfo().GetDeclaredProperty(context.PropertyName);
    return _items.All(item => 
      item.Equals(editedItem) || property.GetValue(item).ToString() != newValue);
  }
}

The only difference worth pointing out is the use of reflection in our IsValid method. It is required to get the value of the property from the other objects of the same type, since we only have the property’s name.

Now that we have a custom PropertyValidator, we can rewrite the PlayerValidator without including the actual validation logic for ensuring uniqueness:

public class PlayerValidator : AbstractValidator<Player>
{
  public PlayerValidator(IEnumerable<Player> players)
  {
    RuleFor(player => player.Name).NotEmpty().WithMessage("Name must not be empty.");
    RuleFor(player => player.Name).Length(0, 100).WithMessage("Name must not exceed 100 characters.");
    RuleFor(player => player.Name).SetValidator(new UniqueValidator<Player>(players))
                                  .WithMessage("Name must be unique");
  }
}

Still, the usage of UniqueValidator is somewhat more complex than the built-in validators. We can fix that as well, by writing a corresponding extension method, enabling fluent usage:

public static IRuleBuilderOptions<TItem, TProperty> IsUnique<TItem, TProperty>(
  this IRuleBuilder<TItem, TProperty> ruleBuilder, IEnumerable<TItem> items)
    where TItem : class
{
  return ruleBuilder.SetValidator(new UniqueValidator<TItem>(items));
}

Although the method body is pretty straightforward, the method signature could certainly use some additional explanation. The most important part to understand is the generic type parameters: the first one is the type of the object being validated, the second one the type of the property. The IRuleBuilderOptions and IRuleBuilder interfaces are the key to the fluent chaining of calls, while the items collection is our own additional parameter required by UniqueValidator.

Unique validation in our PlayerValidator now really looks nice and fluent. You could easily think the IsUnique validator was built-in:

public class PlayerValidator : AbstractValidator<Player>
{
  public PlayerValidator(IEnumerable<Player> players)
  {
    RuleFor(player => player.Name).NotEmpty().WithMessage("Name must not be empty.");
    RuleFor(player => player.Name).Length(0, 100).WithMessage("Name must not exceed 100 characters.");
    RuleFor(player => player.Name).IsUnique(players).WithMessage("Name must be unique");
  }
}

Fluent Validation might seem a little scary to get into at first, but there’s really very little to learn if built-in validators are all you need. Even writing custom validators is not as difficult as it might seem, and it’s definitely worth doing if you need additional validations and want to reuse them across different objects or even projects. In any case, take a closer look at this library before trying to come up with your own validation framework.

Deferred Evaluation of Collections Passed as Parameters


Most of us know that having a parameter of type List is not recommended and that IList or even IEnumerable should be used instead when a parameter is only going to be used for iterating through the list of items. We usually only take advantage of this by passing in different types of collections, such as arrays, lists, etc., although IEnumerable can prove much more useful in certain scenarios.

Let’s take a look at one such scenario - a unique validator, ensuring that a certain property remains unique in the collection. Its interface could look like this:

public interface IUniqueValidator<T>
{
  IEnumerable<T> Items { get; set; }
  bool IsValid(T item);
}

The actual implementation is not really important. What matters is that the validator has a list of all items available, against which the validated item’s unique property can be compared. It doesn’t really matter what kind of collection we are using outside the validator (in our view model, for example); as long as it implements IEnumerable, we can pass it to the validator.

Still, this only works if the collection outside the validator contains items of the same type as required by the validator. This might seem obvious, but often it isn’t the case. Even in the scenario described above, it can easily happen that the view model contains an instance of ObservableCollection<Wrapper<T>>, e.g. to provide additional properties for commands and UI-specific flags.

Such a collection can’t be directly passed to the validator; at least not without changing the validator to be aware of the wrapper and be able to obtain the original type from it. Of course, one could always define casting operators for this conversion, but IEnumerable allows a much more elegant solution to this problem.

While all collection types implement IEnumerable, collections are not the only way to implement it. A very good example are the sequences returned by most of LINQ’s extension methods; without knowing it, you rely on them every time you use LINQ. Their most important feature is deferred execution: the filtering and projections are performed anew every time the sequence is enumerated.

In our scenario this allows us to pass "live" projections of our wrapped items to the validator. Having the following definitions:

public class UniqueValidator<T> : IUniqueValidator<T>
{
  public IEnumerable<T> Items { get; set; }
  public bool IsValid(T item)
  {
    // perform unique validation against Items
  }
}

public class Wrapper<T>
{
  public T Item { get; set; }
}

public class ItemType
{ }

public ObservableCollection<Wrapper<ItemType>> ItemCollection { get; set; }

One can easily write the following code:

void Main()
{
  ItemCollection = new ObservableCollection<Wrapper<ItemType>>();

  var validator = new UniqueValidator<ItemType>
  {
    // notice the lack of ToList() or ToArray() at the end
    Items = ItemCollection.Select(wrapper => wrapper.Item)
  };
}

This doesn’t create a static collection with the projected contents of ItemCollection at the time this code executes. Items will always contain exactly the items from ItemCollection, no matter how much we change it, because the Select() projection is only performed when the Items property is actually enumerated.
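
A quick demonstration of this behavior, using the definitions above (the Count() calls merely force an enumeration):

var itemCollection = new ObservableCollection<Wrapper<ItemType>>();
IEnumerable<ItemType> items = itemCollection.Select(wrapper => wrapper.Item);

Console.WriteLine(items.Count());   // 0

// The projection is re-evaluated on each enumeration,
// so later changes to the source collection are visible.
itemCollection.Add(new Wrapper<ItemType> { Item = new ItemType() });
Console.WriteLine(items.Count());   // 1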

We can take this even a step further. LINQ does not implement filtering and projections only on local collections (the so-called LINQ to Objects); through IQueryable it can also work against databases (LINQ to SQL, Entity Framework and other ORM implementations using LINQ syntax). In that case the act of enumeration can actually query the database and read the results off the connection. Of course, one needs to be aware of this when using such a validator, but it can offer a lot of additional flexibility.

Unit Testing Navigation in MvvmCross

$
0
0

The best resource on testing navigation in MvvmCross view models that I managed to find was Slodge’s blog post from almost two years ago. While it still contains useful guidance today, there have been changes in the framework that prevent direct usage of the included sample code. After I got it to work in my own project, I decided to publish a more up-to-date set of instructions here.

Apart from the already mentioned blog post, the best starting point is the Testing article in the MvvmCross Wiki on GitHub. Although it doesn’t explicitly focus on testing navigation, it still provides all the plumbing code required to make it work. The core of it is the MockDispatcher implementation of the IMvxViewDispatcher and IMvxMainThreadDispatcher interfaces:

public class MockDispatcher : MvxMainThreadDispatcher, IMvxViewDispatcher
{
    public readonly List<MvxViewModelRequest> Requests = new List<MvxViewModelRequest>();
    public readonly List<MvxPresentationHint> Hints = new List<MvxPresentationHint>();

    public bool RequestMainThreadAction(Action action)
    {
        action();
        return true;
    }

    public bool ShowViewModel(MvxViewModelRequest request)
    {
        Requests.Add(request);
        return true;
    }

    public bool ChangePresentation(MvxPresentationHint hint)
    {
        Hints.Add(hint);
        return true;
    }
}

To use it in place of the default implementation, it must be registered before the tests are run. I suggest you include this initialization in a unit test base class, which you can derive from the MvxIoCSupportingTest class (distributed with the MvvmCross.HotTuna.Tests NuGet package). This way you don’t have to worry about it in each test class: you can just derive all your view model test classes from this one and everything will already be initialized. Here’s my implementation of it (using the MSTest unit testing framework):

public class ViewModelTestsBase : MvxIoCSupportingTest
{
    protected MockDispatcher MockDispatcher;
    protected override void AdditionalSetup()
    {
        base.AdditionalSetup();
        MockDispatcher = new MockDispatcher();
        Ioc.RegisterSingleton<IMvxViewDispatcher>(MockDispatcher);
        Ioc.RegisterSingleton<IMvxMainThreadDispatcher>(MockDispatcher);
        // required only when passing parameters
        Ioc.RegisterSingleton<IMvxStringToTypeParser>(new MvxStringToTypeParser());
    }
    [TestInitialize]
    public void TestInit()
    {
        Setup();
    }
}

Notice how I exposed the MockDispatcher instance as a protected field to make it available in derived test classes. This makes it possible for the tests to check the navigation calls they have made. It is also important that this setup is done before each test, to clear the records of any calls triggered by other tests.

If you’re not familiar with navigation in MvvmCross, you can learn about it from a great article in MvvmCross Wiki. In this post I’m only going to focus on testing three typical navigation scenarios.

For simple navigation without any parameters being passed, you just need to check the view model type included in the only request that has been made:

[TestMethod]
public void SimpleNavigationTest()
{
    var viewModel = new StartingViewModel();
    viewModel.NavigationCommand.Execute(null);

    Assert.AreEqual(1, MockDispatcher.Requests.Count);
    Assert.AreEqual(typeof(DestinationViewModel), MockDispatcher.Requests[0].ViewModelType);
}

When the navigation requires parameters to be passed along, they need to be verified as well. They are included in the request as a collection of string key-value pairs:

[TestMethod]
public void NavigationWithParametersTest()
{
    var viewModel = new StartingViewModel();
    viewModel.SelectedItem = new Item { Id = 42 };
    viewModel.NavigationCommand.Execute(null);

    Assert.AreEqual(1, MockDispatcher.Requests.Count);
    Assert.AreEqual(typeof(DestinationViewModel), MockDispatcher.Requests[0].ViewModelType);
    Assert.AreEqual(1, MockDispatcher.Requests[0].ParameterValues.Count);
    Assert.AreEqual("42", MockDispatcher.Requests[0].ParameterValues["Id"]);
}

Navigating back is implemented in MvvmCross using the semantics of a close operation; therefore such calls are logged as presentation hints, not requests:

[TestMethod]
public void NavigationBackTest()
{
    var viewModel = new StartingViewModel();
    viewModel.BackCommand.Execute(null);

    Assert.AreEqual(1, MockDispatcher.Hints.Count);
    Assert.AreEqual(typeof(MvxClosePresentationHint), MockDispatcher.Hints[0].GetType());
}

That’s all there is to it. Really simple, once you know how to do it.

Binding to Individual Dictionary Items in WinRT

$
0
0

XAML has first-class syntax support for binding to indexed properties, such as Dictionary items. It’s even possible to use FallbackValue to handle missing keys in the collection:

<TextBox Text="{Binding Dictionary[MissingKey], FallbackValue='Error'}" />

Of course, real properties still have their advantages over indexed ones, such as full support for implementing INotifyPropertyChanged. For Dictionary properties there is no way to raise PropertyChanged for a single item in the collection; it can only be done for the Dictionary property as a whole, which means a notification for all items in the collection at once.

private IDictionary<string, string> _dictionary;
public IDictionary<string, string> Dictionary
{
  get { return _dictionary; }
  set
  {
    _dictionary = value;
    OnPropertyChanged();
  }
}

viewModel.Dictionary = new Dictionary<string, string>
{ 
  {"Key", "New Value"},
  // ...
};

This brings some performance overhead, except for scenarios where the complete Dictionary is being reconstructed at the same time anyway.

Unfortunately, while this approach works great and doesn’t cause any issues whatsoever in WPF, in WinRT (i.e. Windows Store applications) raising PropertyChanged for the Dictionary causes problems when there are bindings to keys that are not present in the new Dictionary. The KeyNotFoundException thrown when the binding tries to access the non-existent item remains unhandled and causes the application to crash.

Unhandled exception

Having FallbackValue set doesn’t help. The exception does, however, get caught by the application-level UnhandledException handler, so you can handle it there:

void App_UnhandledException(object sender, UnhandledExceptionEventArgs e)
{
  e.Handled = true;
}

You’ll usually want to be more selective when handling exceptions here, and even add some kind of logging or error reporting for the other caught exceptions. Even if you add such a handler, your application will still break in debug mode unless you add the following conditional compilation symbol to your project: DISABLE_XAML_GENERATED_BREAK_ON_UNHANDLED_EXCEPTION (it can also be seen in the above screenshot).

Because of all this, you might still want to handle the exception in some other way in Windows Store applications. A standard approach would be a custom IValueConverter:

public class DictionaryConverter : IValueConverter
{
  public object Convert(object value, Type targetType, object parameter, string language)
  {
    var dictionary = value as Dictionary<string, string>;
    if (dictionary == null || !(parameter is string))
    {
      return null;
    }
    string result;
    dictionary.TryGetValue((string)parameter, out result);
    return result;
  }

  public object ConvertBack(object value, Type targetType, object parameter, string language)
  {
    return null;
  }
}

The obvious advantages of it are simplicity and no changes to the view model. The price for this is a slight performance hit caused by the converter and more complex XAML syntax:

<TextBox Text="{Binding Dictionary, Converter={StaticResource DictionaryConverter},
                                    ConverterParameter=MissingKey}" />

Alternatively the Dictionary indexer could be replaced with a different one, returning null instead of throwing exceptions when the key is not present. The original Dictionary could be wrapped in a custom IDictionary implementation:

public class NonThrowingDictionary<TKey, TValue> : IDictionary<TKey, TValue>
{
  private readonly Dictionary<TKey, TValue> _dictionary;

  public NonThrowingDictionary(Dictionary<TKey, TValue> dictionary)
  {
    _dictionary = dictionary;
  }

  public TValue this[TKey key]
  {
    get
    {
      TValue value;
      _dictionary.TryGetValue(key, out value);
      return value;
    }
    set
    {
      _dictionary[key] = value;
    }
  }
  
  // Implement other members by passing calls to _dictionary
}

Now this dictionary could be used in the view model instead of the original one. To reduce the amount of extra code, only the indexer property could be implemented:

public class DictionaryWrapper<TKey, TValue>
{
  private readonly Dictionary<TKey, TValue> _dictionary;

  public DictionaryWrapper(Dictionary<TKey, TValue> dictionary)
  {
    _dictionary = dictionary;
  }

  public TValue this[TKey key]
  {
    get
    {
      TValue value;
      _dictionary.TryGetValue(key, out value);
      return value;
    }
    set
    {
      _dictionary[key] = value;
    }
  }
}

Either way requires more code than the IValueConverter approach and has a greater impact on the view model. On the other hand, the XAML markup remains unchanged and there are some performance benefits, since the converter isn’t called every time the binding is refreshed.

Depending on your requirements and preferences, you can choose whichever approach best fits your case.

Get NuGet 2 Essentials for just $10


As you might have already noticed, Packt Publishing is currently celebrating 10 years of its existence. During this time they have managed to publish over 2000 books. For this very special occasion they have decided to offer a significant discount on their complete catalog of eBooks and videos – until July 5th all their titles can be purchased for just $10. If you’ve been considering buying one of their titles, now is the right time to do it. If you haven’t, why not browse through their catalog; you might find something that piques your interest.


Since I’m also the author of NuGet 2 Essentials, one of the books in their line-up, I encourage you to take a closer look at it. If you’re a .NET developer, it should be an interesting read no matter how much previous experience you already have with NuGet:

  • If you haven’t used it at all, you need to change that as soon as possible. This book can serve as a great introduction to it.
  • If you already know NuGet basics, the book will teach you how to take better advantage of it.
  • And if you’re already a proficient user, it will probably still show you a new trick or two.

You can read more about the book in my previous blog post about it or on the publisher’s official page. Of course, I’m at least a bit biased as the author, so don’t just take my word for its quality. The reviews have been quite positive as well; feel free to read them here, here, and here. Just don’t take too long: you only have until Saturday to get it at the discounted price.


Exposing FluentValidation Results over IDataErrorInfo


The IDataErrorInfo interface is really handy when implementing data validation in WPF. There’s great built-in support in XAML for displaying validation information to the user when the DataContext implements IDataErrorInfo - only the ValidatesOnDataErrors property needs to be set to True on the Binding:

<Grid>
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
    <RowDefinition Height="Auto" />
  </Grid.RowDefinitions>
  <Grid.ColumnDefinitions>
    <ColumnDefinition Width="Auto" />
    <ColumnDefinition />
  </Grid.ColumnDefinitions>
  <TextBlock Text="Name" Grid.Row="0" Grid.Column="0" />
  <TextBox Text="{Binding Name, ValidatesOnDataErrors=True}"
           Grid.Row="0" Grid.Column="1" />
  <TextBlock Text="Surname" Grid.Row="1" Grid.Column="0" />
  <TextBox Text="{Binding Surname, ValidatesOnDataErrors=True}"
           Grid.Row="1" Grid.Column="1" />
  <TextBlock Text="Phone number" Grid.Row="2" Grid.Column="0" />
  <TextBox Text="{Binding PhoneNumber, ValidatesOnDataErrors=True}"
           Grid.Row="2" Grid.Column="1" />
</Grid>

By default, controls with validation errors are rendered with a red border, but they don’t show the actual error message. This can be changed with a custom style applied to them:

<Style TargetType="TextBox">
  <Style.Triggers>
    <Trigger Property="Validation.HasError" Value="true">
      <Setter Property="Background" Value="Pink"/>
      <Setter Property="Foreground" Value="Black"/>
      <Setter Property="ToolTip"
              Value="{Binding RelativeSource={RelativeSource Self},
                              Path=(Validation.Errors)[0].ErrorContent}"/>
    </Trigger>
  </Style.Triggers>
  <Setter Property="Validation.ErrorTemplate">
    <Setter.Value>
      <ControlTemplate>
        <Border BorderBrush="Red" BorderThickness="1">
          <AdornedElementPlaceholder />
        </Border>
      </ControlTemplate>
    </Setter.Value>
  </Setter>
</Style>

Of course there are many different ways to implement IDataErrorInfo for a DataContext, but since I’ve recently become quite fond of the FluentValidation library for implementing validators, I’m going to focus on using it for the rest of this post. Creating a basic validator in FluentValidation usually takes only a couple of lines of code:

public ContactValidator()
{
  RuleFor(login => login.Name).NotEmpty();
  RuleFor(login => login.Surname).NotEmpty();
  RuleFor(login => login.PhoneNumber).NotEmpty();
  RuleFor(login => login.PhoneNumber).Length(9,30);
  RuleFor(login => login.PhoneNumber).Must(phoneNumber => phoneNumber == null || 
                                                          phoneNumber.All(Char.IsDigit))
                                     .WithMessage("'Phone number' must only contain digits.");
}

The easiest way of using it from IDataErrorInfo would be calling Validate from the indexer and filtering the results by the requested property:

public string this[string columnName]
{
  get
  {
    var result = _validator.Validate(this);
    if (result.IsValid)
    {
      return null;
    }
    return String.Join(Environment.NewLine,
                       result.Errors.Where(error => error.PropertyName == columnName)
                                    .Select(error => error.ErrorMessage));
  }
}

Since there can be more than one ValidationFailure for a single property, I’m joining them together into a single string, with each ErrorMessage on its own line.

This approach causes the Validate method to be called once for every binding with ValidatesOnDataErrors enabled. If your validator does a lot of processing, this can add up to a lot of unnecessary validation. To avoid that, the Validate method can instead be called every time a property on the DataContext changes:

private string _name;

public string Name
{
  get { return _name; }
  set
  {
    _name = value;
    Validate();
  }
}

private void Validate()
{
  var result = _validator.Validate(this);
  _errors = result.Errors.GroupBy(error => error.PropertyName)
                         .ToDictionary(group => group.Key, 
                                       group => String.Join(Environment.NewLine,
                                                            group.Select(error => error.ErrorMessage)));
}

The indexer now only needs to retrieve the cached validation results from the _errors Dictionary inside the DataContext:

public string this[string columnName]
{
  get
  {
    string error;
    if (_errors.TryGetValue(columnName, out error))
    {
      return error;
    }
    return null;
  }
}

The only code that doesn’t really belong in the DataContext is now inside the Validate() method. Instead of just calling the validator, it also parses its results and caches them in a Dictionary for future IDataErrorInfo indexer calls. This can be fixed by extracting the parsing logic into an extension method that can be used from any DataContext:

public static class ValidationExtensions // extension methods must be declared in a static class
{
  public static Dictionary<string, string> GroupByProperty(this IEnumerable<ValidationFailure> failures)
  {
    return failures.GroupBy(error => error.PropertyName)
                   .ToDictionary(group => group.Key,
                                 group => String.Join(Environment.NewLine,
                                                      group.Select(error => error.ErrorMessage)));
  }
}

This makes DataContext’s Validate method much simpler:

private void Validate()
{
  _errors = _validator.Validate(this).Errors.GroupByProperty();
}

The same pattern can be applied to any DataContext with a corresponding Validator. With minor modifications it can be used even in cases where the DataContext wraps a model class with its own validator or composes multiple such model classes, as sketched below. This is quite a common scenario when using the MVVM pattern.
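
Here’s a minimal sketch of that modification, assuming a hypothetical ContactViewModel wrapping a hypothetical Contact model, with ContactValidator and GroupByProperty as defined above; the only real change is that the wrapped model is validated instead of the DataContext itself:

public class ContactViewModel : IDataErrorInfo
{
  private readonly Contact _contact = new Contact(); // hypothetical wrapped model
  private readonly ContactValidator _validator = new ContactValidator();
  private Dictionary<string, string> _errors = new Dictionary<string, string>();

  public string Name
  {
    get { return _contact.Name; }
    set
    {
      // properties delegate to the wrapped model...
      _contact.Name = value;
      Validate();
    }
  }

  private void Validate()
  {
    // ...and the model's own validator produces the cached errors
    _errors = _validator.Validate(_contact).Errors.GroupByProperty();
  }

  public string Error
  {
    get { return null; }
  }

  public string this[string columnName]
  {
    get
    {
      string error;
      return _errors.TryGetValue(columnName, out error) ? error : null;
    }
  }
}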

DateTime Values in SQLite When Using MvvmCross

In my spare time I’m developing an application in MvvmCross, using SQLite for local data storage via the MvvmCross SQLite-Net plugin. Recently I stumbled across a very strange behavior. The issue involved a fairly simple table with a DATETIME column:

CREATE TABLE [Session] (
  [Id] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, 
  [StartedAt] DATETIME NOT NULL, 
  [ClosedAt] DATETIME, 
  [Remark] NVARCHAR(100));

I had a corresponding model class in my code:

public class Session
{
  [PrimaryKey, AutoIncrement]
  public int Id { get; set; }
  public DateTime StartedAt { get; set; }
  public DateTime? ClosedAt { get; set; }
  public string Remark { get; set; }
}

I used the following code for inserting new records:

var session = new Session
{
  StartedAt = DateTime.Now
};
connection.Insert(session);

Re-reading the value from the database also behaved as expected:

var result = connection.Table<Session>().ToList()
                       .Single(s => s.Id == session.Id);
// no error here
Assert.AreEqual(session.StartedAt, result.StartedAt);

Everything seemed okay, until I took a look at the table using SQLite Expert. This is how the inserted record looked:

Inserted record, as seen in SQLite Expert

Of course, this called for further investigation.

It turns out SQLite’s handling of DateTime values is quite strange. There is no separate storage class for DateTime values. Instead, they can be stored as TEXT, REAL or INTEGER values. Built-in date and time functions transparently support any of those.
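
A quick way to see this in action is to run the built-in datetime function over all three representations of the same instant. This sketch assumes an open SQLite-Net connection; ExecuteScalar simply runs the query and returns the first value:

// the same instant, stored three different ways, read back uniformly;
// all three calls return "2014-07-01 12:00:00"
var fromText = connection.ExecuteScalar<string>(
  "SELECT datetime('2014-07-01 12:00:00')");     // TEXT
var fromReal = connection.ExecuteScalar<string>(
  "SELECT datetime(2456840.0)");                 // REAL (Julian day)
var fromInt = connection.ExecuteScalar<string>(
  "SELECT datetime(1404216000, 'unixepoch')");   // INTEGER (Unix time)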

To make matters even more complicated, SQLite supports DATETIME data type inside CREATE TABLE statements. Such a column is assigned a NUMERIC type affinity which isn’t a storage class, either. It can contain values of any supported storage class, but tries to convert the inserted text to INTEGER or REAL if it can be done in a lossless manner.

Now it’s time to bring SQLite-Net into the equation. This library supports storing DateTime values either as TEXT or as INTEGER values. The mode is selected by the storeDateTimeAsTicks flag when creating a new SQLiteConnection. When creating a table with the library, the column data type is either DATETIME (for TEXT values) or BIGINT (for INTEGER values, i.e. ticks).
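
In plain SQLite-Net, the flag is just an optional constructor parameter; a minimal sketch ("db.sqlite" is a placeholder path):

// storeDateTimeAsTicks: false -> DateTime stored as TEXT,
//                       true  -> DateTime stored as INTEGER ticks
var textConnection = new SQLiteConnection("db.sqlite", storeDateTimeAsTicks: false);
var ticksConnection = new SQLiteConnection("db.sqlite", storeDateTimeAsTicks: true);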

By default the library doesn’t store DateTime values as ticks, which should still make everything work fine in my case. But here’s where the third player enters the stage: the MvvmCross SQLite-Net plugin. It is actually a fork of the SQLite-Net library with only minor changes to make it work easily with MvvmCross. But the one thing it does change is the default storage mode for DateTime values: when using the plugin, they are stored as ticks, i.e. INTEGER values, by default. This finally explains the strange behavior I observed: obviously SQLite Expert incorrectly interpreted the stored value since it didn’t match its expectations based on the column data type.

I decided to avoid the issue by switching the DateTime storage mode in SQLite-Net back to TEXT. MvvmCross abstracts platform specifics by providing a different SQLiteConnectionFactory for each platform. These factories implement two interfaces: ISQLiteConnectionFactory and ISQLiteConnectionFactoryEx. Platform-specific application setup registers the correct factory which implements both factory interfaces.

Usually you’ll obtain the correct factory in your view model by adding a constructor parameter of the required type. As long as you’re satisfied with the default connection settings, you can use the ISQLiteConnectionFactory interface:

public abstract class ViewModelBase
{
  private readonly ISQLiteConnectionFactory _sqliteConnectionFactory;

  private string _filename = "db.sqlite";

  protected ViewModelBase(ISQLiteConnectionFactory sqliteConnectionFactory)
  {
    _sqliteConnectionFactory = sqliteConnectionFactory;
  }

  protected ISQLiteConnection CreateConnection()
  {
    return _sqliteConnectionFactory.Create(_filename);
  }
}

To change the default setting, the ISQLiteConnectionFactoryEx interface needs to be used:

public abstract class ViewModelBase
{
  private readonly ISQLiteConnectionFactoryEx _sqliteConnectionFactory;

  private string _filename = "db.sqlite";

  protected ViewModelBase(ISQLiteConnectionFactoryEx sqliteConnectionFactory)
  {
    _sqliteConnectionFactory = sqliteConnectionFactory;
  }

  protected ISQLiteConnection CreateConnection()
  {
    return _sqliteConnectionFactory.CreateEx(_filename, 
      new SQLiteConnectionOptions { StoreDateTimeAsTicks = false });
  }
}

No other code needs to be changed.

I chose the TEXT storage class over INTEGER for convenience. Having a column data type of DATETIME instead of BIGINT makes it much easier to work with the data outside my application. This choice does have a disadvantage, though: less precision in the stored values. In particular, the following test will now most likely fail:

var session = new Session
{
  StartedAt = DateTime.Now
};
connection.Insert(session);

var result = connection.Table<Session>().ToList()
                       .Single(s => s.Id == session.Id);
// will fail most of the time:
// milliseconds are stripped when stored to the database
Assert.AreEqual(session.StartedAt, result.StartedAt);
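
If you do need such an assert to pass, one possible workaround (a sketch, not part of the library) is to truncate both values to whole seconds before comparing:

static DateTime TruncateToSeconds(DateTime value)
{
  // drop the sub-second part, matching what TEXT storage preserves
  return value.AddTicks(-(value.Ticks % TimeSpan.TicksPerSecond));
}

Assert.AreEqual(TruncateToSeconds(session.StartedAt),
                TruncateToSeconds(result.StartedAt));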

In my case this wasn’t an issue; second precision is enough for me. You’ll have to decide for yourself which storage mode is more suitable in your situation, though.

Removing a View from BackStack in MvvmCross for Windows Store Apps

Navigation in Windows Store apps is strongly based on the browser model, i.e. the application keeps a back stack of previously shown pages which are traversed again when navigating back. For most applications this approach works well, at least most of the time.

But there are some cases in which you don’t want the user to navigate back to a specific page in the history. A typical scenario would be a starting page for creating a new instance of something, e.g. a new game session: a user would navigate to it from the main menu, set some starting parameters there and then continue to the actual game session page. When navigating back, we don’t want the user to see the intermediate session setup page, but to return directly to the main menu instead.

Desired navigation behavior

Unfortunately there’s no cross-platform way to achieve the desired behavior, therefore there’s also no such built-in functionality available in MvvmCross. Still, the navigation model in MvvmCross is very straightforward and extensible, making it really simple to add such functionality if you approach it the right way.

The key to navigation in MvvmCross are the ViewPresenters. They handle two main types of messages emitted by view models: ViewModelRequests and PresentationHints. The former are used for switching between views and are not applicable to our case; the latter are used for everything else related to navigation and are the ones we will be taking advantage of.

The basic plan is as follows:

  • Define a new PresentationHint for removing the current (top) view from the navigation back stack.
  • Extend the built-in platform-specific ViewPresenter to intercept this navigation hint, modify the back stack accordingly, and delegate all other navigation events to its base class.
  • Use this new ViewPresenter in our application instead of the built-in one.
  • Send the PresentationHint from the view model we want removed from the back stack.

The complete process for a very similar scenario is thoroughly described in Ed Snider’s blog post. I strongly encourage you to read it, as it provides some additional insight into the inner workings of MvvmCross navigation, which will prove useful when you find yourself customizing it for your needs.

As already mentioned, we’ll start by creating a new PresentationHint class:

public class RemoveTopViewFromBackStackHint
  : MvxPresentationHint
{ }

Our custom ViewPresenter will need to handle this new PresentationHint, as it will be completely ignored by the built-in ViewPresenter:

public class ExtendedViewPresenter : MvxStoreViewPresenter
{
  private readonly Frame _rootFrame;

  public ExtendedViewPresenter(Frame rootFrame) 
    : base(rootFrame)
  {
    _rootFrame = rootFrame;
  }

  public override void ChangePresentation(MvxPresentationHint hint)
  {
    if (hint is RemoveTopViewFromBackStackHint)
    {
      if (_rootFrame.BackStackDepth > 0)
      {
        _rootFrame.BackStack.RemoveAt(_rootFrame.BackStack.Count - 1);
      }
    }

    base.ChangePresentation(hint);
  }
}

For this new ViewPresenter to actually be used, we need to register it instead of the built-in one in our application’s setup class:

public class Setup : MvxStoreSetup
{
  protected override IMvxStoreViewPresenter CreateViewPresenter(Frame rootFrame)
  {
    var presenter = new ExtendedViewPresenter(rootFrame);
    Mvx.RegisterSingleton(presenter);
    return presenter;
  }
  // other setup code...
}

Now everything is ready for sending our new PresentationHint from the view model. Because our code acts on the back stack, we need to wait until the page we want to remove is already in the back stack: we first navigate to the next page and only then send the new navigation hint:

private void OnStartSession()
{
  // code for initializing the session...
  ShowViewModel<SessionViewModel>();
  ChangePresentation(new RemoveTopViewFromBackStackHint());
}

When navigating back from SessionView, the intermediate view with the above ChangePresentation call will be skipped, and the previous page from the navigation back stack will be shown instead.

In a previous blog post I described how to unit test navigation between view models in MvvmCross. I used the following MockDispatcher class for logging navigation events and then asserting them:

public class MockDispatcher : MvxMainThreadDispatcher, IMvxViewDispatcher
{
  public readonly List<MvxViewModelRequest> Requests = new List<MvxViewModelRequest>();
  public readonly List<MvxPresentationHint> Hints = new List<MvxPresentationHint>();
  public bool RequestMainThreadAction(Action action)
  {
    action();
    return true;
  }
  public bool ShowViewModel(MvxViewModelRequest request)
  {
    Requests.Add(request);
    return true;
  }
  public bool ChangePresentation(MvxPresentationHint hint)
  {
    Hints.Add(hint);
    return true;
  }
}

This class doesn’t store all the information required to properly test our new type of navigation with removal from the back stack: because ViewModelRequests are stored separately from PresentationHints, there is no way to check whether the order of the two calls was correct. To make that possible, the MockDispatcher class needs to be changed so that both types of events are stored in a single collection, preserving their order:

public class MockDispatcher : MvxMainThreadDispatcher, IMvxViewDispatcher
{
  public class NavigationEvent
  {
    public MvxPresentationHint PresentationHint { get; private set; }
    public MvxViewModelRequest ViewModelRequest { get; private set; }

    public NavigationEvent(MvxPresentationHint presentationHint)
    {
      PresentationHint = presentationHint;
    }

    public NavigationEvent(MvxViewModelRequest viewModelRequest)
    {
      ViewModelRequest = viewModelRequest;
    }
  }

  public readonly List<NavigationEvent> NavigationEvents = new List<NavigationEvent>();

  public bool RequestMainThreadAction(Action action)
  {
    action();
    return true;
  }

  public bool ShowViewModel(MvxViewModelRequest request)
  {
    NavigationEvents.Add(new NavigationEvent(request));
    return true;
  }

  public bool ChangePresentation(MvxPresentationHint hint)
  {
    NavigationEvents.Add(new NavigationEvent(hint));
    return true;
  }
}

The refactoring only slightly changes the assertions for standard back and forward navigation (with and without parameters). At the same time it makes it very simple to assert for our new navigation type as well:

[TestMethod]
public void NavigationWithRemovalFromBackStackTest()
{
  var viewModel = new SetupSessionViewModel();
  viewModel.StartCommand.Execute(null);

  Assert.AreEqual(2, MockDispatcher.NavigationEvents.Count);
  Assert.AreEqual(typeof(SessionViewModel), 
                  MockDispatcher.NavigationEvents[0].ViewModelRequest.ViewModelType);
  Assert.AreEqual(typeof(RemoveTopViewFromBackStackHint), 
                  MockDispatcher.NavigationEvents[1].PresentationHint.GetType());
}
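
For comparison, asserting standard back navigation barely changes with the combined list. A sketch, assuming a hypothetical CloseCommand that calls Close(this) on the view model, which sends an MvxClosePresentationHint:

[TestMethod]
public void BackNavigationTest()
{
  var viewModel = new SessionViewModel();
  viewModel.CloseCommand.Execute(null); // hypothetical command calling Close(this)

  Assert.AreEqual(1, MockDispatcher.NavigationEvents.Count);
  Assert.AreEqual(typeof(MvxClosePresentationHint),
                  MockDispatcher.NavigationEvents[0].PresentationHint.GetType());
}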

If the navigation type described in this blog post is not exactly what you are looking for, don’t get discouraged. It shouldn’t be difficult to use the same approach to achieve the behavior you require.

Book Review: OUYA Game Development by Example

The latest book from Packt Publishing that I got a chance to review was Jack Donovan’s OUYA Game Development by Example. One could easily say that it’s more than a bit outside my usual field of expertise. I’m no game developer (the stuff I was doing while still in school doesn’t count) and I don’t own an OUYA. Well, at least I know a lot about C#, which is used as the scripting language in the book. One could also say that, because of that, I am the target audience for the book.

Still, even with my lack of previous experience, the book starts out really slow; too slow in my eyes. Then again, this should make it all the more suitable for complete beginners who have never programmed before. C# is explained from the very basics, which is quite a challenge considering the small number of pages dedicated to the topic. As a side effect, there are a couple of inaccuracies and over-simplifications, but hopefully readers will grab a more in-depth book about C# and programming afterwards.

Of course, most of the book focuses on Unity and gives quite a thorough overview of the basics through examples which make a lot of sense by the end of the book. The author’s experience with game development definitely shines through in these sample games. There isn’t much OUYA- or Android-specific content either, except for the obvious development environment setup and instructions on publishing and monetization options. I did get the feeling, though, that the book focuses too much on step-by-step instructions and falls a bit short on the bigger picture: explaining why we’re actually doing all these things and how they work under the covers. Obviously, the reader will again have to find this information in a more advanced book.

I liked how the book includes many calls to action throughout, giving the reader challenges to complete on their own. Still, this book can really serve only as the first step on the path to becoming a game developer, albeit a good one. The author is aware of that and therefore concludes with a couple of more advanced topics, such as development methodologies, source control, and architectural patterns; probably hoping to leave the reader craving more. I can sincerely recommend the book to anyone trying to get a glimpse into the world of game development. It’s enough to see whether game development is something for you and worth exploring further.

The book is available on Amazon and sold directly by the publisher.

Install .NET Windows Service with a Different Name

Creating a Windows service in .NET could hardly be any easier. There’s a template in Visual Studio which sets everything up and there’s even a detailed walkthrough published on MSDN which leads you through the whole process from creating a new project to actually installing the service.

Installing multiple instances of such a service on a single computer is not that easy. You could do it by using Sc.exe instead of InstallUtil.exe (see the example below), or you could modify the installer in your Windows service project to support configurable names. I prefer the latter approach, but there’s not much documentation about it, which is probably the reason for many articles on the web describing over-complicated custom solutions instead of taking advantage of the APIs that are already available.
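
For reference, the Sc.exe route would look something like this (the space after each equals sign is required; the path is hypothetical):

sc create A.Service binPath= "C:\Services\WindowsService1.exe" DisplayName= "A Service"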

InstallUtil.exe has built-in support for passing arguments to the Windows service’s Installer class. The Installer base class parses them and puts them in a handy StringDictionary, which can be accessed through InstallContext. This means you can read these values inside your Installer and use them to change the default service name:

private void SetServiceName()
{
  if (Context.Parameters.ContainsKey("ServiceName"))
  {
    serviceInstaller1.ServiceName = Context.Parameters["ServiceName"];
  }

  if (Context.Parameters.ContainsKey("DisplayName"))
  {
    serviceInstaller1.DisplayName = Context.Parameters["DisplayName"];
  }
}

Of course, it matters where this code is called from. If you try putting it in the class constructor, the installer will fail because Context is not yet initialized. Fortunately, the base class provides many virtual methods which you can override to get access to the Context after initialization. In our case we need to override OnBeforeInstall and OnBeforeUninstall:

protected override void OnBeforeInstall(IDictionary savedState)
{
  SetServiceName();
  base.OnBeforeInstall(savedState);
}

protected override void OnBeforeUninstall(IDictionary savedState)
{
  SetServiceName();
  base.OnBeforeUninstall(savedState);
}

The desired service name can now be passed as an argument to InstallUtil.exe:

installutil /ServiceName=A.Service /DisplayName="A Service" .\WindowsService1.exe

The same goes for uninstall:

installutil /u /ServiceName=A.Service .\WindowsService1.exe

To use the default name, the parameters can simply be omitted.
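
In other words, the calls for the default name reduce to:

installutil .\WindowsService1.exe
installutil /u .\WindowsService1.exe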
