Jon Thysell

Father. Engineer. Retro games. Ukuleles. Nerd.

Tag: .net

The State of Mzinga, February 2018

Mzinga is my open-source AI for the board game, Hive, and I can’t believe that the last time I posted it was all the way back in Summer 2016.

So much has happened since then!

At the time, my focus was on the Mzinga Trainer, a tool for improving my board evaluation function, i.e. if I look at a given board, how accurately can I answer: who is winning and by how much?

The Mzinga Trainer does this through a genetic evolution algorithm, whereby a population of AIs (each with their own evaluation function) endlessly battle one another, and the winners earn the right to “breed” new AIs (passing on a mix of both parents’ functions) while the losers get culled from the population.

So in July 2016, I was just getting AIs to battle one another. Fast forward to July 2017, and I had:

  • A feature-rich Trainer with dozens of options, including support for running multiple battles concurrently on different processors
  • A refactored AI, making it much easier to validate that I was implementing the algorithms properly
  • Some key performance tests to evaluate changes to the core code

I had also spent several hundred hours of computing time running experiments with the Trainer. Hundreds of AIs lived and died as I tweaked parameters and followed the most successful AIs (and their children). But by July 2017 I had stepped away from Mzinga and started spending my spare time on other projects.

Then around December of last year, having not touched Mzinga for months, I suddenly had the inspiration for how to fix a particular performance bug I’d had, and by the start of the new year, I was back on the Mzinga bandwagon.

Since I’d spent so much time on the Trainer, from a regular user’s perspective, Mzinga hadn’t changed much since 2016. The Viewer (where you can actually play games) had seen some minor bug fixes, and while the Engine (with the core of the game and the AI) was technically faster and stronger, I doubt it was noticeable to most players. I regularly lose to people playing for the first time, and I could still beat the Mzinga AI.

I spent most of January focused on code performance – nothing AI-specific, nothing that needed any study or research, just analyzing the code and removing every little bottleneck I could find.

Then, after nearly eight years on my venerable, dual-core laptop, I finally bit the bullet and bought a new PC. A custom, fancy, rocking, eight-core beast with horsepower to spare. And wouldn’t you know, suddenly I had an interest in hitting the old chess AI sites and researching how the best chess AIs take advantage of multiple processors.

Now Mzinga’s AI is starting to get fancy:

  • It spreads its search across multiple processors, via “Lazy SMP”
  • It searches during your turn instead of waiting, aka “Pondering”
  • When it finds the “best move”, it peeks just a little further ahead to make sure, via “Quiescence Search”
  • It keeps you updated each time it finds a better move

On top of all that, everything is all nice and asynchronous, meaning I can cancel a search when I want to. I don’t have to say, “search 2 moves deep” or “search for 5 seconds” and then sit and wait for it to stop. If I want to stop it early, I can. If I want to just let it search, I can, and stop it when I’m good and ready.
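
To make that concrete, here's a minimal sketch of a cancellable, iteratively deepening search loop. The names here (FindBestMoveAsync, SearchToDepth) are made up for illustration and aren't Mzinga's actual API:

using System;
using System.Threading;
using System.Threading.Tasks;

public static class CancellableSearch
{
    public static async Task<string> FindBestMoveAsync(CancellationToken token)
    {
        string bestMove = null;

        await Task.Run(() =>
        {
            // Keep searching one ply deeper until someone asks us to stop.
            for (int depth = 1; !token.IsCancellationRequested; depth++)
            {
                string candidate = SearchToDepth(depth, token);
                if (!token.IsCancellationRequested)
                {
                    bestMove = candidate; // each deeper pass can report a better move
                }
            }
        });

        return bestMove;
    }

    private static string SearchToDepth(int depth, CancellationToken token)
    {
        // Stand-in for the real alpha-beta search at the given depth.
        return "best move found at depth " + depth;
    }
}

The calling code just holds on to a CancellationTokenSource, lets the search run as long as it likes, and calls Cancel() whenever it wants the answer.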

Now sometimes the AI beats me.

I’ve also started addressing the Mzinga Viewer:

  • It’s faster and more responsive
  • It’s finally got some options, so you can tweak your experience:
    • Do you like your pieces with the flat or the pointy side up?
    • Do you want pieces that can’t move to be “grayed-out”?
    • Do you want to see certain things highlighted, like your opponent’s last move or what the valid moves are?

Some of these features are new, but some already existed in the Viewer and were always on because I liked them. Now you can turn them off if you don’t.

These are just a taste of how much Mzinga has changed, and there’s still more to come! Check it out today if you haven’t and send your feedback my way!

Stay tuned,

/jon

Making some retro games with LÖVE


I was in the mood to make some games this past week, when I discovered the wonderfully powerful yet lightweight LÖVE game framework.

After watching some videos on YouTube highlighting games others have made*, I set to work on one of my own. Within a couple of hours I had Pong running on my phone!

I decided I’d keep exploring the APIs by building a suite of simple retro games, the source of which I’ve uploaded to a new GitHub repo named RetroLove.

So far I have decent clones of Pong and Breakout. I’m thinking maybe Asteroids next. It’s so much fun, and the framework couldn’t be easier to use.

Check it out,

/jon

*: The story on how I found LÖVE is actually a little bit longer and shows the circuitous way by which I find myself doing these kinds of projects. I was working on another project, my first Universal Windows Platform app (that is, an app that runs on all Windows 10 devices). The project itself is something I’ve had on the back-burner for years, and I was primarily using it as an excuse to practice using XAML, which is the UI framework for UWP apps. (My day-job is on the XAML team at Microsoft, so using it myself only helps me to better understand our customers.)

Anyway, my app needed an embedded scripting language, and I wanted to use Lua, but I needed something that would work within a UWP app, which, because it needs to work on lots of different kinds of devices, meant I couldn’t use the native Lua libraries easily.

This led me to the very cool MoonSharp project, which is a Lua interpreter implemented entirely in C#. After I got that up and running, the app also needed a kind of visual editor, so I needed to do some 2D graphics work. It was then that I discovered the Win2D project, which makes it easier to do GPU-powered 2D graphics within XAML.

Then an idea hit me: what if I had a 2D game engine designed similar to retro consoles? Where the engine would be responsible for maintaining backgrounds, sprites, and drawing to the screen, and the game developer would provide the game in the form of a package of Lua code and game assets? I could make a UWP app that used MoonSharp to interpret the Lua and used Win2D to draw to the screen.

It had been some 15 years since the last time I’d made a game engine. In college I made a decent Mario-style platformer in Java; it had realistic physics, collision detection, music, sprite animations, scrolling, power-ups and enemies. I only ended up making one tech-demo of a level, but in the process I learned a ton about game engine design. (Though looking at that old code now… yikes.)

I spent a weekend trying to implement my idea for a sprite-based 2D game engine for Windows 10. It took me those two days to realize the true scope of such a project, and unfortunately I couldn’t get MoonSharp and Win2D to cooperate. Knowing that my idea probably wasn’t a unique one, I started searching online for other Lua-powered 2D game engines.

That’s how I found the free and open-source LÖVE. After seeing the size of its community, and how mature it was, I quickly abandoned my own broken code and started work on Pong.

Automatically generating version numbers in Visual Studio

Having intelligible version numbers is one of the easiest ways for developers to keep track of their software in the wild. However, having to maintain version numbers manually across multiple projects can be an annoying, error-prone process, especially if you’re trying to build and release often.

So how can we automate this task away?

The Goal

I have a Visual Studio solution with multiple projects, which therefore generates multiple assemblies for my app. In no particular order, here’s what I want:

  1. Every assembly has the exact same version number
  2. The option to manually specify a version number (say for infrequent builds that I release to the general public)
  3. The option for VS to auto-generate a build number (say for frequent builds that are released to beta testers)

So, what are my options?

Sharing one version number

Let’s tackle the first requirement. By default, every code project in a Visual Studio solution has its own AssemblyInfo.cs, and each specifies its own version number that needs to be updated and maintained.

But there’s no reason we have to stick with that. A better idea is to:

  1. Remove the AssemblyVersion attribute from AssemblyInfo.cs in all of your projects
  2. Create a new file, say SharedInfo.cs, in one of your projects (typically the one that builds first) and put the AssemblyVersion attribute there.
  3. For all of your other projects, simply add that SharedInfo.cs as a link. (In the “Add Existing Item…” dialog, click the arrow next to the “Add” button and choose “Add As Link”.)

Now you only have one file to update your version information in, making it much more manageable. As a bonus, you can move any common assembly attributes you want into your SharedInfo.cs, like Copyright, Product, etc.
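
For reference, a linked SharedInfo.cs along those lines doesn't need much in it. The version, product, and copyright values below are just placeholders, not anything from a real project:

// SharedInfo.cs - linked into every project in the solution.
using System.Reflection;

// The one place the version numbers live.
[assembly: AssemblyVersion("1.2.0.0")]
[assembly: AssemblyFileVersion("1.2.0.0")]

// Other attributes shared by every assembly can live here too.
[assembly: AssemblyProduct("MyApp")]
[assembly: AssemblyCopyright("Copyright © 2016")]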

Automatic versions

Being able to manually specify a version number is easy enough. Whether you keep the default Visual Studio AssemblyInfo.cs paradigm or the SharedInfo.cs method above, it’s just a matter of you picking and setting a new version number when you feel like it.

But now let’s see what we can do about having automatic versions.

Option 1: Use the built-in wildcards

The first option is given to us right in the default AssemblyInfo.cs, which lets us know that we can simply set our version to Major.Minor.* and let Visual Studio auto-generate the Build and Revision numbers.
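
In your AssemblyInfo.cs (or SharedInfo.cs) that ends up looking something like this, where the 1.0 is just an example Major.Minor:

using System.Reflection;

// The asterisk tells the compiler to fill in the Build and Revision numbers itself.
[assembly: AssemblyVersion("1.0.*")]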

Visual Studio then sets the Build based on the number of days that have passed since January 1st, 2000, and sets the Revision to the number of two-second intervals that have elapsed since midnight. Sounds good so far, and of course you can always switch back to a manual version at any time. But there are a couple of caveats:

  1. This only works with AssemblyVersion, not AssemblyFileVersion. To make this work for both, you have to specify just the AssemblyVersion (comment out or delete the AssemblyFileVersion line) and then Visual Studio is smart enough to use the same value for AssemblyFileVersion.
  2. Both numbers are generated at the exact time the particular project was built. So if you have multiple projects in your solution and any one takes more than two seconds to build, you will end up with different versions for different assemblies.

Option 2: Use an extension

The next option, one I used for years, is to simply hand over the responsibility of generating versions to the VS extension Automatic Versions.

It features a very nice GUI and there are lots of styles of automatic versions you can specify for your projects. It resolves both of the issues with using the built-in wildcards, letting you specify AssemblyVersion and AssemblyFileVersion, and also making sure that every project gets the same version number (whether you share a linked file or not).

Downsides?

  1. You might not find a version style that you like. While there are a lot of version number patterns to choose from, if you have a specific format you need to adhere to (say to match existing releases), you might be out of luck.
  2. You’ve just added a dependency to your code. Are you sharing your code with other developers? Now they have to install the extension too.

Adding dependencies means adding risk. For over a year, the VS Performance Profiler simply would not work for me, throwing an error and crashing whenever I tried to analyze my applications. I thought maybe my VS 2013 install was simply borked, but the problem followed me to VS 2015.

Turns out Automatic Versions was the culprit. Now, to be fair, once I reported the bug the developer very quickly issued a fix and now the Profiler is fine. But I searched for a solution to that Profiler error for a year, and at no point did it occur to me that an extension might be causing the problem.

Option 3: Use a T4 text template

The last option I want to talk about is using a T4 text template. What are T4 templates? From Code Generation and T4 Text Templates on MSDN:

In Visual Studio, a T4 text template is a mixture of text blocks and control logic that can generate a text file.

If you’ve ever made a website in ASP.NET or PHP, then you’ll have no problem with T4 text templates. Basically, you create a template file with some in-line blocks of C# that get run when you build your project to generate your actual file.

So, instead of creating a straight SharedInfo.cs like we did before, now we create a SharedInfo.tt and write a little bit of code to handle generating new version numbers.

Simply add a new T4 text template to your project and paste in the following to get started:

<#@ template debug="false" hostspecific="false" language="C#" #>
<#@ output extension=".cs" #>
<#
    int major    = 1;
    int minor    = 0;
    int build    = 0;
    int revision = 0;

    // TODO: Write code here to automatically generate a version

    string version = String.Format("{0}.{1}.{2}.{3}",
                                   major,
                                   minor,
                                   build,
                                   revision);
#>
// This code was generated by a tool. Any changes made manually will be lost
// the next time this code is regenerated.

using System.Reflection;

[assembly: AssemblyVersion("<#= version #>")]
[assembly: AssemblyFileVersion("<#= version #>")]

Now, if you’re having trouble reading this, basically: the first couple of lines are directives telling the template to create a .cs file. The code in the <# #> block runs when the project is built, and it’s where you’ll need to decide how you want to generate your version numbers. After that comes the static template text of the output file, and wherever you see <#= version #>, that’s where the value of the version string will be inserted.

As it stands, if you created a SharedInfo.tt with the above template, you’ll get a SharedInfo.cs with the following:

// This code was generated by a tool. Any changes made manually will be lost
// the next time this code is regenerated.

using System.Reflection;

[assembly: AssemblyVersion("1.0.0.0")]
[assembly: AssemblyFileVersion("1.0.0.0")]

And just like before, you can then add links to that SharedInfo.cs (not the .tt file!) into your other projects. So all you need to do is write the code that generates the automatic versions. What does that code look like? It’s up to you!

In my projects, I usually have a “manual” mode whereby the version numbers I entered are used for public production builds, and overwritten by something based on DateTime.UtcNow for private or test builds. If you want plain, auto-incrementing numbers, you might go with reading in your current SharedInfo.cs to parse out the previous version number, increment, and save it back out.
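
As one rough example of what that code might look like (this is just an illustration of the shape of the idea, not the exact scheme any of my projects use), the body of the <# #> block in SharedInfo.tt could grow into something like this:

    // Flip this to true for public production builds that use the hand-picked numbers below.
    bool manualVersion = false;

    int major    = 1;
    int minor    = 0;
    int build    = 0;
    int revision = 0;

    if (!manualVersion)
    {
        // Derive Build and Revision from the current UTC time for private/test builds.
        // (Each part of an assembly version must stay below 65535, so keep the math small.)
        DateTime now = DateTime.UtcNow;
        build    = ((now.Year % 100) * 100) + now.Month;                      // e.g. 1802 for Feb 2018
        revision = (now.Day * 1000) + (int)(now.TimeOfDay.TotalMinutes / 2);  // day of month plus 2-minute ticks
    }

    string version = String.Format("{0}.{1}.{2}.{3}", major, minor, build, revision);

The important part is simply that whatever you compute ends up in the version string the template writes out.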

Well, there you have it, completely customizable automatic versioning in Visual Studio solutions. Like it? Have a different approach? Sound off in the comments!

/jon

P.S. If you want to see a “real-world” example of how I pick version numbers check out What the Chordious version numbers mean.

Chordious 2.0 is here

It’s been two years since I started work on Chordious 2.0, and today is the first official release.

Read more about it in the announcement blog post.

/jon

Creating an AI to play Hive with Mzinga, Part IV

Mzinga is my attempt at an open-source AI for the board game Hive. So far in this series of posts, I’ve given an overview of the project, a dive into Mzinga’s AI, and then a deeper dive into Mzinga’s board evaluation function. Altogether, I have quite the framework for executing a decent Hive AI, but the question remains: How do I make it better?

How do people get better at games? Study and competition. Trial and error. Strategies that win us games are remembered and repeated while plays that lose us games are remembered so that we don’t repeat them. But in a game with so many things to consider, where there is no “right” move each turn, you can’t just memorize the answer, you have to actually play.

What are the things to consider? The metrics I talked about in the last post, like how many moves are available and which pieces are surrounded or pinned. Mzinga looks at roughly 90 different metrics. For each there is a corresponding weight, measuring how much that metric should impact the board’s score.
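
One straightforward way to turn metrics and weights into a single score is a weighted sum. Here's a tiny sketch of that idea (the metric names and the exact way of combining them are illustrative, not lifted from Mzinga's source):

using System.Collections.Generic;

public static class BoardEvaluator
{
    // metrics: what the board looks like right now, e.g. "MyQueenNeighbors" = 2.
    // weights: how much each metric helps (positive) or hurts (negative) the score.
    public static double Score(IReadOnlyDictionary<string, double> metrics,
                               IReadOnlyDictionary<string, double> weights)
    {
        double score = 0.0;
        foreach (var metric in metrics)
        {
            double weight;
            if (weights.TryGetValue(metric.Key, out weight))
            {
                score += metric.Value * weight; // weighted sum over every metric
            }
        }
        return score;
    }
}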

How an AI plays is largely determined by its metric weights. AIs with different weights will make different decisions about which move to play. Creating the best AI, then, is an exercise in determining the best metric weights. Since we don’t know what the best numbers are, our best option is to:

  1. Create a bunch of AIs with different metric weights
  2. Have them play a ton of games against one another
  3. Pick the one that proves that it’s the best

To pick the best one, I need a way of rating each AI. So I followed in chess’s footsteps and adopted the Elo rating system (the update arithmetic itself is sketched just after this list). It does a few things I like:

  1. Each player has only one number to keep track of, so it’s easy to see who’s the “best”
  2. A player’s rating is just an estimate of how strong the player is:
    1. It goes up and down as they win and lose and their rating is recalculated
    2. Points are taken from the loser and given to the winner, so there’s a finite amount of “rating points” to go around
    3. More games mean a more accurate estimate of their strength, so no one game can make or break a player’s rating
  3. When recalculated, a player’s rating takes into consideration who was playing:
    1. A good player beating a bad player is not news, so the winner only takes a few points from the loser, i.e. players can’t easily inflate their ratings by beating awful players
    2. A bad player beating a good player is an upset, so to reflect that the ratings were wrongly estimated, the winner gets lots of points from the loser
    3. Two players with similar scores are expected to draw, so if one wins, it’s a medium amount of points transferred to separate their ratings a little more accurately
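
For the curious, the arithmetic behind those bullet points is the textbook Elo update: compute the winner's expected score from the rating gap, then transfer points in proportion to how surprising the result was. A quick sketch (K = 32 is a common default; I'm not claiming it's the exact constant the Trainer uses):

using System;

public static class Elo
{
    // Returns the new (winner, loser) ratings after a single game.
    public static (double, double) Update(double winner, double loser, double k = 32.0)
    {
        // Expected score for the winner given the current rating gap.
        double expected = 1.0 / (1.0 + Math.Pow(10.0, (loser - winner) / 400.0));

        // The winner actually scored 1, so the transfer grows with how unexpected the win was.
        double delta = k * (1.0 - expected);
        return (winner + delta, loser - delta);
    }
}

With K = 32, a 1400-rated player beating a 1200-rated player takes only about 8 points, while the upset in the other direction moves about 24.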

Now the Elo system isn’t perfect. The biggest issue is that it only shows the relative strength of the players involved – you need lots and lots of players participating for the ratings to mean anything. There are lots of criticisms and variants when it comes to real people playing, but it’s fine for what we need it for.

So now we have a method for finding the best player in a population of AIs. Create a bunch of AIs, have them fight one another to improve the accuracy of their Elo rating, and pick the one with the highest rating. But like I said earlier, Elo ratings only show relative strength in a given population. What if I don’t have a good population? What if 99% of them are so awful that an AI with some basic strategy completely dominates? That’s no guarantee that the winning AI is actually all that good.

How do we improve a population of players so that we can be sure that we’re getting the best of the best?

It turns out we’re surrounded by an excellent model on how to make a population of something better and better: natural selection and evolution.

In the real world, living things need to fight to survive. The creatures that can’t compete die off, while those with the best traits survive long enough to pass on their traits to their offspring. Offspring inherit a mix of their parents’ traits, but they’re more than the sum of their parts. New DNA comes into the population as new members join or mutations occur.

We can absolutely simulate evolution in our population of AIs. The life-cycle goes something like this:

  1. Create a bunch of different AIs with different weights (traits)
  2. Have them all fight each other to sort out a ranking of who’s the strongest
  3. Kill off the weakest AIs
  4. Have some of the remaining AIs mate and produce new AIs to join the population
  5. Repeat 1-4

Now the first three steps seem pretty straightforward, but AI mating? It’s actually not that hard to implement. For each metric weight, simply give the offspring the average of the two parents’ values. To make sure an AI’s children aren’t all identical, add a little variance, or mutation, by tweaking the result a little bit.

For example, if parent A has a particular weight of 5, and parent B a weight of 3, rather than giving every child the weight of 4, give them each something random between 3.9 and 4.1. Just like in real life, we don’t necessarily know which traits were the most important in making sure that the parent survived, and we don’t get to pick which traits get passed on. So we pass on everything, good and bad, and let life (in this case lots of fighting) determine who’s better overall.
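
In code, that mating step is only a few lines. A rough sketch of the idea (not the Trainer's actual implementation; the +/- 0.1 mutation here just matches the 3.9-to-4.1 example above, and in practice you'd make it tunable):

using System;

public static class Breeding
{
    private static readonly Random _random = new Random();

    // Child weights are the average of the parents', nudged by a small random mutation.
    public static double[] Mate(double[] parentA, double[] parentB, double mutation = 0.1)
    {
        double[] child = new double[parentA.Length];
        for (int i = 0; i < child.Length; i++)
        {
            double average = (parentA[i] + parentB[i]) / 2.0;
            double nudge = (_random.NextDouble() * 2.0 - 1.0) * mutation; // between -0.1 and +0.1
            child[i] = average + nudge;
        }
        return child;
    }
}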

Now we can start a new generation and have them all start fighting again, so we can suss out everyone’s rankings in the presence of these (presumably better) offspring. Add in some new AIs every now and then to prevent too much inbreeding (where one AI’s traits, good and bad, start to dominate and everyone starts looking more and more like clones of one another) and we now have a true population of AIs, all fighting to be the best.

Now, how exactly am I doing all of this?

With the Mzinga Trainer.

It’s a simple, yet highly configurable little tool for creating, maintaining, and evolving a population of AIs. I started with several seeds of simple AI weights that I handcrafted, plus a slew of randomly generated ones. Then I set up different populations on different computers with different parameters, and have had them fighting, dying, and mating with each other for over a week.

It has made Mzinga one of my more interesting projects. As I’ve made improvements to the tool, I’ve spun up new populations, mixed existing ones, and run life-cycles with different parameters. Some run under really harsh conditions, where so many get killed that the re-population comes from the offspring of very few survivors. When I noticed that one small population had become so inbred as to appear like clones of one another, I added in some new blood, so to speak. Then I reduced the harshness to give different AIs, and ultimately different play styles and strategies, a chance to survive and procreate without getting mercilessly killed off.

It’s an ongoing process, and like life itself, something that takes a lot of time to happen.

The Mzinga Trainer tool is included with the install of Mzinga. Now, not only can you try your hand against the Mzinga AI I’ve created, but you can even try to make a better AI of your own. So come on and try out Mzinga today!

/jon