Gamer Footprint Update and E1337 Entertainment Merger

Hey guys! I know it's been about a month since the last update, but things have been pretty busy lately. I am in the process of merging Gamer Footprint with E1337 Entertainment in order to bring tournaments to the Gamer Footprint community and statistics tracking to the E1337 Entertainment community. We are very excited to be joining forces! Over the next couple of months, we will be supporting tournament brackets, an interactive calendar for events, and statistics tracking for tournaments as part of the merger.

I managed to get automated builds working with TeamCity for both the development and production sites for Gamer Footprint. This was a fun task to finish. I set up a build for each environment that pulls the latest source from Bitbucket and walks through the build steps: restoring NuGet packages, building the source, and minifying the JavaScript (on the production environment only). Within a minute or two of pushing changes to Bitbucket, they are live on their respective environments.

Since the last update, I have implemented a few new features and fixed a couple of bugs:

I finished the first iteration of timelines on both the global and the personal scale. For now, the timelines are generated manually; I am still working on incremental updates that run automatically, and I expect to have that finished in the next couple of weeks. Both the global and personal timelines include games played on PlayStation Network and Xbox Live, along with the trophies or achievements earned on each network. I'm also playing with the idea of subscribing to or following another user so that their updates appear in your own feed. That way, you don't see timeline events from everyone on Gamer Footprint, which could grow quite large over time. As development progresses, I will add other events to the timeline, including games and achievements from Steam and Battle.net, and potentially statistics from other games such as Halo 4.

I've been experimenting with setting up push notifications via SignalR to send near real-time updates of gamers' online presence on Xbox Live and PlayStation Network. This is still in active development and is not considered finished. I will look at adding presence information from other networks such as Steam in the near future. Through push notifications, we can also send messages to all actively connected users on the website or to specific users. Messages can include maintenance notifications with estimated downtime, alerts when your friends come online on a specific network, achievement/trophy unlocks, and much more.
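To sketch what that looks like, here is the shape of a SignalR 2.x hub that could push those kinds of messages. The hub and method names below are hypothetical illustrations, not the actual Gamer Footprint code:

namespace GamerFootprint.Hubs
{
    using Microsoft.AspNet.SignalR;

    // Hypothetical presence hub, for illustration only.
    public class PresenceHub : Hub
    {
        // Broadcast a site-wide message (e.g. a maintenance notification
        // with estimated downtime) to every connected user.
        public void NotifyAll(string message)
        {
            Clients.All.receiveNotification(message);
        }

        // Notify a single signed-in user, e.g. that a friend came online
        // on a specific network. Clients.User() requires SignalR 2.x with
        // a user ID provider that maps users to connections.
        public void NotifyUser(string userId, string network, string friendName)
        {
            Clients.User(userId).friendOnline(network, friendName);
        }
    }
}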

The goal for Gamer Footprint and E1337 Entertainment is to provide a place for gamers to meet, collaborate, and participate in tournaments from around the world. Without Gamer Footprint, each gaming community is separated by platform or console. We want to bridge that gap and promote organic connections with players from a plethora of gaming backgrounds.

Please stay tuned for the latest updates on both Gamer Footprint and E1337 Entertainment. The post-merger name and branding are still being decided, but we will keep you posted.

OWIN Self-Hosted Test Server for Integration Testing of OData and Web API

A co-worker and I were recently tasked with performing integration testing on OData and Web API services. You can view his posts on the subject in his series: Part 1, Part 2, and Part 3. Traditionally, one might mock the web requests and responses, but by using the TestServer found in the Microsoft.Owin.Testing namespace, we can start an in-memory HTTP server and run full integration tests. You can get the NuGet package here.

To start, create a new unit testing project with MSTest and a new ASP.NET MVC / Web API project. In the ASP.NET MVC / Web API project, install Web API 2.2 and Web API 2.2 for OData v4.0 and OData v1-3. For the unit test project, install Web API 2.2, Web API 2.2 for OData v4.0 and OData v1-3, Web API 2.2 OWIN Self Host, Web API 2.2 OWIN, and Microsoft.Owin.Testing. Below is a sample unit/integration test that sets up an OWIN test server for in-memory integration testing:

namespace SelfHosting.Test
{
    using Microsoft.Owin.Testing;
    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Owin;
    using SelfHosting.Test.Models;
    using System;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;
    using System.Web.Http.Dispatcher;
    using System.Web.OData.Builder;
    using System.Web.OData.Extensions;
    using WebApp.Models;

    [TestClass]
    public class SelfHostingTest
    {
        protected TestServer server;

        [TestInitialize]
        public void Setup()
        {
            // Spin up an in-memory OWIN pipeline using the same Web API
            // configuration as the real application.
            server = TestServer.Create(app =>
            {
                HttpConfiguration config = new HttpConfiguration();
                WebAppFacade.WebApiConfig.Register(config);
                app.UseWebApi(config);
            });
        }

        [TestCleanup]
        public void TearDown()
        {
            if (server != null)
                server.Dispose();
        }

        [TestMethod]
        public async Task TestODataMetaData()
        {
            // Requesting the service root returns the OData service document,
            // which lists the entity sets exposed by the service.
            HttpResponseMessage response = await server.CreateRequest("/odata/").GetAsync();

            var result = await response.Content.ReadAsAsync<ODataMetaData>();

            Assert.IsTrue(result.value.Count > 0, "Unable to obtain the service document");
        }

        [TestMethod]
        public async Task TestWebApi()
        {
            HttpResponseMessage response = await server.CreateRequest("/api/values").GetAsync();

            var result = await response.Content.ReadAsStringAsync();

            Assert.AreEqual("\"Hello from foreign assembly!\"", result, "/api/values not configured correctly");
        }

        [TestMethod]
        public async Task TestODataPeople()
        {
            HttpResponseMessage response = await server.CreateRequest("/odata/People").GetAsync();

            var result = await response.Content.ReadAsAsync<ODataResponse<Person>>();

            // MSTest expects the expected value first, then the actual value.
            Assert.AreEqual(3, result.value.Count, "Expected 3 people to return from /odata/People");
        }
    }

}

The OData service document returned at the service root is deserialized into a POCO (plain old C# object):

namespace SelfHosting.Test.Models
{
    using Newtonsoft.Json;
    using System.Collections.Generic;

    public class ODataMetaData
    {
        // Maps the @odata.context property, whose name is not a valid C# identifier.
        [JsonProperty("@odata.context")]
        public string odatacontext { get; set; }

        public List<Value> value { get; set; }
    }

    public class Value
    {
        public string name { get; set; }
        public string kind { get; set; }
        public string url { get; set; }
    }
}

By using a generic ODataResponse class, we can deserialize our OData response into any POCO:

namespace SelfHosting.Test.Models
{
    using Newtonsoft.Json;
    using System.Collections.Generic;

    public class ODataResponse<T>
        where T : class, new()
    {
        // Maps the @odata.context property, whose name is not a valid C# identifier.
        [JsonProperty("@odata.context")]
        public string odatacontext { get; set; }

        public List<T> value { get; set; }
    }
}
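For reference, this maps the standard OData v4 collection payload. A response from /odata/People looks roughly like the following (the Person fields are made up for illustration):

{
    "@odata.context": "http://localhost/odata/$metadata#People",
    "value": [
        { "Id": 1, "Name": "Alice" },
        { "Id": 2, "Name": "Bob" }
    ]
}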

The beauty of using the TestServer is that it is self-contained and the HTTP server is inaccessible outside of the process. Once the tests complete, the server is shut down. The WebApiConfig registered with the TestServer determines which controllers and routes to load for testing, so no production code needs to change in order to test existing Web API and OData controllers. The only problem I have found is that attribute routes don't seem to register correctly. Perhaps I have not found the correct method of registering attribute routes with the TestServer.
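For reference, the registration the tests above assume would look something along these lines. This is only a sketch, not the actual project's WebApiConfig; the entity set matches the /odata/People test, and your routes will differ:

namespace WebAppFacade
{
    using System.Web.Http;
    using System.Web.OData.Builder;
    using System.Web.OData.Extensions;
    using WebApp.Models;

    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Attribute routes; these are the ones that don't seem to
            // register correctly under the TestServer.
            config.MapHttpAttributeRoutes();

            // Conventional Web API route used by the /api/values test.
            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional });

            // OData v4 route used by the /odata/People test.
            ODataConventionModelBuilder builder = new ODataConventionModelBuilder();
            builder.EntitySet<Person>("People");
            config.MapODataServiceRoute("odata", "odata", builder.GetEdmModel());
        }
    }
}

If you hit the attribute routing issue, calling config.EnsureInitialized() after registration may be worth a try, though I have not confirmed that it resolves it.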

Here is the Visual Studio 2013 solution with both a web project and a unit testing project:

SelfHostingUnitTest.zip (1.39 mb)

QUnit, JSCoverage and Jenkins

24. September 2013 10:09 by Cameron in Continuous Integration, Jenkins, PhantomJS

A few years back, I worked with a colleague on getting JSCoverage integrated with QUnit to generate code coverage reports for our QUnit tests. The process was fairly simple. We learned a ton about how to set up JSCoverage from this blog: http://www.eviltester.com/index.php/2008/06/08/test-driven-javascript-code-coverage-using-jscoverage/ You can either build the sources from JSCoverage's Subversion repository or, if you are running Ubuntu, install JSCoverage with the apt package manager. After obtaining JSCoverage, we were able to generate HTML coverage reports for the unit tests we had written with QUnit, a unit testing framework from the jQuery team.

To generate coverage with JSCoverage, we used the command:

jscoverage [source_dir] [destination_dir]

Note that all files in the source directory will be copied to the destination directory and instrumented. We had to be careful not to instrument jQuery or QUnit itself, so we put our tests in a separate folder.

After that command has run, you can browse to jscoverage.html and run the tests through the browser. JSCoverage also accepts a query string naming the test page to open and instrument (something like jscoverage.html?tests/index.html). JSCoverage provides a summary of all instrumented files in the Summary tab, and for each file in the summary you can view a detailed breakdown of the coverage.

As far as integrating this into Jenkins goes, there are a number of possible options, but it is not critical since JSCoverage provides its own reporting. Integration with Jenkins may be possible in the future, however.

When you are working with JavaScript, it's great to do unit testing, but integrating those tests into an automated system can be tricky. Your code is made to run in a browser, not at the command prompt. Furthermore, you are probably manipulating the DOM and are dependent on HTML pages, so how can you automate something like this without opening up a browser?

Enter PhantomJS and QUnit. 

PhantomJS is a headless browser with a JavaScript API; through its onConsoleMessage callback, it can relay any console.log() call from the page to the command line.

QUnit is a unit testing framework from the jQuery team. It is made to display the results of your tests in a browser window. Fortunately, QUnit provides callbacks that let us hook into the tests and produce output PhantomJS can pass along; in our case, that format is JUnit XML.

I followed someone who was doing something similar, but unfortunately his code was incomplete and broken in places. Niltz's blog was at http://www.niltzdesigns.com, but the site is no longer online.

He did provide a nice .js file that hooked into QUnit's callbacks and stored the test results in a plain JS object. From there, I edited his QUnit result printer to output the correct XML. These files worked great, and they did their job without changing how our QUnit tests displayed when we viewed them manually in a browser.
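To give an idea of what that looks like, the logger and printer boil down to something like the following sketch. QUnitLogger and qunitResultPrinter are the names the driver below expects; this is not Niltz's exact code, and the XML output is simplified:

// Collects QUnit results in a plain JS object the driver can query.
var QUnitLogger = {
    results: [],
    done: false,
    listeners: { done: [] },
    addEventListener: function (event, callback) {
        this.listeners[event].push(callback);
    }
};

// QUnit fires these callbacks as the tests run.
QUnit.testDone(function (details) {
    // details holds roughly { module, name, failed, passed, total, duration }.
    QUnitLogger.results.push(details);
});

QUnit.done(function () {
    QUnitLogger.done = true;
    QUnitLogger.listeners.done.forEach(function (callback) { callback(); });
});

// Prints the collected results as (simplified) JUnit XML.
var qunitResultPrinter = {
    print: function () {
        var xml = '<testsuites>';
        QUnitLogger.results.forEach(function (r) {
            xml += '<testcase name="' + r.name + '" assertions="' + r.total +
                   '" failures="' + r.failed + '" time="' + (r.duration || 0) + '"/>';
        });
        return xml + '</testsuites>';
    }
};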

The real trouble was the driver .js file. Niltz had a driver, but it didn't properly use PhantomJS's interface and broke on execution. I was able to use some of his code as a guide, but I ended up almost completely rewriting the driver:

// Create a WebPage instance; require('webpage') alone returns the module,
// not a page object.
var page = require('webpage').create();

(function() {
    // Exit codes for the different failure modes (TEST_FAILED is reserved
    // for a future failing-tests exit path).
    var PAGE_FAILED = 1;
    var TEST_FAILED = 2;
    var EXCEPTION = 3;

    // Everything console.log()'d inside the page bubbles up here; the
    // sentinel 'phantomexit' message tells the driver the tests are done.
    page.onConsoleMessage = function (msg) {
        if (msg === 'phantomexit') {
            phantom.exit(0);
        } else {
            console.log(msg);
        }
    };

    page.onLoadFinished = function (status) {
        if (status === 'success') {
            page.evaluate(function () {
                var testDone = function () {
                    console.log(qunitResultPrinter.print());
                    console.log('phantomexit');
                };

                // The tests may already be finished by the time the page
                // reports loaded; otherwise, wait for the 'done' event.
                if (QUnitLogger.done) {
                    testDone();
                } else {
                    QUnitLogger.addEventListener('done', testDone);
                }
            });
        } else {
            console.log('Page failed to load: ' + phantom.args[0]);
            phantom.exit(PAGE_FAILED);
        }
    };

    try {
        if (phantom.args.length !== 1) {
            console.log('Usage: phantomjs-driver.js URL');
            phantom.exit();
        } else {
            phantom.state = 'phantomjs-driver-running';
            page.open(phantom.args[0]);
        }
    } catch (e) {
        console.log('Unhandled exception: ' + e);
        phantom.exit(EXCEPTION);
    }
})();

The tricky part here is the page.evaluate() function. It lets me execute code within the context of the loaded page, but that code runs in a completely separate scope, unaware of the variable 'status', for example. The only ways to communicate with the code inside this function are a return value or console.log(); PhantomJS provides a callback, page.onConsoleMessage(), that fires for any console.log() made inside page.evaluate().

Using this, we were able to fully integrate with QUnit and output the results of the tests in XML format using the command:

./phantomjs phantomjs-qunit-driver.js ../test/index.html

An example of output is:

<testsuites><testsuite name="utilities" assertions="32" failures="0" time="38"><testcase name="setupValItems and cleanupValItems" assertions="5" time="20">....

From there, integration into Jenkins is fairly trivial: point the "Publish JUnit test result report" post-build action at the generated XML file.

Attached is the modified QUnit PhantomJS driver. I have left the tests out since your unit tests will be different.

qunit.zip (3.37 mb)
