Gamer Footprint Update and E1337 Entertainment Merger

Hey guys! I know it's been about a month since the last update, but things have been pretty busy lately. I am in the process of merging Gamer Footprint with E1337 Entertainment in order to bring tournaments to the Gamer Footprint community and statistics tracking to the E1337 Entertainment community. We are very excited to be joining forces! Over the next couple of months, we will be supporting tournament brackets, an interactive calendar for events, and statistics tracking for tournaments as part of the merger.

I managed to get automated builds working with TeamCity for both the development and production sites for Gamer Footprint. This was a fun task to finish. I set up a build for each environment that pulls the latest source from Bitbucket and walks through the build steps: restoring NuGet packages, building the source, and minifying the JavaScript (production only). Within a minute or two of pushing changes to Bitbucket, they are live on their respective environments.
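For reference, the build steps amount to roughly the following command-line sequence (a sketch only; the solution name, script paths, and the AjaxMin minifier are placeholders for whatever your project actually uses):

git pull origin master
nuget restore GamerFootprint.sln
msbuild GamerFootprint.sln /p:Configuration=Release
ajaxmin scripts/app.js -out scripts/app.min.js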

Since the last update, I have implemented a few new features and fixed a couple of bugs:

I finished the first iteration of timelines on both the global and personal scale. For now, the timelines are generated manually; I am still working on incremental updates that run automatically and expect to have that finished in the next couple of weeks. Both timelines include games played on PlayStation Network and Xbox Live and the trophies or achievements earned on each network. I'm also playing with the idea of subscribing to or following other users so that your feed shows updates only from them rather than timeline events from everyone on Gamer Footprint, which could grow quite large over time. As development progresses, I will add other events to the timeline, including games and achievements from Steam and Battle.net, and potentially statistics from other games such as Halo 4.

I've been experimenting with setting up push notifications via SignalR to send near-realtime updates of gamers' online presence on Xbox Live and PlayStation Network. This is still in active development and is not considered finished; I will look at adding presence information from other networks such as Steam in the near future. Through push notifications, we can also send messages to all actively connected users on the website or to specific users. Messages can include maintenance notifications and estimated downtime, notifications when your friends come online on a specific network, achievement/trophy unlocks, and much more.
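As a rough illustration of how the client side might look with SignalR's JavaScript API (the hub name presenceHub and its methods are hypothetical; the real hub is still in flux):

// Assumes the page includes jQuery, jquery.signalR, and the /signalr/hubs proxy script.
var presence = $.connection.presenceHub;

// Called by the server whenever a friend's presence changes.
presence.client.friendPresenceChanged = function (gamertag, network, isOnline) {
    console.log(gamertag + ' is now ' + (isOnline ? 'online' : 'offline') + ' on ' + network);
};

$.connection.hub.start().done(function () {
    // Ask the server to start streaming presence updates for this user.
    presence.server.subscribe('my-gamertag');
});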

The goal for Gamer Footprint and E1337 Entertainment is to provide a place for gamers to meet, collaborate, and participate in tournaments from around the world. Without Gamer Footprint, each gaming community is separated by platform or console. We want to bridge that gap and promote organic connections with players from a plethora of gaming backgrounds.

Please stay tuned for the latest updates on both Gamer Footprint and E1337 Entertainment. The name/branding from the merger is still being decided, but we will keep you updated with the news.

QUnit, JSCoverage and Jenkins

24. September 2013 10:09 by Cameron in Continuous Integration, Jenkins, PhantomJS

A few years back, I worked with a colleague on integrating JSCoverage with QUnit to generate code coverage reports for our QUnit tests. The process was fairly simple. We learned a ton about how to set up JSCoverage from this blog: http://www.eviltester.com/index.php/2008/06/08/test-driven-javascript-code-coverage-using-jscoverage/. You can either build the sources from JSCoverage's Subversion repository or, if you are running Ubuntu, install JSCoverage with the apt package manager. After obtaining JSCoverage, we were able to generate HTML coverage reports for the unit tests we had written with QUnit, the unit testing framework from the jQuery team.

To generate coverage with JSCoverage, we used the command:

jscoverage [source_dir] [destination_dir]

Note that all files in the source directory are copied to the destination directory and instrumented. We had to be careful not to instrument jQuery or QUnit itself, so we put our tests in a separate folder.

Once that command has run, you can browse to jscoverage.html and run the tests through the browser. JSCoverage's jscoverage.html also accepts a query string naming the test page to load, which makes it easy to jump straight to a given instrumented test page. JSCoverage provides a summary for all instrumented files in the Summary tab, and for each file in the summary you can view a detailed breakdown of the coverage.
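For example, if your instrumented tests live at tests/index.html under the destination directory, you can open them directly with a URL along these lines (the path is illustrative):

jscoverage.html?tests/index.html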

As far as integrating this into Jenkins goes, there are a number of possible options, but it isn't critical since JSCoverage provides its own reporting. We may revisit Jenkins integration in the future.

When you are working with JavaScript it's great to do unit testing, but integrating these tests into an automated system can be tricky. Your code is made to run in a browser, not at the command prompt. Furthermore, you are probably manipulating the DOM and are dependent on HTML pages, so how could you automate something like this without opening up a browser?

Enter PhantomJS and QUnit. 

PhantomJS is a headless browser with a JavaScript API that will print any console.log() call to the command line.

QUnit is a unit testing framework from the jQuery team. It is made to display the results of your tests in a browser window. Fortunately, QUnit provides callbacks that allow us to hook into the tests and produce output PhantomJS can consume; in our case, that output is JUnit XML.
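The idea, in miniature, looks something like this (a sketch of the approach, not the attached QUnitLogger file itself):

var QUnitLogger = { done: false, results: [] };

// QUnit.log fires once per assertion; details carries result, message, module, and name.
QUnit.log(function (details) {
    QUnitLogger.results.push(details);
});

// QUnit.done fires when the whole run finishes; summary has passed/failed/total/runtime.
QUnit.done(function (summary) {
    QUnitLogger.summary = summary;
    QUnitLogger.done = true;
});

A result printer can then walk QUnitLogger.results and emit one testcase element per test.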

I followed someone (Niltz) who was doing something similar, but unfortunately his code was incomplete and broken in places. His blog was at http://www.niltzdesigns.com, though the site is no longer online.

He did provide a handy .js file that hooked into QUnit's callbacks and stored the test results in a JS object. From there, I edited his QUnit result printer to output the correct XML. These files worked great, and they did their job without changing how our QUnit tests displayed when we viewed them manually in a browser.

The real trouble was the driver .js file. Niltz had a driver, but it didn't properly use PhantomJS' interface and broke on execution. I was able to use some of his code as a guide, but I had to almost completely rewrite the driver file myself.

// Exit codes for the different failure modes. TEST_FAILED is declared for
// completeness but unused here; test failures are reported in the XML itself.
var PAGE_FAILED = 1;
var TEST_FAILED = 2;
var EXCEPTION = 3;

var page = require('webpage').create();

(function() {
    // Relay console output from the page to stdout. 'phantomexit' is a
    // sentinel message telling the driver that the test run is finished.
    page.onConsoleMessage = function (msg) {
        if (msg === 'phantomexit') {
            phantom.exit(0);
            return;
        }
        console.log(msg);
    };

    page.onLoadFinished = function (status) {
        if (status === 'success') {
            page.evaluate(function () {
                // QUnitLogger and qunitResultPrinter come from the helper
                // scripts included by the test page (see attached zip).
                var testDone = function () {
                    console.log(qunitResultPrinter.print()); // the JUnit XML
                    console.log('phantomexit');
                };

                if (QUnitLogger.done) {
                    testDone();
                } else {
                    QUnitLogger.addEventListener('done', testDone);
                }
            });
        } else {
            console.log('Page failed to load: ' + phantom.args[0]);
            phantom.exit(PAGE_FAILED);
        }
    };

    try {
        if (phantom.args.length !== 1) {
            console.log('Usage: phantomjs-qunit-driver.js URL');
            phantom.exit(0);
        } else {
            phantom.state = 'phantomjs-driver-running';
            page.open(phantom.args[0]);
        }
    } catch (e) {
        console.log('Unhandled exception: ' + e);
        phantom.exit(EXCEPTION);
    }
})();

The tricky part here is the page.evaluate() function. It lets me execute code within the context of the loaded page, but that code runs in a completely different scope, unaware of the variable 'status', for example. The only ways to communicate with the code inside this function are a return value or a console.log(); PhantomJS provides the page.onConsoleMessage() callback, which fires for any console.log() made inside page.evaluate().
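A tiny illustration of that boundary:

// 'status' and other driver variables are invisible inside evaluate();
// only a serializable return value or console.log() can cross back out.
var title = page.evaluate(function () {
    return document.title; // runs inside the loaded page
});
console.log('Loaded page title: ' + title);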

Using this, we were able to fully integrate with QUnit and output the results of our tests in XML format using the command:

./phantomjs phantomjs-qunit-driver.js ../test/index.html

An example of output is:

<testsuites><testsuite name="utilities" assertions="32" failures="0" time="38"><testcase name="setupValItems and cleanupValItems" assertions="5" time="20">....

From there, integration into Jenkins is fairly trivial.
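One straightforward approach is to redirect the driver's output to a file and point Jenkins at it, roughly like so (the paths are illustrative, and this assumes the test page writes nothing else to the console):

./phantomjs phantomjs-qunit-driver.js ../test/index.html > reports/qunit-results.xml

Jenkins' built-in "Publish JUnit test result report" post-build action can then pick up reports/*.xml and chart the results per build.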

Attached is the modified QUnit PhantomJS driver. I have left the tests out since your unit tests will be different.

qunit.zip (3.37 mb)

Automatic builds in Jenkins from Git

11. August 2011 03:05 by Cameron in Continuous Integration, Git, Indefero, Jenkins

Today I discovered how to run automated builds from Git post-receive hooks. Git provides hooks that fire at various stages of the commit/push cycle. A full list of Git hooks can be found here: http://www.kernel.org/pub/software/scm/git/docs/githooks.html

I found a very nice Ruby script that does the trick of triggering an automatic build in Jenkins here: http://lostechies.com/jasonmeridth/2009/03/24/adding-a-git-post-receive-hook-to-fire-off-hudson-ci-server/

Here is the script:

#!/usr/bin/env ruby
#
# Git post-receive hook: stdin receives one "old-rev new-rev ref" line
# per pushed ref; kick off a Hudson/Jenkins build on pushes to master.
STDIN.each_line do |line|
   rev_old, rev_new, ref = line.split(" ")
   if ref == "refs/heads/master"

       url = "http://yourhudsondomain.com/job/job_name_here/build?delay=0sec"

       puts "Run Hudson build for job_name_here application"
       `wget #{url} > /dev/null 2>&1`
   end
end

I'm sure you could write a bash script to do the same thing if you wanted to, but the original author preferred to use Ruby.
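For the curious, that bash version might look like this (an untested sketch, with the same placeholder URL and job name as above):

#!/bin/bash
# Git post-receive hook: read one "old-rev new-rev ref" line per pushed ref.
while read rev_old rev_new ref; do
    if [ "$ref" = "refs/heads/master" ]; then
        echo "Run Hudson build for job_name_here application"
        wget "http://yourhudsondomain.com/job/job_name_here/build?delay=0sec" > /dev/null 2>&1
    fi
done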

I'm glad that automatic builds finally work. I struggled with this issue for quite some time because I was looking in the wrong place. The web interface that I use for Git, Indefero, has a place for post-commit hook web URLs, but post-commit hooks don't behave the same in Git as they do in Subversion. I didn't want to trigger builds on each local commit but rather when someone pushes their commits to the server. Also, if you had SCM polling enabled for your job, you no longer need it once post-receive hooks are configured.

My thoughts on continuous integration

8. August 2011 17:29 by Cameron in Continuous Integration

Whether the choice is SVN, Git, or another version control system, I believe it is vital for software development groups to have a central source repository. My preferred version control system is Git, as it provides huge improvements over Subversion and lets you work with a local repository without affecting the remote one. With each commit or push to a team's central repository, it is important to check that the latest changes don't break the main build. This is where continuous integration comes into play.

A continuous integration server attempts to build the source committed or pushed to the central repository, and if the build passes, it can integrate the changes into the main development branch. Some continuous integration servers will even push the code to production once it passes a series of unit tests and builds cleanly with the rest of the main development branch. If the build fails, however, the code is not integrated, and the failed build is logged for review. This logging helps immensely with finding and fixing bugs quickly so the main branch can accept the original author's changes, and it saves developers the time and frustration of sifting through thousands of lines of code. Why should anyone manually hunt for a bug if a computer can analyze the source code and find it for you?

With various continuous integration servers, project maintainers can view statistics such as commit/build success rate and code redundancy. In general practice, developers should never just copy and paste code. This is not coding; it's laziness. Usually, if you are copying and pasting code, you can refactor so there is only one copy of the logic you were about to duplicate. There's no point having duplicate code in a project if you can help it. One reason copy-and-paste is a bad idea is that the code base becomes harder to maintain. Another is that copied code doesn't necessarily work everywhere you paste it; just because it works in one place doesn't mean it will work as expected in another.

With all of the benefits of continuous integration, I believe that every software development team should have some sort of continuous integration to track their projects, whether they are an open source shop or a Microsoft shop. Continuous integration really promotes the agile software development cycle, and everyone should enjoy the advantages it provides. I can definitely say that for all collaborative personal software projects I work on, I will make sure continuous integration is a key part of the development process.
