Hackintosh Computer Build

23. August 2011 16:27 by Cameron in Hackintosh

Last summer I built my first gaming rig with a somewhat substantial budget of about $1400. This was not my first computer build, but it was the first build I had spent my own money on. My goal was to build a powerful machine whose hardware was compatible with Mac OS X Snow Leopard and later. Initially I had some issues with my graphics card, as the Fermi line wasn't supported by Apple until about halfway into Snow Leopard's life. In addition to OS X, I also boot Windows 7 x64 Enterprise and 64-bit Ubuntu 11.04 (Natty Narwhal). As a developer, I enjoy using different operating systems to code for various projects.

Here's my original build from last summer:
Mac OS X version 10.6 Snow Leopard
Intel Core i7-875K 2.93GHz quad-core CPU
MSI P55-GD65 motherboard
ADATA XPG Gaming Series 4GB (2 x 2GB) 240-pin DDR3 1600 SDRAM (x 2)
nVidia GeForce GTX 470 1280MB PCI Express video card: http://www.amazon.com/nVidia-GeForce-1280-PCI-Express-Video/dp/B003EM68MK (I originally bought this from Newegg, but it has since been discontinued there)
1TB Samsung Spinpoint F3
Seagate Momentus 5400.6 ST9500325AS 500GB 5400 RPM 8MB cache 2.5" SATA 3.0Gb/s internal notebook hard drive
80GB ExcelStor (already owned)
60GB Hitachi (already owned)
650W Thermaltake PSU
Samsung Blu-ray combo drive: http://www.newegg.com/Product/Product.aspx?Item=N82E16827151199 (this drive has also been discontinued on Newegg)
Cooler Master Hyper 212 Plus CPU cooler
Antec 200 mid tower case

This summer, I made a few upgrades:
850W Thermaltake PSU (bought in a CompUSA/TigerDirect store)
2 x OCZ 60GB SSDs
Mac OS X Lion (purchased from the Mac App Store)

Installing Snow Leopard
I used iBoot Supported (10.3 kernel) to boot the retail Snow Leopard 10.6.0 installer and MultiBeast to install the DSDT. I installed the ALC889 kext and the legacy AppleHDA from MultiBeast to get audio. I had to use my 9500 GT to install Snow Leopard because my GTX 470 caused kernel panics without the proper enabler installed. After the install, I added the modified Fermi-enabled Chameleon bootloader to support my GTX 470, and the JMicron 36xxx kext from kexts.com for PATA support.

Installing Lion

Once Lion was released, I bought it from the Mac App Store and used tonymacx86's xMove to create a USB installer. I did a clean install of Lion on one of my OCZ SSDs: I no longer needed my Snow Leopard installation, I wanted to start fresh, and I wanted the performance boost of my new SSD. My GTX 470 also works without a hitch in Lion, as Apple has supported Fermi cards since about Snow Leopard 10.6.4.

Here are some older photos of my computer when I first built the system. I'll upload some newer photos soon.

Gitting started with Git

15. August 2011 00:25 by Cameron in Git

Git, created by Linus Torvalds, is a very high quality version control system. It was created to manage the source tree of the Linux kernel: Torvalds didn't believe that pre-existing version control systems could do justice to the kernel's source code, given its massive size and number of collaborators, so he wrote his own. If you are using another version control system for your projects, consider reading this: http://whygitisbetterthanx.com/

This website explains in full the advantages of Git over the other version control systems available.

Git is free and open source and is available for all platforms: Linux, Mac, Windows, Solaris, you name it.

First, be sure to install Git for your platform; then you can start playing around with different commands. Here are a few references to get you started:



Setting Up Git

In order to set up your environment for using a remote Git repository, be sure to run these commands:

$ ssh-keygen -t rsa -C "youremail@site.com"

This command creates a public/private key pair for SSH, which Git uses to secure connections to remote servers. When asked where to save the key, press Enter to accept the default. Then, when asked for a passphrase, leave it empty (or set one if you want the extra security). Your screen should look like this:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/cameron/.ssh/id_rsa): 
Created directory '/home/cameron/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/cameron/.ssh/id_rsa.
Your public key has been saved in /home/cameron/.ssh/id_rsa.pub.

After your public/private key pair has been set up, add your global user information:

$ git config --global user.name "Firstname Lastname"

$ git config --global user.email "your_email@youremail.com"

Now you are ready to clone a repository. If you run:

$ git clone git@git.tinksoft.net:test.git

a new directory named test will be created, and all of the files in the remote repository will be downloaded into it.

Git Command Basics

A few common Git operations are cloning repositories, committing, pushing, and pulling. If you've worked with Subversion before, "git clone" is like Subversion checkout: it literally clones the remote repository in its current state into a local repository. However, "git commit" is not like Subversion commit. When you commit in Git, you are only committing to your local repository; nothing reaches the remote until you push. "git push" is the closer analogue of Subversion commit and sends your changes to the remote repository. On the first push, you need to run "git push origin <branch name>", which tells Git to push the named branch to the remote called origin. After that first push, you can simply run "git push". If you switch branches later on, run the full command once for the new branch as well. Similarly, "git pull" behaves like Subversion update and pulls changes from the remote repository into your local clone. The same applies to the first "git pull" as to the first "git push": Git needs to know which branch to pull from.
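As a concrete sketch of that first-push cycle (all paths and names here are made up, and a local bare repository stands in for the remote server):

```shell
# Minimal local sketch of the clone/commit/push cycle described above.
rm -rf /tmp/git-sketch && mkdir -p /tmp/git-sketch && cd /tmp/git-sketch
git init -q --bare remote.git                # stand-in for the remote repository
git clone -q /tmp/git-sketch/remote.git work # "git clone" copies the whole repo
cd work
git config user.name "Firstname Lastname"
git config user.email "your_email@youremail.com"
echo "hello" > README
git add README
git commit -q -m "Initial commit"            # commits to the LOCAL repository only
branch=$(git rev-parse --abbrev-ref HEAD)    # master or main, depending on git version
git push -q origin "$branch"                 # first push names the branch explicitly
git pull -q origin "$branch"                 # likewise for the first pull
```

Note that nothing appears in remote.git until the push; the commit alone only updated the local clone.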

One thing about pushing and pulling: if you are working in a team and multiple people are pushing to the remote repository, you may be required to pull before you can push out your changes. Don't worry, though. If your changes conflict, your code will not be overwritten; Git has a conflict resolution mechanism that lets you choose which changes to accept. It is also good practice to always run "git status" before committing and pushing, so you can confirm that you are committing only files that should be in the repository. Remember that every commit in your local log will be sent to the remote repository when you push, so be sure to only push working code and not break the build for your team.

A few advanced commands include "git branch <branch name>" (branches the repository at its current state), "git merge <branch name>" (merges the named branch into the current branch), and "git checkout <branch name>" (changes the current working branch). Please be sure to read up on these commands so that you know how to use them correctly: in a project repository, you don't want to create unnecessary branches, merge branches incorrectly, or lose changes when switching branches. Another useful feature is the .gitignore file, which lists the files and patterns, one per line, that Git should ignore. This can be helpful if you don't want files such as database configurations to be pushed to your remote repository.
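For example, here is a throwaway session exercising branch, checkout, and merge, and a .gitignore keeping a (hypothetical) database config out of the repository:

```shell
# Branch, switch, merge, and ignore, in a scratch repository.
rm -rf /tmp/branch-sketch && mkdir -p /tmp/branch-sketch && cd /tmp/branch-sketch
git init -q demo && cd demo
git config user.name "Example"
git config user.email "example@example.com"
echo "base" > file.txt
git add file.txt && git commit -q -m "base commit"
git branch feature                  # branch the repository at its current state
git checkout -q feature             # change the current working branch
echo "feature work" >> file.txt
git commit -q -am "feature change"
git checkout -q -                   # switch back to the previous (main) branch
git merge -q feature                # merge the feature branch in (fast-forward here)
printf "config/database.yml\n*.log\n" > .gitignore   # one pattern per line
git add .gitignore && git commit -q -m "ignore local config"
```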

For more information about git, be sure to read the references I listed above and also check out some books on git for a more in depth discussion.

Automatic builds in Jenkins from Git

11. August 2011 03:05 by Cameron in Continuous Integration, Git, Indefero, Jenkins

Today I discovered how to run automated builds from Git post-receive hooks. Git has different hooks that you can trigger at various stages in the commit/push cycle. A full list of git hooks can be found here: http://www.kernel.org/pub/software/scm/git/docs/githooks.html

I found a very nice ruby script that does the trick of triggering an automatic build in Jenkins here: http://lostechies.com/jasonmeridth/2009/03/24/adding-a-git-post-receive-hook-to-fire-off-hudson-ci-server/

Here is the script:

#!/usr/bin/env ruby
# Trigger a Jenkins/Hudson build when master is updated.
url = "http://your-jenkins-server/job/job_name_here/build" # placeholder URL
while (input = STDIN.gets)
    rev_old, rev_new, ref = input.split(" ")
    if ref == "refs/heads/master"
        puts "Run Hudson build for job_name_here application"
        `wget #{url} > /dev/null 2>&1`
    end
end

I'm sure you could write a bash script to do the same thing if you wanted to, but the original author preferred to use Ruby.
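For instance, a rough shell equivalent might look like this (the Jenkins URL is a placeholder, and the wget call is left commented out since there is no server to hit in this sketch):

```shell
# Hypothetical shell version of the Ruby post-receive hook above.
cat > /tmp/post-receive <<'EOF'
#!/bin/sh
url="http://localhost:8080/job/job_name_here/build"   # placeholder Jenkins URL
# git feeds the hook one "old-sha new-sha refname" line per updated ref on stdin
while read rev_old rev_new ref; do
    if [ "$ref" = "refs/heads/master" ]; then
        echo "Run Hudson build for job_name_here application"
        # wget "$url" > /dev/null 2>&1   # uncomment to actually trigger the build
    fi
done
EOF
chmod +x /tmp/post-receive
# Simulate git invoking the hook with one pushed ref:
printf "oldsha newsha refs/heads/master\n" | /tmp/post-receive
```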

I'm glad that automatic builds finally work; I struggled with this issue for quite some time because I was looking in the wrong place. The web interface that I use for Git, Indefero, has a field for post-commit hook web URLs, but post-commit hooks don't behave the same way in Git as they do in Subversion. I didn't want to trigger builds on every local commit, but rather when someone pushes their commits to the server. Also, if you had SCM polling enabled for your job, you no longer need it once post-receive hooks are configured.

Separation of Concerns

9. August 2011 17:57 by Cameron in Programming

A good programmer makes sure to provide proper separation of concerns while coding applications. This makes maintaining the application's source code much more manageable, and it prevents the source from collapsing into one large function that does everything. Back in the days before object oriented, and even procedural, programming, it was difficult to separate the functionality of one part of an application from another.

With procedural programming in a language like BASIC, many of you might remember the GOTO statement, quite possibly the worst programming language mechanism ever conceived. GOTO statements made application maintenance quite a challenge; people should never have to manage program flow manually. They behave essentially like a JUMP instruction in assembly, with the difference that in assembly such constructs are required, as there is no other way to control program flow.

In languages such as C, or later versions of Microsoft QuickBasic and QBASIC, the language provides the ability to call functions from a main function, a huge improvement in programming history. This made it possible to separate business logic from database/filesystem logic, and thus was the beginning of better code.

With the continuing popularity of object oriented programming, separation of concerns improved dramatically beyond what procedural programming had achieved. Programmers can separate an application's functions into objects that represent its various parts. For instance, in a user authentication system, one might create a user class that is then instantiated and passed to the user data access object, the object that handles all of the low-level database interactions.
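As a small sketch of that idea (all names here are invented for illustration), the user class holds only domain data, while the data access object owns every storage concern:

```ruby
# Hypothetical sketch: the User class knows nothing about storage.
class User
  attr_reader :name, :password_hash

  def initialize(name, password_hash)
    @name = name
    @password_hash = password_hash
  end
end

# The data access object handles all low-level persistence.
# A real implementation would talk to a database; a Hash stands in here.
class UserDao
  def initialize
    @store = {}
  end

  def save(user)
    @store[user.name] = user
  end

  def find(name)
    @store[name]
  end
end

dao = UserDao.new
dao.save(User.new("cameron", "5f4dcc3b"))
puts dao.find("cameron").password_hash
```

Because the database logic lives entirely in UserDao, swapping the Hash for a real database would not touch User or any code that uses it.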

Using object oriented design, applications are clearly divided into objects that each serve their own purpose while contributing to the same end goal: a finished product. While better approaches than object oriented design may become evident in the future, it is clearly one of the best ways of modelling the real world in a virtual environment. People think in terms of tangible items and enjoy representing application parts with objects. It will be interesting to see how the industry develops over the next 10 years and how design paradigms shift.

My thoughts on continuous integration

8. August 2011 17:29 by Cameron in Continuous Integration

Whether the choice is SVN, Git, or another version control system, I believe it is vital for software development teams to have a central source repository. My preferred version control system is Git, as it provides huge improvements over Subversion and lets you work with a local repository without affecting the remote one. With each commit or push to a team's central repository, it is important to check that the latest changes don't break the main build. This is where continuous integration comes into play.

Continuous integration servers can attempt to build the source committed/pushed to the central repository and, if the build passes, integrate the changes into the main development branch. Some continuous integration servers will even push the code to production once it passes a series of unit tests and builds with the rest of the main branch. If the build fails, however, the code is not integrated, and the failed build is logged for review. This logging helps immensely with finding and fixing bugs quickly, so that the main branch can eventually accept the original author's changes, and it saves developers the time and frustration of sifting through thousands of lines of code. Why should anyone manually hunt for a software bug if a computer can analyze the source code and find it for you?

With various continuous integration servers, project maintainers can view statistics such as commit/build success rate and code redundancy. In general practice, developers should never just copy and paste code. This is not coding; it's laziness. Usually, if you are copying and pasting code, you can refactor it so that only one copy of the logic exists. There's no point having duplicate code in a project if you can help it: the code base becomes harder to maintain, and copied code doesn't necessarily work everywhere you paste it. Just because it works in one place doesn't mean it will work as expected in another.
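A tiny illustration of that refactoring idea (the method names are invented for the example): rather than pasting the same string-building logic into both report methods, extract it once and call it from both places:

```ruby
# Hypothetical example: duplicated formatting logic extracted into one method.
# Before the refactor, both methods below would repeat the same string building.
def format_entry(name, amount)
  "#{name}: $#{'%.2f' % amount}"
end

def daily_report(entries)
  entries.map { |name, amount| format_entry(name, amount) }
end

def summary_line(name, amount)
  "TOTAL " + format_entry(name, amount)
end

puts daily_report([["coffee", 3.5]])
puts summary_line("all items", 3.5)
```

Now a change to the format (say, a different currency symbol) happens in exactly one place instead of two.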

With all of these benefits, I believe that every software development team, whether an open source shop or a Microsoft shop, should have some sort of continuous integration tracking their projects. Continuous integration really promotes the agile software development cycle, and everyone should enjoy the advantages it provides. I can definitely say that in any collaborative personal software project I work on, I will make sure continuous integration is a key part of the development process.

