Demystifying The Code

The SOLID Principles Video Series

I have just released the final video in my 5-part series on the SOLID Principles of software development. The videos are free to watch; you simply have to create an account with your email address and a password. That is all.

SOLID is an acronym coined by Robert C. Martin (Uncle Bob). SOLID stands for:

  • Single Responsibility Principle
  • Open Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

Each of these principles provides developers with guidance that will allow them to develop more maintainable software. While some of these principles are seemingly universally understood (SRP, for instance), others are not. The goal of this series is to explain each principle in layman’s terms and clearly outline the guidance each provides. The videos provide real-world(ish) demos to hammer home the points.

The following is an extremely high-level summary of what you will learn in each video:

Single Responsibility Principle

This video discusses what "responsibilities" are, providing concrete examples of classes that contain multiple responsibilities. It further discusses the dangers of violating the SRP from the following perspectives: 1) Coupling, 2) Simplicity, 3) Maintainability, 4) Testability and 5) The ability to respond quickly to change.

A tightly-coupled File Parser is used in the demos to illustrate violating the SRP and the challenges therein. A refactored version of the File Parser is used to illustrate adhering to the SRP. The session ends with discussing the benefits of adherence to this principle, as well as outlining some challenges that still exist that are addressed by other principles.
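The demos in the video are in C#, but the shape of the refactoring can be sketched in a few lines of Python. The class and method names here are illustrative, not the ones used in the session:

```python
class FileReader:
    """One responsibility: reading raw lines from a file-like source."""
    def read_lines(self, source):
        return [line.strip() for line in source if line.strip()]


class RecordParser:
    """One responsibility: turning a single raw line into a structured record."""
    def parse(self, line):
        name, value = line.split(",")
        return {"name": name, "value": float(value)}


class FileParser:
    """Coordinates the two collaborators; it contains no reading or parsing
    logic itself, so it has only one reason to change."""
    def __init__(self, reader, parser):
        self.reader = reader
        self.parser = parser

    def parse_file(self, source):
        return [self.parser.parse(line) for line in self.reader.read_lines(source)]
```

Because each class has a single responsibility, each can be tested in isolation, which is exactly the testability benefit the session highlights.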

Open Closed Principle

This video clearly articulates what it means to be "Open for Extension" and "Closed for Modification". It discusses how, by programming to abstractions, we can prepare for predicted variations in our code. By predicted variation, I mean points in your code where you know change will occur. The session further discusses strategies for ensuring that you don’t over-engineer your solution and break other principles like YAGNI or KISS.

The session then discusses how the Strategy Pattern and Template Method Pattern can be used to help adhere to this principle. The File Parser from the SRP session is used to illustrate how a solution that adheres to SRP, but not OCP, can be refactored. The session clearly outlines the benefits of adhering to OCP from maintainability and testability standpoints.
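As a rough sketch of the Strategy Pattern approach (in Python rather than the C# used in the videos, with invented names): the parser depends on an abstraction, so supporting a new file format means adding a class rather than modifying existing code.

```python
from abc import ABC, abstractmethod


class LineParser(ABC):
    """The abstraction the file parser is programmed against."""
    @abstractmethod
    def parse(self, line):
        ...


class CsvLineParser(LineParser):
    def parse(self, line):
        return line.split(",")


class PipeLineParser(LineParser):
    """A predicted variation: a new format arrives as a new strategy."""
    def parse(self, line):
        return line.split("|")


class FileParser:
    """Closed for modification: nothing here changes when a format is added."""
    def __init__(self, line_parser):
        self.line_parser = line_parser

    def parse_lines(self, lines):
        return [self.line_parser.parse(line) for line in lines]
```
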

Liskov Substitution Principle

This video illustrates that when we program to abstractions (as the OCP directs us to), we rely on subtypes being substitutable for their base types. Unfortunately, the rules of subtyping alone are not strict enough to guarantee this substitutability. A more stringent set of rules is required, or the code that is programmed against the abstractions can (and will) fail.

LSP provides these stricter rules. The session discusses the 4 areas in which the LSP provides guidance: 1) Preconditions and Postconditions, 2) Covariance and Contravariance, 3) The History constraint and 4) Exceptions. Demonstrations are used to illustrate violations and dangers therein. We illustrate that violations often result in:

  • Code that compiles but misbehaves
  • Runtime errors
  • Runtime type checking
  • Violations of the OCP
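The classic Rectangle/Square pairing (not necessarily the demo used in the video) shows the failure mode in miniature: the code compiles, but a history-constraint violation in the subtype breaks code written against the base type.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def set_width(self, width):
        self.width = width

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    """Preserves its own invariant, but violates the history constraint:
    set_width now silently changes height as well."""
    def __init__(self, side):
        super().__init__(side, side)

    def set_width(self, width):
        self.width = width
        self.height = width


def stretch(rect):
    """Written against the base type; assumes height is left untouched."""
    rect.set_width(5)
    return rect.area()
```

Here `stretch(Rectangle(2, 10))` returns 50 as expected, but `stretch(Square(10))` returns 25, not 50. Callers end up adding runtime type checks to cope, which in turn violates the OCP.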

Interface Segregation Principle

This video discusses the fact that some classes that adhere to the SRP still have non-cohesive interfaces. The ISP provides us guidance in these cases: it directs us to create client-specific interfaces. The video discusses the ISP from the perspective of both the implementers of an interface and its consumers.

The session discusses the challenges of violating the ISP from the following perspectives:

  • Recompile / Redeployment
  • Coupling
  • Maintenance
  • Testability

A demo is used to illustrate the disadvantages of exposing a fat, non-cohesive interface to clients. The demo then illustrates extracting cohesive, client-centric interfaces and points out the advantages.
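A minimal sketch of that refactoring (in Python; the session's demo is in C#, and these printer-style names are invented for illustration): the fat interface forces every implementer to depend on members it doesn't need, while client-specific interfaces do not.

```python
from abc import ABC, abstractmethod


class FatMachine(ABC):
    """A fat, non-cohesive interface: every implementer must provide all three."""
    @abstractmethod
    def print_doc(self, doc): ...
    @abstractmethod
    def scan_doc(self): ...
    @abstractmethod
    def fax_doc(self, doc): ...


class Printer(ABC):
    """A cohesive, client-specific interface."""
    @abstractmethod
    def print_doc(self, doc): ...


class SimplePrinter(Printer):
    """Implements only what its clients actually consume."""
    def print_doc(self, doc):
        return f"printed: {doc}"
```

Clients that only print now depend on `Printer` alone, so changes to scanning or faxing no longer force them to recompile or redeploy.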

Dependency Inversion Principle

This video discusses the 2 distinct pieces of guidance provided by the DIP. The first is that high-level policy modules should not depend upon low-level modules; rather, the high-level modules should depend upon abstractions that the low-level modules implement. The second is guidance on who owns the interface: the high-level, policy modules.

The session uses a demo to illustrate a discount calculator that does not adhere to either piece of guidance. The demo first refactors the solution to program to abstractions. It then illustrates that significant challenges remain even though we are programming to interfaces. These challenges stem from improper ownership of the interfaces. The code is then refactored to adhere to the second piece of guidance.

Throughout the session, the following areas are covered when discussing the challenges of violating and the benefits of adhering to the DIP:

  • Tight Coupling
    • Class-level
    • Library-level
  • Reusability
  • Sensitivity to changes in low-level modules
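In the spirit of that demo (sketched here in Python with invented names, not the session's actual code), the high-level policy module owns the abstraction and the low-level detail implements it:

```python
from abc import ABC, abstractmethod


# --- High-level policy module: owns the abstraction it depends on ---
class DiscountRule(ABC):
    @abstractmethod
    def discount_for(self, order_total):
        ...


class DiscountCalculator:
    """Depends only on the DiscountRule abstraction, never on details."""
    def __init__(self, rules):
        self.rules = rules

    def total_after_discount(self, order_total):
        discount = sum(rule.discount_for(order_total) for rule in self.rules)
        return order_total - discount


# --- Low-level detail module: implements the policy-owned abstraction ---
class BulkOrderRule(DiscountRule):
    def discount_for(self, order_total):
        return order_total * 0.10 if order_total >= 100 else 0
```

The dependency arrow now points from the detail (`BulkOrderRule`) up to the policy's abstraction, so changes in low-level modules no longer ripple into the calculator.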


I spent a lot of time trying to provide concrete examples of the benefits of understanding and adhering to these principles. I hope this is of some help to you, as Uncle Bob’s guidance was to me. Remember, the videos are free to watch!

My Experience with EventDay

As many of you know, I am hosting a developer event in March.  After I agreed to host the event came the scramble.  I needed to find a venue and build a website to promote it and handle registration.  After a few disappointments, I was able to secure the Pointe Hilton at Squaw Peak as my venue.  Now on to the registration site.

Initial Experience

As any developer would do, before building anything I looked to see what others were using.  Everyone I spoke with pointed me to EventDay, a SaaS offering from Scott Cate.  So, I gave Scott a call to talk EventDay.  Scott relayed to me that for small events like mine, EventDay is free.  He further told me to just sign up and create my event.  I could call him if I had any questions.

I normally find most web sites frustrating and confusing, especially ones where I have to enter a load of information.  They are rarely focused on usability.  As such, I assumed that I would be on the phone with Scott within 10 minutes, asking for some kind of clarification.  Just the opposite was true.  After about 10 minutes, I had completed the creation of my event and the registration site was up and running (see below).



EventDay gave me every feature I needed:

  • Event Description and other Metadata like location, time, etc.
  • Tickets – allowing users to register and print tickets with QR codes
  • Speakers – capturing speaker bios, photos, etc.
  • Sessions – defining sessions
  • Scheduling – a simple and user-friendly screen to create the schedule
  • Email and Email templates – Pre-defined, yet editable Email templates for reminders and confirmation, as well as automatically sending confirmations


Large Events

There were a ton of features of EventDay I did not use, due to the size and scope of my event.  There are things like printer integration, evaluations and a host of subfeatures for speakers, sessions, etc.  The software allows you to use what you need, while offering the features required for huge events.  It appears to me that EventDay was designed for folks managing huge events but, thanks to that design, it is equally great for guys like me hosting one-off events.


I had a great experience with EventDay and would highly recommend it to anyone hosting an event of any size.  Please contact me if you have any questions.

DDD Arizona – Event on March 21st in Phoenix

Mark your calendars!  On March 21 from 12:00 – 5:00, I will be hosting a developer / architect event at The Pointe Squaw Peak.  Click here to Register.  (DDD stands for Developer! Developer! Developer!)

For those of you that don’t know, I have recently returned to Phoenix after a 4-year stint in London.  There, I was developing high-risk, high-profile applications in the energy trading sector while working at Digiterre, the leading European software consultancy in the energy trading space.  On March 21, two of my colleagues from Digiterre and I will be hosting DDD Arizona.

In Europe, “DDD” events were very popular and focused primarily on software craftsmanship.  This event will be no exception.  The following is an excerpt from the event’s abstract on EventDay:

Creating excellent software is as much about what not to do as it is about what to do. This event explores the most common and problematic “anti-patterns” that many developers fall prey to. Learn how companies can take complex systems and use proven approaches to dramatically simplify the development process and implement a continuous delivery software model. Learn how to eliminate dependencies that lead to the “ripple effect,” whereby changes in one part of the system negatively affect others. Discover patterns that allow you to easily scale and test software at a higher level of granularity, paving the way for teams of software developers to de-compose large problems into manageable and testable components.

The following are the abstracts for the sessions:

Anti-Patterns (and Patterns)

Design patterns are proven solutions to commonly occurring problems.  Anti-patterns are design choices that initially  appear to solve a problem, but are usually ineffective and counterproductive.  In this session, Rob Bagby will discuss a multitude of anti-patterns he and his team encountered in the field.

A thorough understanding of anti-patterns will help ensure that you do not become one of their victims.  While some anti-patterns are obvious, others are more subtle.  The session will cover architectural, development, OO and organizational anti-patterns.

From 0 to Production in 60 minutes

In this session Gareth Evans will explain how continuous deployment and incremental change will shorten the feedback loop between you and your users and reduce the risk of delivering the wrong product to your customers.

Automating the deployment process and creating a “walking skeleton” of your application early affords a number of benefits including architecture validation, quicker deployment of bug fixes and new features and also gives stake holders early visibility of a working product.

Using Visual Studio Online and Windows Azure, Gareth will demonstrate how to quickly develop a walking skeleton, introducing small improvements in each iteration.

Efficient Coding

Learn how to save time in the mechanics of coding and become a more efficient developer by taking advantage of productivity tools and practices.  As software developers, we are craftsmen and a craftsman is only as good as his tools.  In this session, Ben Arroyo, the master of developer productivity, will discuss:

  • Windows, Visual Studio and Resharper shortcuts,
  • Mechanical keyboards and other devices
  • Resharper advanced features
  • NCrunch
  • Techniques such as Code Katas, Pomodoro and Golden Hour

Implementing Messaging in Your Applications

Building solutions for complex problems is hard.  The key is breaking the complex system down into smaller, more manageable components.  Small, decoupled components can be developed and tested independently.

In this session, Rob Bagby will discuss how to take advantage of messaging to integrate these components together to form the larger system.

You will learn, through demos, how to implement messaging in your .net solution with RabbitMQ.  You will further see how messaging will result in a massively scalable, reliable and maintainable solution.

Acceptance Tests Don’t Replace Unit Tests

I read a post last week that was celebrating (among other things) one developer's epiphany that unit tests were a waste of time. He focused on acceptance tests; they were all that mattered. His case was that his boss and customers only care about the end result and not the unit test suite. He further gave evidence that even with his unit tests, his functions broke.

The Big Misconception

There is seemingly a big misconception that automated acceptance tests and unit tests perform the same functions and have the same goals. I don’t see it that way. Automated acceptance tests and unit tests complement each other but have very different goals.  The following table illustrates how acceptance and unit tests relate on a few levels:

|              | Unit Tests                                                               | Acceptance Tests                                                                           |
|--------------|--------------------------------------------------------------------------|--------------------------------------------------------------------------------------------|
| Written by:  | Developers                                                               | The business or Q/A                                                                        |
| Written for: | Developers                                                               | The business                                                                               |
| Ensures:     | The code does what the developer expects it to; modular / decoupled code | The application does what the business expects it to; the code solves the business problem |
| Tests:       | Individual units                                                         | The system as a whole                                                                      |
| Tells you:   | Where the code is failing                                                | The code is failing                                                                        |

How They Complement Each Other

Both automated acceptance tests and unit tests allow developers to implement change.  Change is inevitable and we, as professional software developers, need to build software that can adapt to change.  In fact, as professional developers, we need to initiate change.  Refactoring to better design is / should be a common activity.

The Role of Automated Acceptance Tests

If you stand back and think about what we, in IT, are trying to achieve, it is this:

  1. The business defines the requirements of the system
  2. The architects / developers build a system that meets those requirements

The challenge we have historically had is in unifying the requirements with the system actually being built.  The requirements may be living in several places:

  • A high-level requirements document written by a business user
  • Technical specifications written by a business analyst
  • Design documents written by an architect or developer
  • etc

These documents can easily become out of sync with the software. How many times have you started a project with a requirements or specifications document as your guide?  As development progresses, we learn more and the requirements naturally change.  Unfortunately, the documents don’t.  They quickly become obsolete.  It requires a PM with obsessive-compulsive tendencies to keep everything synchronized.

With automated acceptance tests, we can capture features and specifications of the system within the system.  Best of all, we have acceptance criteria for those specifications directly within the system and they are executable as tests.  The tooling today is getting better and better.  Tools like Specflow are leaps and bounds over where we were years ago.

What Don’t Automated Acceptance Tests Do?

Referring back to the table above, you can see that the specifications are written by the business and the acceptance criteria are there to ensure that the code does what the business intends it to do.  What acceptance tests are not all that good at is telling you where the code is failing or providing fast feedback.  They also don’t help ensure that the design is decoupled and maintainable.  Those are things that unit and integration tests are good at.

The Role of Unit Tests

To my thinking, unit tests provide 4 services:

  1. Guides us toward writing decoupled, testable, maintainable code
  2. Proves that the subject under test does what it is designed to do
  3. Allows us to refactor code
  4. Allows us to adapt to change

Perhaps the biggest benefit of unit tests is that they guide us toward writing testable, decoupled code.  I’m sure most of us have been forced to maintain highly coupled code where a change to one method has rippling effects.  The kind of code where a seemingly simple change takes days or weeks, not minutes.

Numbers 2, 3 and 4 talk to the other main purpose of unit tests: to ensure that the code does what you, as the developer, expect it to.  With adequate test coverage, you have the confidence to refactor code, knowing that you won’t introduce a regression. 

While automated acceptance tests will (hopefully) tell you that you have introduced a regression, they are less likely to tell you where.  Furthermore, automated acceptance tests normally take much longer to run than unit tests.  Unit tests are about fast feedback.
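To make the contrast concrete, here is a minimal, hypothetical unit test (sketched with Python's built-in unittest; the same shape applies to NUnit or xUnit in .NET). When it fails, its name points at the exact unit and behaviour that broke, which is the "where" an end-to-end acceptance test rarely gives you:

```python
import unittest


def apply_discount(total, rate):
    """The unit under test: one small, isolated behaviour."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return total - total * rate


class ApplyDiscountTests(unittest.TestCase):
    def test_applies_rate_to_total(self):
        self.assertEqual(apply_discount(100, 0.1), 90)

    def test_rejects_invalid_rate(self):
        with self.assertRaises(ValueError):
            apply_discount(100, 1.5)
```
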

My Unit Tests Passed, but My Code Is Broken

What happens when all of your unit tests pass, but there is a bug in the application?  One of two things is going on: either the issue has to do with the integration between components and you are missing an integration test, or your unit tests are not testing the correct thing.  It is possible, even likely, to have high unit test coverage yet not be testing the right behaviour.  Testing is every bit as much an art form as software development.  It takes time and thought to get it right.

Both are Tools in the Toolbox

I doubt very much that if you spoke with a good carpenter you would hear him/her say anything like “I used to use claw hammers, but now I only use sledge hammers.  They get the nails in much quicker.  Claw hammers just slow me down.”  No.  They keep both in their toolbox and use each for the job it is best at.

Helper Classes Are A Code Smell

I recently made one of those sweeping statements to some of my colleagues that almost always come back to bite you on the …: It was something to the effect of: "Having the word ‘Helper’ in a class name is a code smell." A pretty big debate ensued and I challenged them (and you) to give me a good example where it is appropriate (outside of testing classes).

So, why do I think it is a code smell (aside: I’m actually getting really tired of the term code smell)? It seems that all "helper" classes violate at least one of the two following principles / practices: 1) SRP and/or 2) Good Naming. Let’s take a look at these one at a time.


Single Responsibility Principle (SRP)

Hopefully, you are familiar with the "S" in Uncle Bob’s SOLID Principles of Object Oriented Design. If not, you have some great reading ahead of you: Clean Code: A Handbook of Agile Software Craftsmanship and Agile Principles, Patterns, and Practices in C#.  Uncle Bob’s definition of SRP is: "There should never be more than one reason for a class to change". I think of SRP as saying that a class should do one thing and do it well (and, as a result, should be testable).

Most often, when I see a "Helper" class, and I’ve seen a lot of them, they are usually classes jam packed with a host of differing and unrelated utility methods. The motivation for naming them {Something}Helper is, generally, that the class does a variety of things and it is therefore impossible to give it a distinct name. That is a clear violation of SRP.

How can you tell if your class violates SRP? See if you can describe what your class does in one sentence. Were you able to do it? Did your sentence have an "and" or an "or" in it? If you cannot describe what your class does in one sentence without using "and" or "or", it is a good indicator that your class is doing too much. Most helpers require generous amounts of "ands" and "ors" to describe their functionality.

Good Naming

I cannot emphasize enough the importance of well named classes, properties and methods in your code. Names that describe the functionality and purpose of your code make that code understandable and maintainable by others (and yourself when you read it in 3 months). Names that don’t reflect the purpose (like Helper) force you or others to read the code in the class (or method) to understand its purpose.

Naming classes appropriately is hard. If a class does more than one thing, it is nearly impossible. The inability to come up with a good name for a class or method is, again, a good indicator of this.

An Example

Take, for example, the omnipresent "XmlHelper" class. Yes, I’m sure you’ve seen one or two in your time. This flavour of helper class might have some methods that do the following:

    • Read Xml and return an object like Customer, Order or Trade
    • Serialize objects like Customer, Order or Trade to Xml
    • Parse Xml and return certain element or attribute values
    • Validate Xml against a schema

Logically, if the class does more than one thing, it violates SRP. I would argue that the name XmlHelper is inadequate. It does not tell the developer what it does. Consider the name XmlHelper vs. TradeSerializer or TradeDeserializer. Which one tells you more about the purpose of the class? Are you trying to help Xml?

What is the Big Deal?

So we have broken a design principle or two. What is the big deal? It may not seem to be all that bad. But it is!

Breaking SRP yields unmaintainable code. Suppose that your XmlHelper (among other things) contained code to read Xml and return a Customer. Now suppose that the Customer schema changed (as they do). Where does that code live? You might know if you wrote the helper… if you remembered that it was sitting next to some code that validated Trade Xml and parsed attribute values from an Order.

Now suppose that, instead of the schema changing, the service serving you customers now is serving Json. If you had followed SRP and used dependency injection appropriately, you likely would have had something like an ICustomerDeserializer and an XmlCustomerDeserializer (or maybe just a CustomerDeserializer) that implemented it. You could now create a JsonCustomerDeserializer and use that instead (just a simple change in your IoCContainer).  In either case, the change was isolated.
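A rough Python equivalent of that refactoring (the post's example is C#; these names mirror the hypothetical ICustomerDeserializer mentioned above): each deserializer has one responsibility and a name that says exactly what it does, and swapping formats is a single change at the composition root.

```python
import json
import xml.etree.ElementTree as ET
from abc import ABC, abstractmethod


class CustomerDeserializer(ABC):
    """The role consuming code depends on (ICustomerDeserializer in the C# version)."""
    @abstractmethod
    def deserialize(self, payload):
        ...


class XmlCustomerDeserializer(CustomerDeserializer):
    def deserialize(self, payload):
        root = ET.fromstring(payload)
        return {"name": root.findtext("Name")}


class JsonCustomerDeserializer(CustomerDeserializer):
    """When the service starts serving Json, this class is the only addition."""
    def deserialize(self, payload):
        return {"name": json.loads(payload)["Name"]}
```
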

What about the naming? XmlHelper tells, at most, half the story… that the class has something to do with Xml. But what? This isn’t a mystery novel. The name of the class should tell you exactly what the single purpose of the class is.

Books Every Programmer Should Read

I love reading technical books and I have read a lot of them!  A handful of these books have had a profound impact on the way in which I write code.  I thought I would share these rare gems with you.  I highly recommend each of the following books!

Clean Code: A Handbook of Agile Software Craftsmanship

Where most technical books fall short is providing value judgements. This book is a clear exception to that rule. In fact, the purpose of the book is to disseminate Uncle Bob’s opinion on how to develop “Clean Code”.

Clean Code offers advice on nearly every aspect of software development. From seemingly simple concepts such as naming, formatting and designing functions through object design, testing, exception handling and concurrency, there is industry proven advice at every level.

The thing I found most surprising about this book is how much I learned about topics I thought I had down. I picked up dozens of useful bits of advice I use every day. While the examples in this book are written in Java, they are crystal clear to C# developers (and I’m not just saying that). Furthermore, Uncle Bob’s advice applies equally to C# and Java devs.

Uncle Bob is one of the giants in our industry. Do yourself a favor and learn from him!

Dependency Injection in .NET

This book is as much about proper object modeling and OO design as it is about Dependency Injection. Mark Seemann, a former Architect at Microsoft, clearly illustrates how to write extensible, maintainable and testable code by building loosely coupled systems. The book covers the SOLID principles and a wide variety of design patterns throughout.

Obviously, the book covers Dependency Injection in depth. Mark covers the 4 main DI patterns and provides concrete guidance on which to use when. He covers so-called DI anti-patterns like Service Locator and Control Freak, as well as advanced topics like object composition (how and where to wire up your object graphs) and managing object lifetimes. Later chapters cover, in depth, how to work with some of the most popular IoC containers, such as Castle Windsor, StructureMap, Spring.Net, AutoFac and Unity.

This is one of the best programming books I have ever read – no doubt about it!

Growing Object-Oriented Software, Guided by Tests

I can typically judge the value of a technical book by the amount of notes I make in the margins. This one is covered! GOOS (as this book is fondly known in the industry) is the guide to effective agile development, as well as proper OO design.

If you are not familiar with the value of BDD or TDD, or maybe you don’t know how they relate or how to apply them, GOOS covers these topics with crystal clarity. However, GOOS goes well beyond introducing the topics. It provides clear guidance, by example, on how to implement a successful software project in an agile manner.

The second reason I love this book is that it provides clear (and expert) guidance on how to properly develop an OO model. It follows up this guidance with an example project to illustrate their points. While the sample code is in Java, it is clear for C# developers.

This is perhaps my favorite programming book ever.

The Art of Unit Testing: With Examples in .Net

The Art of Unit Testing covers the topic of unit testing from cradle to grave, offering guidance for both the beginner and the seasoned tester. Beginners will find the book approachable, while veterans will learn countless techniques and best practices.

The book starts with a healthy intro into unit testing, covering topics such as best practices, stubs, mocks and isolation frameworks. The second half of the book is dedicated to more advanced topics like automated testing in builds, writing maintainable tests, testing legacy code and much more…

While it is easy to write a test, writing a good, maintainable test is an art form. If you are looking for a book to help you master this art, look no further.

C# in Depth, Second Edition

Jon Skeet is the undisputed heavyweight champion of C#. If you don’t believe me, ask Stack Overflow. In C# in Depth, Jon imparts his knowledge of the C# language to you.

As developers, we sometimes use language features without really understanding how they work, and thus don’t understand their impact. C# in Depth covers most, if not all, of the important language features in C# 2 and 3, er… well… in depth. Through it, you will gain a deep understanding of these features and how to exploit them in your code.

Jon expertly covers topics such as Generics, Delegates, Lambdas, Expression Trees and much more.

If you want to gain a deeper understanding of the language of C#, I strongly urge you to read this book.

OpenDBDiff – OS tool for SQL Server Schema Comparisons

I can’t think of the last project I was involved in where I didn’t have to do schema comparisons between differing databases to generate diff scripts.  Sometimes I have a local copy of the DB on my laptop that I need to synch with a dev database.  Other times I have to create the scripts that we will use to deploy schema changes between dev and staging.  Sometimes I just want to see if there are any differences between my local DB and another to see why some tests aren’t passing.  Regardless, I find myself consistently managing this task.

There are a variety of tools available to help with this task: RedGate and Data Dude to name a couple.  Recently a couple of colleagues put me on to OpenDBDiff.  I say a couple of colleagues because the lot I work with tell me about “helpful” OS tools and projects almost daily.  When I hear 2 or more folks use the same tool, my antennas go up.

OpenDbDiff is an open source database schema comparison tool for SQL Server 2005 / 2008.  In my opinion, the beauty of OpenDBDiff is the simplicity.  Without looking at ANY documentation, you can do a full schema comparison between 2 databases, choose the object types you want to script and generate schema comparisons.  Let’s have a look:

Getting Started

Here are the simple steps to get started with OpenDBDiff:

  1. Download OpenDBDiff.  Simply go to the CodePlex site and click download.  You will download a zip file. 
  2. Unzip the contents into a folder in your Tools directory (or wherever you keep your tools). 
  3. Run DbDiff.exe.  It is that easy…

The Demo

I have both SQL Server and SQL Server Express running on my box and this will do for our simple demo.  On my SQL Server instance, I have a database called HelpDeskDB that I used in my Entity Framework Training Series.  The first step is to create an empty HelpDeskDB on my Express instance and spin up DbDiff.  I’ll use it to copy the schema to my new database.  Here is a screenshot:


The Options screen gives you typical SQL compare options.  Here is a view of the filter options:


Once you press “Compare”, you can see a summary of the differences in the left hand list box:


Or you can have a look at the sync script:


In my case, I am going to copy the script to the clipboard and run it on my target database to sync them. 

Syncing Changes

With that done, I now have 2 databases with the same schema.  Next, I want to show you an example of syncing changes between 2 databases.  The first thing I will do is make a schema change to my new database.  I am going to add a non-nullable** column to my Contacts table.  Here is the change:


Now, I will reverse the databases in OpenDBDiff, setting my “new” database (with the updated schema) on SqlExpress as the source and my original database as the destination and press “Compare”.  Here is the SQL Script that was generated (omitting the dropping and creating constraints for brevity sake):


As you can see, the SQL generated does the following:

  • (not shown here) Drops the constraints
  • Creates a temp table with the new, non-nullable column
  • Inserts the data from the original table into the new table (it is even smart enough to default the datetime with GetDate())
  • Drops the original table
  • Renames the new table
  • (not shown here) Creates the constraints


As you can likely see, OpenDBDiff is a handy tool for your toolbox.  It is open source, easy to use and does a great job at  schema comparisons.


**Note: you might not want to introduce a new column as non-nullable in one release.  This is a topic for a future post.

Do You NCrunch Yet?

If you haven’t heard of NCrunch, you are in for a great surprise.  Over the past few years, a friend and colleague of mine, Remco Mulder, has been slaving away on NCrunch.  What is NCrunch?  In my opinion, it is the most powerful Visual Studio plugin available.  It is designed to help developers with TDD and automated testing in general. The plug-in runs automated tests in the background while a developer writes code, and shows code coverage inline in real time as this code is written.

You may be wondering if you read that correctly.  Does NCrunch run your tests while you are writing code?  Yes it does.  NCrunch sandboxes the entire process of building, executing and testing a transient snapshot of a .NET solution – without a developer even needing to save their code to the hard disk.  Feedback and code coverage from the tests is presented tidily and unobtrusively alongside the source code with any thrown exceptions or test failures shown inline.  Let’s have a look at a snapshot of one of my projects in VS 2010 (with NCrunch).


Notice the green dots on the left hand side.  They tell me that the line of code next to it is covered by at least one test and all of the tests covering that line are passing.  If it wasn’t covered, the dot would be black.  Let’s see what it would look like for a failing test.  I am going to change the last line in IncrementIndex() to decrement the index by one instead of increment it.  Have a look:


As you can see, NCrunch is now letting me know that all the lines of code with red dots are covered by failing tests.  By the way, all I did was to replace the “+” with a “-“.  I did not compile, or even save my change.

I have some options now.  If I click on a red dot, the following context menu shows up:


From here, I can see the tests that are covering the line.  I can navigate to the specific tests by simply clicking on it in this menu. 

If I right click on a red dot, the following menu shows up for me:


As you can see, I can run the tests that are covering the code (if I don’t believe NCrunch), I can choose to ignore any failing tests (shame on me) or I can pin them to the tests window.

Why use NCrunch?

Some of you, like me, practice TDD.  Others may not.  However, all of you should be writing automated tests!  NCrunch helps both groups.  Before I started using NCrunch, I never realized how disruptive running my unit tests was.  It was just a way of life.  Write a test… Stop… Run my test… Stop…  Write Code to make test pass… Stop…  Run my test…  Refactor… Repeat…  Now, I almost never run my unit tests manually.  They are always running behind the scenes.  I don’t need to stop developing and do a mental context shift.  I’m guessing that this saves me at least an hour of productivity a day.

The other beauty of NCrunch is the real-time feedback you get.  First of all, it is immediately apparent where you have holes in your test coverage.  Secondly, you know right away if the code you are writing broke any tests.  The nice thing is that you get this feedback when you are best prepared to address the issue: the code is fresh in your mind, because you just wrote it.

Where do you get it and how much?

You can download NCrunch at  Right now, it is in a beta state and Remco has decided to make it freely available in exchange for feedback from the .NET community.  Good luck and get NCrunching.

Using the Builder Pattern in tests

One of the most painful parts of writing tests is creating test data.  Whether you use mocking frameworks or write your own fakes, one thing is constant… you will find yourself creating object instances over and over.  You have to create inputs to the methods you are testing, return values for stubs or mocks, data in fake repositories, etc.  The Builder Pattern can greatly ease this burden.  In this post, through example, I will show how.

The Scenario

The following is a simplified version of the Coupon and Order types from

public class Coupon
{
    public int Id { get; private set; }
    public string CouponCode { get; set; }
    public decimal Discount { get; set; }
    public DateTime ExpirationDate { get; set; }
    public string Description { get; set; }
    public int ExecuteMaxTimes { get; set; }
    public decimal MinimumOrderAmt { get; set; }
    public CouponType CouponType { get; set; }
}

public enum CouponType
{
    PercentOffOrder = 1,
    AmountOffOrder = 2
}

public interface IOrder
{
    decimal SubTotal { get; }
}

public class Order : IOrder
{
    public decimal SubTotal { get; set; }
}

In this simplified scenario, we have a Coupon that can represent a percent off an order (25% off orders of $200 or more) or a fixed amount off an order ($25 off orders of $100 or more).  Notice that we can use the MinimumOrderAmt property to ensure the order is of a certain value.  We can also use ExecuteMaxTimes to limit the number of people that can cash in on the coupon, and we can set an ExpirationDate to limit the length of time the coupon is executable.

Now, suppose we want to start implementing the CalculateDiscount method for our Coupon type.  One test may look something like this:

public void CalculateDiscount_TenPercentOffOrderOfTwoHundred_ReturnsTwenty()
{
    var coupon = new Coupon
                     {
                         CouponCode = "TestCode",
                         CouponType = CouponType.PercentOffOrder,
                         Discount = .10m,
                         ExecuteMaxTimes = 100,
                         ExpirationDate = DateTime.Today.AddYears(1),
                         MinimumOrderAmt = 0
                     };

    var order = MockRepository.GenerateStub<IOrder>();
    order.Stub(x => x.SubTotal).Return(200m);

    var result = coupon.CalculateDiscount(order);

    Assert.AreEqual(20, result);
}


The object under test here is obviously the Coupon, specifically its CalculateDiscount method.  The first thing I do is create an instance of our Coupon, and that is quite a bit of code just to get an instance.  You might argue that I set some properties that are unnecessary for the test (CouponCode, for example) just to make the example look worse.  However, it is likely that I would have to set most of those properties anyway.
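For reference, the post never shows the body of CalculateDiscount itself.  A minimal sketch that would satisfy this test, assuming Discount holds a fraction (e.g. .10m) for PercentOffOrder coupons and a fixed dollar amount for AmountOffOrder coupons (and ignoring ExecuteMaxTimes, which would need shared state), might look like:

```csharp
// Hypothetical sketch -- not the author's actual implementation.
public decimal CalculateDiscount(IOrder order)
{
    // No discount if the order is too small or the coupon has expired.
    if (order.SubTotal < MinimumOrderAmt || ExpirationDate < DateTime.Today)
        return 0m;

    return CouponType == CouponType.PercentOffOrder
               ? order.SubTotal * Discount  // e.g. 200 * .10m = 20
               : Discount;                  // fixed amount off the order
}
```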

What if instead of all of that code I could have done the following:

    var coupon = new CouponBuilder()
                     .WithCouponCode("TestCode")
                     .WithCouponType(CouponType.PercentOffOrder)
                     .WithDiscount(.10m)
                     .WithExecuteMaxTimes(100)
                     .WithExpirationDate(DateTime.Today.AddYears(1))
                     .WithMinOrderAmount(0)
                     .Build();

Or even:

    var coupon = new CouponBuilder().AsValid().WithPercentageDiscount(.10m).Build();


The Builder Pattern

The Builder Pattern is a construction pattern.  Its purpose is to abstract away the construction of (complex) objects and enable you to easily construct/build an object in a step-by-step process.  And, if done just so, like above, it can yield a nice, fluent interface (notice how I simply chain methods together).  So let’s get started implementing our CouponBuilder:

A Simple Builder

public class CouponBuilder
{
    private readonly Coupon _coupon;

    public CouponBuilder()
    {
        _coupon = new Coupon();
    }

    public Coupon Build()
    {
        return _coupon;
    }

    public CouponBuilder WithCouponCode(string couponCode)
    {
        _coupon.CouponCode = couponCode;
        return this;
    }

    public CouponBuilder WithDiscount(decimal discount)
    {
        _coupon.Discount = discount;
        return this;
    }

    public CouponBuilder WithExpirationDate(DateTime expirationDate)
    {
        _coupon.ExpirationDate = expirationDate;
        return this;
    }

    public CouponBuilder WithDescription(string description)
    {
        _coupon.Description = description;
        return this;
    }

    public CouponBuilder WithExecuteMaxTimes(int executeMaxTimes)
    {
        _coupon.ExecuteMaxTimes = executeMaxTimes;
        return this;
    }

    public CouponBuilder WithMinOrderAmount(decimal minOrderAmount)
    {
        _coupon.MinimumOrderAmt = minOrderAmount;
        return this;
    }

    public CouponBuilder WithCouponType(CouponType couponType)
    {
        _coupon.CouponType = couponType;
        return this;
    }
}


In its simplest form, it really is that simple.  Here is the breakdown:

  • You start by creating a readonly field representing the type you want to construct
  • You initialize that field in the constructor of your builder
  • You create a series of methods that act on the instance stored in that field.  Each method returns the Builder instance itself, which is what enables the fluent interface.
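With the simple builder in place, the setup in the earlier test collapses considerably.  Here is a sketch of how that test might be rewritten (same NUnit/Rhino Mocks style as the original; the exact chain of With methods is illustrative):

```csharp
public void CalculateDiscount_TenPercentOffOrderOfTwoHundred_ReturnsTwenty()
{
    // The builder replaces the object-initializer block from the earlier test.
    var coupon = new CouponBuilder()
                     .WithCouponType(CouponType.PercentOffOrder)
                     .WithDiscount(.10m)
                     .WithExpirationDate(DateTime.Today.AddYears(1))
                     .WithMinOrderAmount(0)
                     .Build();

    var order = MockRepository.GenerateStub<IOrder>();
    order.Stub(x => x.SubTotal).Return(200m);

    Assert.AreEqual(20, coupon.CalculateDiscount(order));
}
```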

More complex builder methods

I mentioned that this is the simplest form – a one-to-one relationship between each builder method and the property it sets.  You need not stop there.  You can add builder methods that logically bundle up the setting of multiple properties.  Two simple examples for our Coupon would be AsValid() and WithPercentageDiscount(decimal discount).  Let’s see that:

    public CouponBuilder AsValid()
    {
        return WithExecuteMaxTimes(100)
                   .WithExpirationDate(DateTime.Today.AddYears(1))
                   .WithMinOrderAmount(0);
    }

    public CouponBuilder WithPercentageDiscount(decimal discount)
    {
        return WithCouponType(CouponType.PercentOffOrder)
                   .WithDiscount(discount);
    }


Smart Defaults

In most of my builders, I set some smart defaults in the constructor.  That is, I set the defaults to those most commonly used in my tests.  Here is the updated CouponBuilder constructor:

public CouponBuilder()
{
    _coupon = new Coupon();

    WithDescription("Test Description");
}


Private Setters

Notice that my Coupon class has a private setter on its Id property.  It is common, and good practice, to make ids immutable.  However, you may want the ability to set an id for testing purposes.  Our builder class is the perfect place for that (remember, the builder is only ever used in our tests).  Have a look at my WithId builder method:

public CouponBuilder WithId(int id)
{
    _coupon.GetType().GetProperty("Id").SetValue(_coupon, id, null);
    return this;
}
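A quick usage sketch (the id value 42 is arbitrary):

```csharp
// Reflection lets the builder write through the private setter, so tests can
// control the id without making Coupon.Id publicly mutable.
var coupon = new CouponBuilder().WithId(42).Build();
// coupon.Id now holds 42, even though production code cannot set Id directly.
```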



The Builder Pattern is a great tool for, among other things, constructing objects for your tests.  Writing them up front will save you loads of time throughout the lifetime of your project.  Personally, I have a simple ReSharper template I use to create the Builder methods.  Feel free to email me and I will send it to you…

Twas the Release Before Christmas

A geekier rendition of Twas the Night Before Christmas was just posted on the devEducate blog.


