Demystifying The Code

The SOLID Principles Video Series

I have just released the final video in my 5-part series on the SOLID Principles of software development. These videos are free to watch; you simply have to create an account with your email address and a password. That is all.

SOLID is an acronym coined by Robert C. Martin (Uncle Bob). SOLID stands for:

  • Single Responsibility Principle
  • Open Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

Each of these principles provides developers with guidance that will allow them to develop more maintainable software. While some of these principles are seemingly universally understood (SRP, for instance), others are not. The goal of this series is to explain each principle in layman’s terms and clearly outline the guidance each provides. The videos provide real-world(ish) demos to hammer home the points.

The following is an extremely high-level summary of what you will learn in each video:

Single Responsibility Principle

This video discusses what "responsibilities" are, providing concrete examples of classes that contain multiple responsibilities. It further discusses the dangers of violating the SRP from the following perspectives: 1) Coupling, 2) Simplicity, 3) Maintainability, 4) Testability and 5) The ability to respond quickly to change.

A tightly-coupled File Parser is used in the demos to illustrate violating the SRP and the challenges therein. A refactored version of the File Parser is used to illustrate adhering to the SRP. The session ends by discussing the benefits of adhering to this principle, as well as outlining some remaining challenges that are addressed by other principles.
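
To make the idea concrete, here is a minimal Python sketch (the series' demos are in C#, and the class names here are my own illustration, not the ones from the videos): the file-reading and parsing concerns are split into separate classes so each has a single reason to change.

```python
class FileReader:
    """Knows only how to read raw text from a file."""
    def read(self, path):
        with open(path) as f:
            return f.read()


class RecordParser:
    """Knows only how to turn raw text into records."""
    def parse(self, text):
        return [line.split(",") for line in text.splitlines() if line]


class FileParser:
    """Coordinates the collaborators; contains no I/O or parsing logic itself."""
    def __init__(self, reader, parser):
        self._reader = reader
        self._parser = parser

    def load(self, path):
        return self._parser.parse(self._reader.read(path))
```

Because FileParser receives its collaborators, a test can hand it a stub reader and never touch the file system, which is exactly the testability benefit the session describes.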

Open Closed Principle

This video clearly articulates what it means to be "Open for Extension" and "Closed for Modification". It discusses how, by programming to abstractions, we can prepare for predicted variations in our code. By predicted variation, I mean points in your code where you know change will occur. The session further discusses strategies for ensuring that you don’t over-engineer your solution and break other principles like YAGNI or KISS.

The session then discusses how the Strategy Pattern and Template Method Pattern can be used to help adhere to this principle. The File Parser from the SRP session is used to illustrate how a solution that adheres to SRP, but not OCP, can be refactored. It clearly outlines the benefits of adhering to OCP from maintainability and testability standpoints.
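
As a rough Python sketch of the Strategy Pattern applied to a parser (again, an illustration rather than the video's actual C# code): support for a new file format is added by writing a new strategy class (extension), not by editing existing code (modification).

```python
from abc import ABC, abstractmethod


class ParseStrategy(ABC):
    """The abstraction the parser is programmed against."""
    @abstractmethod
    def parse(self, text): ...


class CsvParseStrategy(ParseStrategy):
    def parse(self, text):
        return [line.split(",") for line in text.splitlines() if line]


class PipeParseStrategy(ParseStrategy):
    def parse(self, text):
        return [line.split("|") for line in text.splitlines() if line]


class FileParser:
    """Closed for modification: new formats never require edits here."""
    def __init__(self, strategy: ParseStrategy):
        self._strategy = strategy

    def parse(self, text):
        return self._strategy.parse(text)
```

A new tab-delimited format would mean one new strategy class and nothing else, which is the predicted-variation point the session is making.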

Liskov Substitution Principle

This video illustrates that when we program to abstractions (like we are directed to in the OCP), we have a reliance that types are substitutable for their base types. Unfortunately, the rules of subtyping are not strict enough to guarantee this substitutability. A more stringent set of rules is required, or the code that is programmed against the abstractions can (and will) fail.

LSP provides these stricter rules. The session discusses the 4 areas in which the LSP provides guidance: 1) Preconditions and Postconditions, 2) Covariance and Contravariance, 3) The History constraint and 4) Exceptions. Demonstrations are used to illustrate violations and dangers therein. We illustrate that violations often result in:

  • Code that compiles, yet misbehaves
  • Runtime errors
  • Runtime type checking
  • Violations in OCP
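
A small Python sketch of one such violation, precondition strengthening (the store classes here are my own hypothetical example, not the video's demo): the subtype rejects input the base type accepts, so code written against the base type can fail at runtime.

```python
class FileStore:
    """Base type: callers may save at any time."""
    def save(self, name, data):
        return f"saved {name}"


class ReadOnlyStore(FileStore):
    """Strengthens the precondition: callers of FileStore never expected this."""
    def save(self, name, data):
        raise PermissionError("store is read-only")


def archive(store: FileStore):
    # Type-checks and compiles fine against the base type,
    # but fails at runtime when handed a ReadOnlyStore.
    return store.save("log.txt", b"...")
```

This is the pattern behind the bullets above: the code compiles, the error surfaces at runtime, and the usual "fix" of type-checking the argument inside archive() would itself violate OCP.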

Interface Segregation Principle

This video discusses the fact that some classes that adhere to the SRP still have non-cohesive interfaces. The ISP provides us guidance in these cases, directing us to create client-specific interfaces. The video discusses the ISP from the perspectives of both implementers and consumers of an interface.

The session discusses the challenges of violating the ISP from the following perspectives:

  • Recompile / Redeployment
  • Coupling
  • Maintenance
  • Testability

A demo is used to illustrate the disadvantages of exposing a fat, non-cohesive interface to clients. The demo then illustrates extracting cohesive, client-centric interfaces and points out the advantages.
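
A minimal Python sketch of the idea (hypothetical printer/scanner classes of my own, not the demo's code): instead of one fat device interface that forces every implementer to stub methods it doesn't need, each client-specific interface is implemented only where it applies.

```python
from abc import ABC, abstractmethod


class Printer(ABC):
    """Client-specific interface for code that only prints."""
    @abstractmethod
    def print_doc(self, doc): ...


class Scanner(ABC):
    """Client-specific interface for code that only scans."""
    @abstractmethod
    def scan(self): ...


class SimplePrinter(Printer):
    # No dummy scan() stub required, unlike with a fat combined interface.
    def print_doc(self, doc):
        return f"printed {doc}"


class MultiFunctionDevice(Printer, Scanner):
    def print_doc(self, doc):
        return f"printed {doc}"

    def scan(self):
        return "scanned page"
```

Consumers that only print depend on Printer alone, so changes to the scanning contract never force them to recompile or redeploy.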

Dependency Inversion Principle

This video discusses the 2 distinct pieces of guidance provided by the DIP. The first is that high-level, policy modules should not depend upon low-level modules. Rather, the high-level modules should depend upon abstractions that the low-level modules implement. The second concerns who owns the interface: the high-level, policy modules.

The session uses a demo to illustrate a discount calculator that does not adhere to either piece of guidance. The demo first refactors the solution to program to abstractions. It then illustrates that significant challenges remain even though we are programming to interfaces. These challenges stem from improper ownership of the interfaces. The code is then refactored to adhere to the second piece of guidance.

Throughout the session, the following areas are covered when discussing challenges to violating and benefits of adhering to the DIP:

  • Tight Coupling
    • Class-level
    • Library-level
  • Reusability
  • Sensitivity to changes in low-level modules
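
The shape of the refactored demo can be sketched in Python (the repository and discount policy here are hypothetical stand-ins, not the video's actual code): the high-level calculator owns the abstraction, and the low-level data-access detail implements it, inverting the usual dependency direction.

```python
from abc import ABC, abstractmethod


class CustomerRepository(ABC):
    """Owned by the high-level policy module, per the DIP's second guidance."""
    @abstractmethod
    def years_as_customer(self, customer_id): ...


class DiscountCalculator:
    """High-level policy: depends only on the abstraction it owns."""
    def __init__(self, repository: CustomerRepository):
        self._repository = repository

    def discount_percent(self, customer_id):
        years = self._repository.years_as_customer(customer_id)
        return min(years, 10)  # illustrative policy: 1% per year, capped at 10%


class InMemoryCustomerRepository(CustomerRepository):
    """Low-level detail: implements the policy module's interface."""
    def __init__(self, data):
        self._data = data

    def years_as_customer(self, customer_id):
        return self._data[customer_id]
```

Swapping the in-memory repository for a database-backed one touches nothing in DiscountCalculator, which is why the calculator is no longer sensitive to changes in low-level modules.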


I spent a lot of time trying to provide concrete examples of the benefits of understanding and adhering to these principles. I hope this is of some help to you, as Uncle Bob’s guidance was to me. Remember, the videos are free to watch!

My Experience with EventDay

As many of you know, I am hosting a developer event in March.  After I agreed to host the event came the scramble.  I needed to find a venue and build a website to promote it and handle registration.  After a few disappointments, I was able to secure the Pointe Hilton at Squaw Peak as my venue.  Now on to the registration site.

Initial Experience

As any developer would do, before building anything I looked to see what others were using.  Everyone I spoke with pointed me to EventDay, a SaaS offering from Scott Cate.  So, I gave Scott a call to talk EventDay.  Scott relayed to me that for small events like mine, EventDay is free.  He further told me to just sign up and create my event.  I could call him if I had any questions.

I normally find most web sites frustrating and confusing, especially ones where I have to enter a load of information.  They are rarely focused on usability.  As such, I assumed that I would be on the phone with Scott within 10 minutes, asking for some kind of clarification.  Just the opposite was true.  After about 10 minutes, I had completed the creation of my event and the registration site was up and running.



EventDay gave me every feature I needed:

  • Event Description and other Metadata like location, time, etc.
  • Tickets – allowing users to register and print tickets with QR codes
  • Speakers – capturing speaker bios, photos, etc.
  • Sessions – defining sessions
  • Scheduling – a simple and user-friendly screen to create the schedule
  • Email and Email templates – Pre-defined, yet editable Email templates for reminders and confirmation, as well as automatically sending confirmations


Large Events

There were a ton of features of EventDay I did not use, due to the size and scope of my event.  There are things like printer integration, evaluations and a host of subfeatures for speakers, sessions, etc.  The software allows you to use what you need, while offering the features required for huge events.  It appears to me that EventDay was designed for folks managing huge events but, thanks to that design, is equally great for guys like me hosting one-off events.


I had a great experience with EventDay and would highly recommend it to anyone hosting an event of any size.  Please contact me if you have any questions.

DDD Arizona – Event on March 21st in Phoenix

Mark your calendars!  On March 21 from 12:00 – 5:00, I will be hosting a developer / architect event at The Pointe Squaw Peak.  Click here to Register.  (DDD stands for Developer! Developer! Developer!)

For those of you that don’t know, I have recently returned to Phoenix after a 4-year stint in London.  There, I was developing high-risk, high-profile applications in the energy trading sector while working at Digiterre, the leading European software consultancy in the energy trading space.  On March 21, two of my colleagues from Digiterre and I will be hosting DDD Arizona.

In Europe, “DDD” events were very popular and focused primarily on software craftsmanship.  This event will be no exception.  The following is an excerpt from the event’s abstract on EventDay:

Creating excellent software is as much about what not to do as it is about what to do. We will explore the most common and problematic “anti-patterns” that many developers fall prey to. Learn how companies can take complex systems and use proven approaches to dramatically simplify the development process and implement a continuous delivery software model. Learn how to eliminate dependencies that lead to the “ripple effect,” whereby changes in one part of the system negatively affect others. Discover patterns that allow you to easily scale and test software at a higher level of granularity, paving the way for teams of software developers to de-compose large problems into manageable and testable components.

The following are the abstracts for the sessions:

Anti-Patterns (and Patterns)

Design patterns are proven solutions to commonly occurring problems.  Anti-patterns are design choices that initially  appear to solve a problem, but are usually ineffective and counterproductive.  In this session, Rob Bagby will discuss a multitude of anti-patterns he and his team encountered in the field.

A thorough understanding of anti-patterns will help ensure that you do not become one of their victims.  While some anti-patterns are obvious, others are more subtle.  The session will cover architectural, development, OO and organizational anti-patterns.

From 0 to Production in 60 minutes

In this session Gareth Evans will explain how continuous deployment and incremental change will shorten the feedback loop between you and your users and reduce the risk of delivering the wrong product to your customers.

Automating the deployment process and creating a “walking skeleton” of your application early affords a number of benefits, including architecture validation, quicker deployment of bug fixes and new features, and early stakeholder visibility of a working product.

Using Visual Studio Online and Windows Azure, Gareth will demonstrate how to quickly develop a walking skeleton, introducing small improvements in each iteration.

Efficient Coding

Learn how to save time in the mechanics of coding and become a more efficient developer by taking advantage of productivity tools and practices.  As software developers, we are craftsmen and a craftsman is only as good as his tools.  In this session, Ben Arroyo, the master of developer productivity, will discuss:

  • Windows, Visual Studio and Resharper shortcuts
  • Mechanical keyboards and other devices
  • Resharper advanced features
  • NCrunch
  • Techniques such as Code Katas, Pomodoro and Golden Hour

Implementing Messaging in Your Applications

Building solutions for complex problems is hard.  The key is breaking the complex system down into smaller, more manageable components.  Small, decoupled components can be developed and tested independently.

In this session, Rob Bagby will discuss how to take advantage of messaging to integrate these components together to form the larger system.

You will learn, through demos, how to implement messaging in your .NET solution with RabbitMQ.  You will further see how messaging results in a massively scalable, reliable and maintainable solution.

Acceptance Tests Don’t Replace Unit Tests

I read a post last week that was celebrating (among other things) one developer’s epiphany that unit tests were a waste of time. He focused on acceptance tests, arguing they were all that mattered. His case was that his boss and customers only care about the end result, not the unit test suite. He further gave evidence that, even with his unit tests, his functions broke.

The Big Misconception

There is seemingly a big misconception that automated acceptance tests and unit tests perform the same functions and have the same goals. I don’t see it that way. Automated acceptance tests and unit tests complement each other, but have very different goals.  The following table illustrates how acceptance and unit tests relate on a few levels:

                 Unit Tests                             Acceptance Tests
  Written By:    Developers                             The business or Q/A
  Written For:   Developers                             The business
  Ensures:       The code does what the developer       The application does what the business
                 expects it to; the code is modular     expects it to; the code solves the
                 and decoupled                          business problem
  Tests:         Individual units                       The system as a whole
  Tells you:     Where the code is failing              That the code is failing

How They Complement Each Other

Both automated acceptance tests and unit tests allow developers to implement change.  Change is inevitable and we, as professional software developers, need to build software that can adapt to change.  In fact, as professional developers, we need to initiate change.  Refactoring to better design is / should be a common activity.

The Role of Automated Acceptance Tests

If you stand back and think about what we, in IT, are trying to achieve, it is this:

  1. The business defines the requirements of the system
  2. The architects / developers build a system that meets those requirements

The challenge we have historically had is in unifying the requirements with the system actually being built.  The requirements may be living in several places:

  • A high-level requirements document written by a business user
  • Technical specifications written by a business analyst
  • Design documents written by an architect or developer
  • etc

These documents can easily become out of sync with the software. How many times have you started a project with a requirements or specifications document as your guide?  As development progresses, we learn more and the requirements naturally change.  Unfortunately, the documents don’t.  They quickly become obsolete.  It requires a PM with obsessive compulsive tendencies to keep everything synchronized.

With automated acceptance tests, we can capture features and specifications of the system within the system.  Best of all, we have acceptance criteria for those specifications directly within the system and they are executable as tests.  The tooling today is getting better and better.  Tools like Specflow are leaps and bounds over where we were years ago.

What Don’t Automated Acceptance Tests Do?

Referring back to the table above, you can see that the specifications are written by the business and the acceptance criteria are there to ensure that the code does what the business intends it to do.  What they are not all that good at is telling you where the code is failing or providing fast feedback.  They also don’t help ensure that the design is decoupled and maintainable.  Those are things that unit and integration tests are good at.

The Role of Unit Tests

To my thinking, unit tests provide 4 services:

  1. Guides us toward writing decoupled, testable, maintainable code
  2. Proves that the subject under test does what it is designed to do
  3. Allows us to refactor code
  4. Allows us to adapt to change

Perhaps the biggest  benefit of unit tests is that they guide us toward writing testable, decoupled code.  I’m sure most of us have been forced to maintain highly coupled code where a change to one method has rippling effects.  The kind of code where a seemingly simple change will take days or weeks and not minutes.

Numbers 2, 3 and 4 talk to the other main purpose of unit tests: to ensure that the code does what you, as the developer, expect it to.  With adequate test coverage, you have the confidence to refactor code, knowing that you won’t introduce a regression. 

While automated acceptance tests will (hopefully) tell you that you have introduced a regression, it is less likely that they will tell you where.  Furthermore, automated acceptance tests normally take much longer to run than unit tests.  Unit tests are about fast feedback.
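
A tiny Python sketch of the contrast (the functions are hypothetical, chosen only to make the point): unit tests exercise one function in isolation, so a failure names the broken unit, while a whole-flow test only tells you that something broke.

```python
def parse_amount(text):
    """Unit 1: turn an amount string into a number."""
    return float(text)


def apply_vat(amount, rate=0.20):
    """Unit 2: add VAT and round to pennies."""
    return round(amount * (1 + rate), 2)


def invoice_total(text):
    """The 'whole system' for this sketch: an acceptance-style test
    of this function alone cannot say WHICH unit below is at fault."""
    return apply_vat(parse_amount(text))
```

If invoice_total("100.00") comes back wrong, an end-to-end check just says "broken"; the per-unit assertions on parse_amount and apply_vat say broken where, and they run in microseconds.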

My Unit Tests Passed, but My Code Is Broken

What happens when all of your unit tests pass, but there is a bug in the application?  One of two things is going on: either the issue has to do with the integration between components and you are missing an integration test, or your unit tests are not testing the correct thing.  It is possible, even likely, to have high unit test coverage yet not be testing the right behaviour.  Testing is every bit as much an art form as software development.  It takes time and thought to get it right.

Both are Tools in the Toolbox

I doubt very much that if you spoke with a good carpenter you would hear him/her say anything like “I used to use claw hammers, but now I only use sledge hammers.  They get the nails in much quicker.  Claw hammers just slow me down.”  No.  They keep both in their toolbox and use each for the job it is best at.

Helper Classes Are A Code Smell

I recently made one of those sweeping statements to some of my colleagues that almost always come back to bite you on the …: It was something to the effect of: "Having the word ‘Helper’ in a class name is a code smell." A pretty big debate ensued and I challenged them (and you) to give me a good example where it is appropriate (outside of testing classes).

So, why do I think it is a code smell (aside: I’m actually getting really tired of the term code smell)? It seems that all "helper" classes violate at least one of the two following principles / practices: 1) SRP and/or 2) Good Naming. Let’s take a look at these one at a time.


Single Responsibility Principle

Hopefully, you are familiar with the "S" in Uncle Bob’s SOLID Principles of Object Oriented Design. If not, you have some great reading ahead of you: Clean Code: A Handbook of Agile Software Craftsmanship and Agile Principles, Patterns, and Practices in C#.  Uncle Bob’s definition of SRP is: "There should never be more than one reason for a class to change". I think of SRP as saying that a class should do one thing and do it well (and as a result should be testable).

Most often, when I see a "Helper" class, and I’ve seen a lot of them, they are usually classes jam packed with a host of differing and unrelated utility methods. The motivation for naming them {Something}Helper is, generally, that the class does a variety of things and it is therefore impossible to give it a distinct name. That is a clear violation of SRP.

How can you tell if your class violates SRP? See if you can describe what your class does in one sentence. Were you able to do it? Did your sentence have an "and" or an "or" in it? If you cannot describe what your class does in one sentence without using "and" or "or", it is a good indicator that your class is doing too much. Most helpers require generous amounts of "ands" and "ors" to describe their functionality.

Good Naming

I cannot emphasize enough the importance of well-named classes, properties and methods in your code. Names that describe the functionality and purpose of your code make that code understandable and maintainable by others (and yourself when you are reading it in 3 months). Names that don’t reflect the purpose (like Helper) force you or others to read the code in the class (or method) to understand its purpose.

Naming classes appropriately is hard. If a class does more than one thing, it is nearly impossible. The inability to come up with a good name for a class or method is, again, a good indicator of this.

An Example

Take, for example, the omnipresent "XmlHelper" class. Yes, I’m sure you’ve seen one or two in your time. This flavour of helper class might have some methods that do the following:

    • Read Xml and return an object like Customer, Order or Trade
    • Serialize objects like Customer, Order or Trade to Xml
    • Parse Xml and return certain element or attribute values
    • Validate Xml against a schema

Logically, if the class does more than one thing, it violates SRP. I would argue that the name XmlHelper is inadequate. It does not tell the developer what it does. Consider the name XmlHelper vs. TradeSerializer or TradeDeserializer. Which one tells you more about the purpose of the class? Are you trying to help Xml?

What is the Big Deal?

So we have broken a design principle or two. What is the big deal? It may not seem to be all that bad. But it is!

Breaking SRP yields unmaintainable code. Suppose that your XmlHelper (among other things) contained code to read Xml and return a Customer. Now suppose that the Customer schema changed (as they do). Where does that code live? You might know if you wrote the helper… if you remembered that it was sitting next to some code that validated Trade Xml and parsed attribute values from an Order.

Now suppose that, instead of the schema changing, the service serving you customers is now serving Json. If you had followed SRP and used dependency injection appropriately, you likely would have had something like an ICustomerDeserializer and an XmlCustomerDeserializer (or maybe just a CustomerDeserializer) that implemented it. You could now create a JsonCustomerDeserializer and use that instead (just a simple change in your IoCContainer).  In either case, the change is isolated.
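
The swap can be sketched in a few lines of Python (the blog's examples are C#; the class and field names here are hypothetical): because callers depend on a deserializer abstraction, moving from Xml to Json is one new class plus a wiring change, not an edit to a shared helper.

```python
import json
import xml.etree.ElementTree as ET
from abc import ABC, abstractmethod


class CustomerDeserializer(ABC):
    """The abstraction callers depend on (the ICustomerDeserializer role)."""
    @abstractmethod
    def deserialize(self, payload): ...


class XmlCustomerDeserializer(CustomerDeserializer):
    def deserialize(self, payload):
        root = ET.fromstring(payload)
        return {"name": root.findtext("name")}


class JsonCustomerDeserializer(CustomerDeserializer):
    """The new format: added alongside, nothing existing is touched."""
    def deserialize(self, payload):
        return {"name": json.loads(payload)["name"]}
```

Each class has one focused job and a name that states it, so no "and"s or "or"s are needed to describe either one.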

What about the naming? XmlHelper tells, at most, half the story… that the class has something to do with Xml. But what? This isn’t a mystery novel. The name of the class should tell you exactly what the single purpose of the class is.

