Web Applications: N-Tier vs. N-Layer

Rocky Lhotka has a nice post describing n-tier vs. n-layer in .NET applications and the benefits and trade-offs associated with these practices.  You can read Rocky’s post here, called Should all apps be n-tier?


I thought I would piggy-back on his post as it seems to fall in line with my recent series of posts on High Performance and Scalable ASP.NET Websites Made Easy.



 


N-Tier and N-Layer Basics


Although everyone defines these terms a bit differently, I use them much the same way Rocky does.  N-Tier is more about where the code / processes run (physical location), and N-Layer is more about how the code is logically grouped.  Let’s steal a picture from one of my previous posts, called Avoid Chatty Interfaces Between the Tiers in Your ASP.NET Web Application.  In the picture below, I am talking about a 3-Tier application.  The browser, IIS, and the database all run on 3 separate physical machines, each separated by a network.  Each separate resource running in a separate process is considered a tier.

[Figure: a 3-tier web application, with the browser, IIS, and the database each running on its own machine, separated by a network]


Layers (N-Layer) are more about how you organize your code.  To make your applications more maintainable and extensible, you will want to divide your applications into highly cohesive (High Cohesion GRASP Pattern), loosely coupled (Low Coupling GRASP Pattern) areas of interest, which I call layers.  The picture below (via Jeffrey Palermo) is an example of a layering approach that follows the Domain-Driven Design patterns.

[Figure: a Domain-Driven Design layering example, via Jeffrey Palermo]


Tiers Should Be Minimized - Impact on Performance, Scalability, Security, Fault Tolerance, and Complexity


As I mentioned in my previous post, Avoid Chatty Interfaces Between the Tiers in Your ASP.NET Web Application, your application will take a performance hit when you distribute it across tiers, because there is an inherent delay associated with accessing resources across a network and/or in another process.  This is why I suggested that you avoid chatty interfaces between these tiers.  Try to keep round trips down to a minimum.
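To make the cost of chattiness concrete, here is a minimal sketch in C# (the IOrderService and OrderSummary types are hypothetical, invented for this example) contrasting a chatty interface, which crosses the tier boundary once per piece of data, with a chunky one that brings everything back in a single round trip:

```csharp
using System;

// Hypothetical service facade that lives in another tier (another process or machine).
public interface IOrderService
{
    // Chatty: three separate cross-boundary calls just to display one order.
    string GetCustomerName(int orderId);
    decimal GetOrderTotal(int orderId);
    DateTime GetShipDate(int orderId);

    // Chunky: one cross-boundary call that returns a coarse-grained DTO.
    OrderSummary GetOrderSummary(int orderId);
}

// Data transfer object carried across the boundary in a single round trip.
public class OrderSummary
{
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
    public DateTime ShipDate { get; set; }
}
```

With the chunky call, the page pays the process / network tax once instead of three times, which matters far more than the few extra bytes carried in the DTO.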


Per Rocky, the reason for distributing your web application across tiers is to strike a “balance between performance, scalability, fault tolerance and security. While there are various other reasons for tiers, these four are the most common. The funny thing is that it is almost impossible to get optimum levels of all four attributes – which is why it is always a trade-off between them.


Tiers imply process and/or network boundaries. A 1-tier model has all the layers running in a single memory space (process) on a single machine. A 2-tier model has some layers running in one memory space and other layers in a different memory space. At the very least these memory spaces exist in different processes on the same computer, but more often they are on different computers. Likewise, a 3-tier model has two boundaries. In general terms, an n-tier model has n-1 boundaries.


Crossing a boundary is expensive. It is on the order of 1000 times slower to make a call across a process boundary on the same machine than to make the same call within the same process. If the call is made across a network it is even slower. It is very obvious then, that the more boundaries you have the slower your application will run, because each boundary has a geometric impact on performance.


Worse, boundaries add raw complexity to software design, network infrastructure, manageability and overall maintainability of a system. In short, the more tiers in an application, the more complexity there is to deal with – which directly increases the cost to build and maintain the application.


This is why, in general terms tiers should be minimized. Tiers are not a good thing, they are a necessary evil required to obtain certain levels of scalability, fault tolerance or security.”


 


Layers are Good – Loosely Coupled to Minimize Impact to Other Layers


Layers, which are more or less a way to organize your code into highly-cohesive areas of interest, are a good thing.  The key is that a change in one layer has minimal impact on the other layers, which makes the application a lot more maintainable.  Layers also provide some abstraction of implementation detail, allowing you to (hopefully) change the underlying implementation of a layer, and of the layers beneath it, without changing its interface or affecting the upper layers of your application.  Hence, changing the way you persist data, from a relational database to XML, for example, will hopefully only affect the Data Access Layer and not so much the Business Layer and User Interface Layer.  I say hopefully :)  Check out the Dependency-Inversion Principle, the Strategy Design Pattern, and the Template Method Pattern for ways to vary implementation specifics and extend modules and applications.
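As a rough sketch of this idea (the ICustomerRepository, SqlCustomerRepository, and XmlCustomerRepository names are made up for illustration, not from any particular framework), the Business Layer programs against an abstraction, so swapping relational persistence for XML should only touch the Data Access Layer:

```csharp
using System;

// A simple domain object used by the business layer.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Abstraction the business layer depends on (Dependency-Inversion Principle).
public interface ICustomerRepository
{
    Customer GetById(int id);
    void Save(Customer customer);
}

// Data Access Layer implementation #1: relational database.
public class SqlCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id) { /* run a SQL query here */ throw new NotImplementedException(); }
    public void Save(Customer customer) { /* INSERT / UPDATE here */ throw new NotImplementedException(); }
}

// Data Access Layer implementation #2: XML persistence. The layers above never notice the swap.
public class XmlCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id) { /* load and query an XML document here */ throw new NotImplementedException(); }
    public void Save(Customer customer) { /* serialize the customer to XML here */ throw new NotImplementedException(); }
}

// Business Layer code knows only about the interface, never the concrete repository.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository; // SQL or XML is decided where the application is wired together.
    }

    public Customer LoadCustomer(int id)
    {
        return _repository.GetById(id);
    }
}
```

Which concrete repository gets used is decided in one place when the application is composed, so the User Interface and Business Layers stay untouched when the persistence strategy changes.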


 


Too Much Loose-Coupling in Layers - The Threat of Needless Complexity


The problem with layers and various other strategies for loosely coupling your applications is the addition of needless complexity.  You run the risk of over-architecting an application to allow for changes that are unlikely to happen and are not based on actual requirements.  I have talked about application-level design smells that are characteristic of rotting software:



  1. Rigidity – System is hard to change in even the simplest ways.
  2. Fragility – Changes cause the system to break easily and require other changes.
  3. Immobility – Difficult to disentangle components so they can be reused in other systems.
  4. Viscosity – Doing things right is harder than doing things wrong.
  5. Needless Complexity – System contains infrastructure that has no direct benefit.
  6. Needless Repetition – Repeated structures that should have a single abstraction.
  7. Opacity – Code is hard to understand.

Layers and loose coupling can quickly get out of hand if you do not rein them in.  Review the Open-Closed Principle to see how to provide extensibility to modules and applications, but do this where needed based on business requirements.
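As a small, hypothetical illustration of the Open-Closed Principle (the discount types below are invented for this sketch), new behavior is added by writing a new class rather than editing code that already works – and the abstraction is only worth introducing once a real requirement for a second variation exists:

```csharp
// Abstraction that is closed for modification but open for extension.
public interface IDiscountPolicy
{
    decimal Apply(decimal orderTotal);
}

public class NoDiscount : IDiscountPolicy
{
    public decimal Apply(decimal orderTotal) { return orderTotal; }
}

// Added later as a new class; no existing, tested code had to change.
public class SeasonalDiscount : IDiscountPolicy
{
    public decimal Apply(decimal orderTotal) { return orderTotal * 0.90m; }
}

// Client code that never changes as new discount policies appear.
public class Checkout
{
    private readonly IDiscountPolicy _discount;

    public Checkout(IDiscountPolicy discount)
    {
        _discount = discount;
    }

    public decimal TotalDue(decimal orderTotal)
    {
        return _discount.Apply(orderTotal);
    }
}
```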


 


Test-Driven Development and Refactoring – Your Tools to Manage Needless Complexity and Provide Just-In-Time Extensibility to Your Application


Two overused terms, Test-Driven Development and Refactoring, are your tools for fighting off needless application complexity while maintaining confidence that you can extend your application later if necessary.  TDD keeps things simple.  Refactoring to Patterns gives you a solid application framework that should allow you to add functionality later with minimal impact on your application.
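As a tiny, hypothetical example of that rhythm (NUnit syntax is used here, but any test framework will do), the test is written first and the class is given only enough code to make it pass; refactoring comes afterwards, with the test as a safety net:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class ShoppingCartTests
{
    [Test]
    public void TotalWithTax_AddsSalesTaxToItemSubtotal()
    {
        var cart = new ShoppingCart();
        cart.AddItem(10.00m);
        cart.AddItem(5.00m);

        // 15.00 subtotal * 1.08 = 16.20; this assertion existed before ShoppingCart did.
        Assert.AreEqual(16.20m, cart.TotalWithTax(0.08m));
    }
}

// The simplest implementation that makes the test pass; refactor as new tests demand more.
public class ShoppingCart
{
    private readonly List<decimal> _items = new List<decimal>();

    public void AddItem(decimal price)
    {
        _items.Add(price);
    }

    public decimal TotalWithTax(decimal taxRate)
    {
        decimal subtotal = 0m;
        foreach (decimal price in _items)
        {
            subtotal += price;
        }
        return subtotal * (1 + taxRate);
    }
}
```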


 


Conclusion


Rocky put it best – “Tiers are not a good thing, they are a necessary evil required to obtain certain levels of scalability, fault tolerance or security.”  Layers are a good thing in that they provide highly-cohesive, loosely-coupled areas of interest that can help increase the maintainability of your application.  However, over-architecting your application by providing high degrees of loose coupling between layers when it is unnecessary can be just as harmful to performance and complexity as adding tiers.  Practice Test-Driven Development and Refactoring to Patterns to keep things simple now and provide extensibility on demand later when new requirements pop up.


 


Congratulations: Lance Armstrong Won Time Trials


Listening To: Holding Out for a Hero by Frou Frou


Drinking: Imperial Pi Lo Chun Green Tea


 


2 Responses to Web Applications: N-Tier vs. N-Layer

  1. dhayden says:

    Thanks, Sam!

    You make an excellent point about the “over-architecting” phrase. Like you, I am mainly referring to “Needless Complexity.” I agree that there is a segment of developers that has taken “over-architecting” to mean they shouldn’t do any planning in a project. If in doubt, I always like to err on the side of more planning, abstraction and extensibility, too.

    I think a future post on what it means to “over-architect” with examples is a great idea. A lot of the development trends, like Agile and Iterative Development, TDD, and Refactoring, are trying to help us avoid over-architecting an application and better adapt to new and unexpected requirements during the life of the project / application.

  2. Sam says:

    Very very nice post! This should be required reading.

    If I might suggest a possible followup if you’re open to suggestions?

    It’s not uncommon to see the phrase “over-architecting”, but I’ve yet to see a published definition or example that I can recall. To me it means basically the bullet points, mainly the one on “Needless Complexity”. I can easily recognize areas of projects I’ve written in the past that were over-architected, my main fault being, I think, to write wrappers for simple operations when the wrapper was never going to be reused anywhere else.

    On the other hand I’ve seen programmers sometimes hear that phrase and seem to take it as a license to do no design, no abstraction, and no layering. Which I think is a much bigger mistake. If you’re going to err, then err on the side of loose coupling as long as you can keep it all cohesive. In my opinion…

    Anyways, enough blabbing. I mainly wanted to congratulate you on a job well done!
