Rocky Lhotka has a nice post describing n-tier vs. n-layer in .NET applications and the benefits and trade-offs associated with these practices. You can read Rocky’s post here, called Should all apps be n-tier?
I thought I would piggy-back on his post as it seems to fall in line with my recent series of posts on High Performance and Scalable ASP.NET Websites Made Easy.
- Developing High Performance and Scalable ASP.NET Websites
- Avoid Chatty Interfaces Between the Tiers in Your ASP.NET Web Application
- ASP.NET Page Profiling – Page Tracing
N-Tier and N-Layer Basics
Although everyone defines these terms a little differently, I use them much the same way Rocky does. N-Tier is about where the code and processes run (physical location), while N-Layer is about how the code is logically grouped. Let’s steal a picture from one of my previous posts, called Avoid Chatty Interfaces Between the Tiers in Your ASP.NET Web Application. In the picture below, I am talking about a 3-Tier application. The browser, IIS, and database all run on three separate physical machines, separated from one another by a network. Each separate resource running in a separate process is considered a tier.
Layers (N-Layer) are more about how you organize your code. To make your applications more maintainable and extensible, you will want to divide your applications into highly cohesive (High Cohesion GRASP Pattern), loosely coupled (Low Coupling GRASP Pattern) areas of interest, which I call layers. The picture below (via Jeffrey Palermo) is an example of a layering approach that follows the Domain-Driven Design patterns.
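To make the layering idea concrete, here is a minimal sketch (in Python, purely for brevity; a real ASP.NET app would do this in C#). Each layer talks only to the layer directly below it, and all of the class and method names here are hypothetical:

```python
# Illustrative sketch only: three logical layers running in one process.
# Each layer is cohesive and knows only about the layer beneath it.

class DataAccessLayer:
    """Knows how to fetch raw records (imagine SQL behind this)."""
    def fetch_order(self, order_id):
        return {"id": order_id, "total": 42.0}

class BusinessLayer:
    """Applies domain rules; knows nothing about rendering."""
    def __init__(self, dal: DataAccessLayer):
        self._dal = dal
    def order_total_with_tax(self, order_id, tax_rate=0.08):
        order = self._dal.fetch_order(order_id)
        return round(order["total"] * (1 + tax_rate), 2)

class UiLayer:
    """Formats results; knows nothing about persistence."""
    def __init__(self, business: BusinessLayer):
        self._business = business
    def render(self, order_id):
        total = self._business.order_total_with_tax(order_id)
        return f"Order {order_id}: ${total:.2f}"

ui = UiLayer(BusinessLayer(DataAccessLayer()))
print(ui.render(7))   # Order 7: $45.36
```

Note that all three layers can run in a single process (one tier) or be split across machines; the logical grouping is independent of the physical deployment.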
Tiers Should Be Minimized - Impact on Performance, Scalability, Security, Fault Tolerance, and Complexity
As I mentioned in my previous post, Avoid Chatty Interfaces Between the Tiers in Your ASP.NET Web Application, your application will take a performance hit when you distribute it across tiers, because there is an inherent delay associated with accessing resources across a network and/or in another process. This is why I suggested that you avoid chatty interfaces between these tiers. Try to keep round trips down to a minimum.
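Here is a tiny sketch of what "chatty" versus "chunky" looks like in code (Python for brevity; the service and its methods are made up for illustration). The chatty client pays one boundary crossing per attribute, while the chunky interface fetches the whole record in a single round trip:

```python
# Hypothetical remote service: each method call stands in for a
# cross-process or cross-network round trip.

class RemoteCustomerService:
    def __init__(self):
        self.round_trips = 0
        self._db = {1: {"name": "Ada", "email": "ada@example.com", "phone": "555-0100"}}

    # Chatty: one boundary crossing per attribute.
    def get_name(self, cid):
        self.round_trips += 1
        return self._db[cid]["name"]
    def get_email(self, cid):
        self.round_trips += 1
        return self._db[cid]["email"]
    def get_phone(self, cid):
        self.round_trips += 1
        return self._db[cid]["phone"]

    # Chunky: one boundary crossing returns the whole record.
    def get_customer(self, cid):
        return self.get_name(cid) and dict(self._db[cid]) if False else self._fetch(cid)
    def _fetch(self, cid):
        self.round_trips += 1
        return dict(self._db[cid])

chatty = RemoteCustomerService()
_ = (chatty.get_name(1), chatty.get_email(1), chatty.get_phone(1))
print(chatty.round_trips)   # 3 round trips for one logical read

chunky = RemoteCustomerService()
record = chunky.get_customer(1)
print(chunky.round_trips)   # 1 round trip for the same data
```

Three trips versus one may look trivial here, but multiply it by the number of attributes and the number of concurrent users and the chatty version dominates your response time.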
Per Rocky, the reason for distributing your web application across tiers is to strike a “balance between performance, scalability, fault tolerance and security. While there are various other reasons for tiers, these four are the most common. The funny thing is that it is almost impossible to get optimum levels of all four attributes – which is why it is always a trade-off between them.
Tiers imply process and/or network boundaries. A 1-tier model has all the layers running in a single memory space (process) on a single machine. A 2-tier model has some layers running in one memory space and other layers in a different memory space. At the very least these memory spaces exist in different processes on the same computer, but more often they are on different computers. Likewise, a 3-tier model has two boundaries. In general terms, an n-tier model has n-1 boundaries.
Crossing a boundary is expensive. It is on the order of 1000 times slower to make a call across a process boundary on the same machine than to make the same call within the same process. If the call is made across a network it is even slower. It is very obvious then, that the more boundaries you have the slower your application will run, because each boundary has a geometric impact on performance.
Worse, boundaries add raw complexity to software design, network infrastructure, manageability and overall maintainability of a system. In short, the more tiers in an application, the more complexity there is to deal with – which directly increases the cost to build and maintain the application.
This is why, in general terms, tiers should be minimized. Tiers are not a good thing, they are a necessary evil required to obtain certain levels of scalability, fault tolerance or security.”
Layers are Good – Loosely Coupled to Minimize Impact to Other Layers
Layers, which are more or less a way to organize your code into highly-cohesive areas of interest, are a good thing. The key is that a change to one layer has minimal impact on the other layers, which makes the application far more maintainable. Layers also abstract away implementation detail, so you can hopefully change the underlying implementation of a layer, and of the layers beneath it, without changing its interface or affecting the upper layers of your application. Changing the way you persist data, from a relational database to XML, for example, will hopefully only affect the Data Access Layer and not so much the Business Layer and User Interface Layer. I say hopefully :) Check out the Dependency-Inversion Principle, the Strategy Design Pattern, and the Template Method Pattern for ways to vary implementation specifics and extend modules and applications.
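Here is roughly what that relational-database-to-XML swap looks like when the business layer depends on an abstraction, in the spirit of the Dependency-Inversion Principle (a Python sketch; all class names are hypothetical, and the "SQL" and "XML" stores are stubbed out):

```python
# The business layer depends on the abstract CustomerStore, never on a
# concrete persistence mechanism, so swapping relational storage for
# XML touches only the data access layer.
from abc import ABC, abstractmethod

class CustomerStore(ABC):
    @abstractmethod
    def save(self, customer: dict) -> None: ...
    @abstractmethod
    def load(self, cid: int) -> dict: ...

class SqlCustomerStore(CustomerStore):
    def __init__(self):
        self._rows = {}
    def save(self, customer):
        self._rows[customer["id"]] = customer   # imagine an INSERT here
    def load(self, cid):
        return self._rows[cid]                  # imagine a SELECT here

class XmlCustomerStore(CustomerStore):
    def __init__(self):
        self._docs = {}
    def save(self, customer):
        self._docs[customer["id"]] = customer   # imagine XML serialization here
    def load(self, cid):
        return self._docs[cid]                  # imagine XML parsing here

class CustomerService:
    """Business layer: written once, against the abstraction."""
    def __init__(self, store: CustomerStore):
        self._store = store
    def register(self, cid, name):
        self._store.save({"id": cid, "name": name})
        return self._store.load(cid)["name"]

# Either store plugs in without changing CustomerService at all.
print(CustomerService(SqlCustomerStore()).register(1, "Ada"))   # Ada
print(CustomerService(XmlCustomerStore()).register(1, "Ada"))   # Ada
```

The point is not the stores themselves but where the dependency arrow points: the business layer owns the interface, and both persistence implementations depend on it.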
Too Much Loose-Coupling in Layers - The Threat of Needless Complexity
The problem with layers and various other strategies for loosely coupling your applications is the addition of needless complexity. You run the risk of over-architecting an application to allow for changes that are unlikely to happen and are not based on actual requirements. I have talked about application-level design smells that are characteristic of rotting software:
- Rigidity – System is hard to change in even the most simple ways.
- Fragility – Changes cause system to break easily and require other changes.
- Immobility – Components are difficult to disentangle for reuse in other systems.
- Viscosity – Doing things right is harder than doing things wrong.
- Needless Complexity – System contains infrastructure that has no direct benefit.
- Needless Repetition – Repeated structures that should have a single abstraction.
- Opacity – Code is hard to understand.
Layers and loose coupling can be tricky things if you don’t rein them in when necessary. Review the Open-Closed Principle to see how to provide extensibility to modules and applications, but do so only where needed, based on actual business requirements.
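A minimal sketch of the Open-Closed Principle in action (Python, with illustrative names I made up): the `export` function is open for extension, since you add a new `Formatter` subclass to support a new format, but closed for modification, since `export` itself never changes.

```python
# Open-Closed sketch: new output formats are added as new subclasses;
# existing, tested code is never edited.
from abc import ABC, abstractmethod

class Formatter(ABC):
    @abstractmethod
    def render(self, record: dict) -> str: ...

class CsvFormatter(Formatter):
    def render(self, record):
        return ",".join(str(v) for v in record.values())

class XmlFormatter(Formatter):
    def render(self, record):
        fields = "".join(f"<{k}>{v}</{k}>" for k, v in record.items())
        return f"<record>{fields}</record>"

def export(record: dict, formatter: Formatter) -> str:
    # Closed for modification: supporting a new format never touches this.
    return formatter.render(record)

rec = {"id": 1, "name": "Ada"}
print(export(rec, CsvFormatter()))   # 1,Ada
print(export(rec, XmlFormatter()))   # <record><id>1</id><name>Ada</name></record>
```

The caution from the paragraph above still applies: only build this seam if a requirement actually calls for multiple formats; otherwise a plain function is simpler.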
Test-Driven Development and Refactoring – Your Tools to Manage Needless Complexity and Provide Just-In-Time Extensibility to Your Application
Two much-discussed practices, Test-Driven Development and Refactoring, are your tools for fighting off needless application complexity while maintaining confidence that you can extend your application later if necessary. TDD keeps things simple. Refactoring to Patterns gives you a solid application framework that should allow you to add functionality later with minimal impact to your application.
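The TDD rhythm looks something like this (a Python sketch using the standard `unittest` module; the `RoundTripBudget` class is hypothetical, written only to make the tests pass, which is exactly the point of test-first):

```python
# Test-first sketch: the tests below were "written first", and
# RoundTripBudget contains just enough code to make them pass --
# no speculative extensibility.
import unittest

class RoundTripBudget:
    """Tracks remote calls against a per-request budget."""
    def __init__(self, limit):
        self.limit = limit
        self.used = 0
    def record_call(self):
        if self.used >= self.limit:
            raise RuntimeError("round-trip budget exceeded")
        self.used += 1

class RoundTripBudgetTests(unittest.TestCase):
    def test_allows_calls_under_budget(self):
        budget = RoundTripBudget(limit=2)
        budget.record_call()
        budget.record_call()
        self.assertEqual(budget.used, 2)

    def test_rejects_calls_over_budget(self):
        budget = RoundTripBudget(limit=1)
        budget.record_call()
        with self.assertRaises(RuntimeError):
            budget.record_call()

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RoundTripBudgetTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())   # True
```

When a new requirement does show up, the existing tests give you the confidence to refactor toward a pattern instead of building the pattern speculatively up front.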
Rocky put it best – “Tiers are not a good thing, they are a necessary evil required to obtain certain levels of scalability, fault tolerance or security.” Layers are a good thing in that they provide highly-cohesive, loosely-coupled areas of interest that can help increase the maintainability of your application. However, over-architecting your application by providing high degrees of loose coupling between layers where it is unnecessary can be just as harmful to performance and complexity as tiers. Practice Test-Driven Development and Refactoring to Patterns to keep things simple now and provide extensibility on demand later when new requirements pop up.
Congratulations: Lance Armstrong Won Time Trials
Listening To: Holding Out for a Hero by Frou Frou