Happy New Year!

Instead of a technical post, today I just want to wish a happy new year to everyone! The promised follow-up post of my AOP series will go online on January 4 in the next decade.

All the Best and I hope to see you again next year!


Spring.NET AOP - behind the scenes (1)

On the forums I see one topic coming up quite often: "Can I advise XY?". In this multipart series I will describe the motivation for AOP and the way Spring.NET AOP (and most others, like Castle.DynamicProxy and LinFu) technically works in .NET, to give you a better understanding and provide you with the knowledge to answer such questions yourself.

The example: Retrying operations

Instead of the usual logging example I'd like to show you another useful feature: Retrying operations. For the sake of simplicity let's assume we're calling a webservice method for calculating the sum of two integers.

CalculatorWebService calc = new CalculatorWebService("http://...");
int sum = calc.Add(2, 5);

Since webservices usually involve calling over the network, they are inherently unsafe and might fail for various reasons. In our application, we would like to retry webservice operations 3 times before giving up, with a 1 second delay between retries.

There are a couple of ways to implement this requirement; the most direct approach is probably to derive our own class from the webservice client class:

public class RetryingCalculatorWebService : CalculatorWebService
{
  public RetryingCalculatorWebService(string url) : base(url) {}

  public override int Add(int x, int y)
  {
    int retries = 0;
    while (true)
    {
      try
      {
        return base.Add(x, y);
      }
      catch (Exception)
      {
        retries++;
        if (retries >= 3)
          throw; // retry threshold exceeded, giving up
        Thread.Sleep(1000); // wait a second
      }
    }
  }
}

There are of course a couple of issues with that approach; the two most important are:

  • The code for retrying the operation effectively buries our single line of actual business code. This makes it hard to identify what the code actually does:
public class RetryingCalculatorWebService : CalculatorWebService
{
  public RetryingCalculatorWebService(string url) : base(url) {}

  public override int Add(int x, int y)
  {
    int retries = 0;
    while (true)
    {
      try
      {
        return base.Add(x, y);
      }
      catch (Exception)
      {
        retries++;
        if (retries >= 3)
          throw; // retry threshold exceeded, giving up
        Thread.Sleep(1000); // wait a second
      }
    }
  }
}
  • When implementing this requirement across all our webservice clients, we not only end up scattering the same lines of code all over our codebase; a change in the requirement might also force us to change all our webservice client classes, causing a huge amount of work.


A manually implemented solution

A structured way of non-intrusively adding behavior to existing code is described in the GoF book: the "Decorator" pattern. The basic idea is to wrap the actual target method with additional code. Thus you do not need to modify any existing code; instead you "chain" the various required behaviors, each behavior implemented in its own class. In contrast to the GoF pattern, nowadays composition is favored over inheritance, so instead of using the inheritance approach, let's introduce an interface to easily chain our behaviors:

public interface ICalculator
{
  int Add(int x, int y);
}

ICalculator calc = ...;
int sum = calc.Add(2, 5);

Now we can easily implement our business logic and the infrastructure concern separately and chain them later as needed:

public class CalculatorWebService : ICalculator
{
  public CalculatorWebService(string url) { ... }

  public int Add(int x, int y)
  {
    // perform actual webservice call
    return ...;
  }
}

public class CalculatorRetryDecorator : ICalculator
{
  private ICalculator next;

  public CalculatorRetryDecorator(ICalculator next) { this.next = next; }

  public int Add(int x, int y)
  {
    int retries = 0;
    while (true)
    {
      try
      {
        return next.Add(x, y);
      }
      catch (Exception)
      {
        retries++;
        if (retries >= 3)
          throw; // retry threshold exceeded, giving up
        Thread.Sleep(1000); // wait a second
      }
    }
  }
}

Notice how the CalculatorRetryDecorator delegates the actual work to the next calculator in the chain. Now, whenever our requirements force us to retry calculator operations, we just "chain" the implementations together:

ICalculator calc = new CalculatorRetryDecorator( new CalculatorWebService( "http://..." ) );
int sum = calc.Add(2, 5);

Now, when we discover that our performance is too slow, we implement a caching decorator that caches method results for a certain amount of time using the same approach and add it to the decorator chain:

ICalculator calc = new CalculatorCacheDecorator( new CalculatorRetryDecorator( new CalculatorWebService( "http://..." ) ) );
int sum = calc.Add(2, 5);
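The caching decorator mentioned above might look like this. Note that this is only a minimal sketch: the 30-second cache duration, the string key, and the simple lock-based synchronization are my assumptions, not part of the original example:

```csharp
using System;
using System.Collections.Generic;

public class CalculatorCacheDecorator : ICalculator
{
  private readonly ICalculator next;
  private readonly TimeSpan maxAge = TimeSpan.FromSeconds(30); // assumed cache duration
  private readonly Dictionary<string, KeyValuePair<DateTime, int>> cache
      = new Dictionary<string, KeyValuePair<DateTime, int>>();

  public CalculatorCacheDecorator(ICalculator next) { this.next = next; }

  public int Add(int x, int y)
  {
    string key = x + "+" + y;
    lock (cache)
    {
      KeyValuePair<DateTime, int> entry;
      if (cache.TryGetValue(key, out entry) && DateTime.Now - entry.Key < maxAge)
        return entry.Value; // cache hit - skip the webservice call entirely

      int result = next.Add(x, y); // delegate to the next calculator in the chain
      cache[key] = new KeyValuePair<DateTime, int>(DateTime.Now, result);
      return result;
    }
  }
}
```

Like the retry decorator, it knows nothing about webservices; it only delegates to the next ICalculator in the chain.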

When we call the Add() method, our call graph looks like this:


This is only the beginning

We have already achieved a lot by separating concerns into different classes. Still, our solution has some significant weaknesses:

  • When we add new methods to ICalculator, we need to extend all of our decorators
  • We cannot easily reuse the retry logic for other, unrelated services

In my next post I will address those issues and show you how we can implement a more generic solution.

Here you can download the example code for this post.


A little Christmas present - for .NET

I just read Eberhard Wolff's newest blog post and simply couldn't resist borrowing the idea to show you how the same things can be done using Spring.NET. First, here's the .NET version of Eberhard's example:

public class MyRepository
{ }

public class MyService
{
    public MyRepository MyRepository { get; set; }
}

Here it is, the smallest possible Spring.NET object. Notice that, due to Java's lack of properties, the .NET version is even smaller. Still: no attributes, no Spring dependencies - pure code.

How do you wire them? Spring.NET's XML configuration unfortunately lacks Spring.Java's component-scanning as demonstrated in Eberhard's post. But using the new code-based configuration I presented recently, you can easily wire your objects by writing:

appContext.Configure(cfg => cfg
     .Scan(scan => scan
        .Include(t => t.FullName.EndsWith("Service") || t.FullName.EndsWith("Repository"))));

The example code can be downloaded at SmallestSpringObject.zip

Merry Christmas!


NetMX & Spring.NET - configure your Apps at runtime

Ever wished you could change some settings of your application during runtime? Easily retrieve runtime information from your components? You tried WMI and Performance Counters?

Well, I did, and to put it politely: I don't like WMI. Performance Counters are a nice way to retrieve performance information about an application, but when it comes to changing an application's settings or behavior at runtime, you need another way. WMI is still COM-based, although a WMI.NET binding is available via the System.Management components. Alas, this binding is not complete, and functions are spread across different namespaces for historical reasons. To cut a long story short: it is nothing for the fainthearted.

Brief Introduction to Java Management Extensions (JMX)

In the Java universe for this purpose there is JMX. Basically it allows an application to expose management objects via a JMX server. A monitoring client can then connect to this server and dynamically obtain runtime information about the exposed management objects. In contrast to WMI there is no schema involved, everything is discovered dynamically. For instance the server application could expose the following management bean:

public interface SampleMBean {
    public String getHello();
    public void setHello(String helloMessage);
    public String sayHello();
}

public class Sample implements SampleMBean {

    private String hello = "Hello World(s)!";

    public Sample() {
    }

    public String getHello() {
        return hello;
    }

    public void setHello(String helloMessage) {
        hello = helloMessage;
    }

    public String sayHello() {
        return hello;
    }
}
Using the JConsole tool, one can easily connect to this application's virtual machine and remotely retrieve/change the properties as well as invoke the exposed method(s):



The server application may expose its management objects over a variety of different connectors, as shown in this image (copied from the Wikipedia article about JMX):

And Now: .NET Management Extensions

Thanks to the amazing work of Szymon Pobiega, the power of JMX is available on the .NET platform as well. The NetMX implementation is available from CodePlex. The really nice part is that it also comes with a working implementation of the JSR262 JMX connector, so it is possible to connect to your .NET application using JConsole (yes, that's true, and I'm going to prove it ;-)). Of course NetMX also comes with a .NET version of the console; please check out the project for more. I grabbed the sources and compiled them. After playing around a while, I decided to implement a simple usage scenario.

The Sample Scenario

Every now and then I want to increase or decrease the logging level of my applications (among other things ...). Using NetMX, it should be easy. And - well, it was! Using Spring.NET, it was easy to weave a logging advice around my components; using NetMX, it was easy to expose a management object for this logging advice, so that the log settings can be changed at runtime. The picture below illustrates the scenario setup I aimed for:

The Logging Advice intercepts every call to the service object and logs the method calls to the log system. By default, logging is turned off; using JConsole I want to change those settings at runtime.

My sample application again is the MovieFinder application. To make the effect visible, the client code retrieves the movielist every second and prints the timestamp + the moviename on the console. Also the logging system is configured to write to the console, but disabled by default.

The Solution

The sources for my experiment can be fetched from my subversion repository. What you also need is the JSR262 enhanced version of JConsole, which is included in the samples of the JSR262 Connector download. Unpack the connector download and checkout $/jsr262-ri-ea4/samples/index.html for more.

So launching the application results in a screen similar to this:


Now launch JConsole and connect to our .NET application using the url "service:jmx:ws://":

Expanding the tree shown in the left pane, you should see this:

Now let's change some settings. Double-click into the "Value" column and enter the following:

Et voilà! Our console has changed significantly:


We just reconfigured the logging advice on the fly to log all method calls to our service and also dump the arguments passed in!

Here's the configuration snippet required to make this happen (using Spring.NET CodeConfig from my last post):

var appContext = new GenericApplicationContext(false);
appContext.Configure(cfg => cfg
    .EnableLogging(log =>
    {
       log.TypeFilter = type => type.IsDefined(typeof(ServiceAttribute), true);
       log.Logger.LogLevel = LogLevel.Off;
    })
    .EnableNetMX(netmx =>
      netmx.AddConnector<Jsr262ConnectorServerProvider>(new Uri(""))));

Enjoy and again thanks to Szymon for his great work!


Merry XMLless! CodeConfig for Spring.NET

Straight to the point: XML is not meant for humans. Full stop. The only way for humans to deal with XML is when it is hidden behind proper tooling support. Without a tool hiding XAML you wouldn't write XAML code by hand, would you? Currently being on a greenfield project with team colleagues not familiar with Spring.NET, it quickly turned out that XML configuration can be quite a hurdle, burying the gain of power in the pain of configuration. Using the simple MovieFinder example from the Spring.NET quickstart examples, I would like to introduce you to a new way of doing familiar things.

Introducing MovieFinder

In the following I will use this very simple example to show you different ways of wiring the components. A MovieLister can be used to obtain a list of movies directed by a particular director. To access persistent storage, it uses the MovieFinder repository component:

Here's how the client code might look like:

IMovieLister lister = ...;
var movies = lister.MoviesDirectedBy("Roberto Benigni");

The traditional way of wiring

The traditional way of configuring the Spring.NET container looks like this:

<objects xmlns='http://www.springframework.net'>

  <object id='movieLister' type='MovieFinder.Core.MovieLister, MovieFinder.Core'>
    <constructor-arg index='0' ref='movieFinder' />
  </object>

  <object id='movieFinder' type='MovieFinder.Data.SimpleMovieFinder, MovieFinder.Data' />

</objects>
Code Example: Traditional Configuration

There are a couple of problems with that approach, here are some:

  • not pleasant to read
  • much 'noise', xml syntax hiding the actual wiring structure
  • lack of refactoring support
  • unmanageable for large object numbers

You can imagine that in large applications, without any tool support, this can become very painful. Less well known is the fact that Spring is able to autowire components. Here's the example, autowiring components based on type-matching constructor arguments (see the reference documentation for other values of 'default-autowire'):

<objects xmlns='http://www.springframework.net' default-autowire='constructor'>

  <object id='movieLister' type='MovieFinder.Core.MovieLister, MovieFinder.Core' />
  <object id='movieFinder' type='MovieFinder.Data.SimpleMovieFinder, MovieFinder.Data' />

</objects>
Code Example: Using constructor autowiring

This saves you from having to manually specify all dependencies. Of course, renaming your classes may still break this configuration.

Mark Pollack blogged an extensive post about configuring the Spring.NET container, including the even less known container API for configuring the container.

Going fluent

A lot of frameworks out there have already introduced a configuration style known as a fluent API. A nice example for the Windsor container can be found here. There are a lot more, Fluent NHibernate being another very popular example.

Recently, Tom Farnbauer released such a fluent configuration API for Spring.NET, Recoil for Spring.NET. Using Recoil, our MovieFinder example could look like this:

public class MyWiringContainer : WiringContainer
{
    public override void SetupContainer()
    {
        Define( () => new MovieFinder() );

        Define( () => new MovieLister( Wire<IMovieFinder>() ) );
    }
}

var myContext = new GenericApplicationContext();
myContext.Configure()
    .With<MyWiringContainer>();
myContext.Refresh();
Code Example: Wiring using Recoil for Spring.NET

Unfortunately, while introducing Recoil into my current project, I quickly found that the API has some flaws and - after all - fluent APIs usually add as much code noise to your configuration as XML already does. Fluent APIs also tend to be less extensible than other approaches. Imho, Fluent NHibernate suffers this fate a lot (probably others too, but this is the latest example that crossed my way).

Of course we have already achieved at least one important goal: we are safe against refactoring. Any class moves or renames will automatically be reflected in our configuration - because it is code.

Back to the whiteboard

I didn't find any of those approaches really satisfying. After all, all I want to do is

var movieFinder = new MovieFinder();
var movieLister = new MovieLister( movieFinder );

So let's start with the most minimal configuration container I can think of: a simple class whose member methods return the objects you are asking for:

public class MovieFinderConfiguration 
{
   public IMovieFinder MovieFinder() 
   {
     return new MovieFinder();
   }

   public IMovieLister MovieLister()
   {
     return new MovieLister( MovieFinder() );
   }
}
Code Example: Wiring using ... plain C#

This already does a lot of what we want:

  • it is readable
  • it doesn't cause more code noise than your standard code would
  • we have full refactoring support
  • we automatically get the whole object graph once we call MovieLister():
var container = new MovieFinderConfiguration();
var movieLister = container.MovieLister();
var movies = movieLister.MoviesDirectedBy("Roberto Benigni");
Code Example: Obtaining the object graph from our C# "container"

Unfortunately our minimal container is lacking a couple of things:

  • no support for managing object lifecycles - every time we call MovieLister() it will return a new graph
  • no support for applying aspects to care for crosscutting concerns
  • no support for all the other features you get from the containers out there
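The lifecycle issue is easy to demonstrate with the plain C# container above: each call builds a completely fresh object graph, so there are no singleton semantics at all (a minimal sketch, assuming the MovieFinderConfiguration class from the previous listing):

```csharp
var container = new MovieFinderConfiguration();

var lister1 = container.MovieLister();
var lister2 = container.MovieLister();

// Two independent object graphs, each with its own MovieFinder -
// ReferenceEquals(lister1, lister2) is false, and so is any sharing
// of the underlying repository between the two listers.
bool sameInstance = ReferenceEquals(lister1, lister2); // false
```

A real container gives you a choice here: singleton by default, prototype on request.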

New on stage: CodeConfig for Spring.NET

Rooted in a great idea of Rod Johnson's, Chris Beams started bringing this idea to the Java world; the result is JavaConfig (and, since yesterday's release of our mother project, an integral part of Spring 3.0 - congratulations to the Java team at this point!). Mark Pollack implemented the first POC for Spring.NET; due to the needs of my current project, I decided to take that POC and continue developing it.

Yet again, Spring.NET surprised me. Due to being incredibly flexible, it was almost a breeze to merge the above-mentioned concept with the container infrastructure. What does our MovieFinder look like using CodeConfig? Simple:

[Configuration]
public class MovieFinderConfiguration 
{
   public virtual IMovieFinder MovieFinder() 
   {
     return new MovieFinder();
   }

   public virtual IMovieLister MovieLister()
   {
     return new MovieLister( MovieFinder() );
   }
}
Code Example: The CodeConfig configuration - not much difference!

Note the additional [Configuration] attribute and the "virtual" keyword added to the methods. Now feed this configuration into the Spring.NET application context and retrieve an instance of the IMovieLister:

var appContext = new GenericApplicationContext();

var movieLister = appContext.GetObject<IMovieLister>();
var movies = movieLister.MoviesDirectedBy("Roberto Benigni");
Code Example: retrieving an IMovieLister from a CodeConfig-configured application context

Neat, isn't it? This means you can write your configuration in the simplest possible way and still can leverage the full power of the IoC container. Enabling logging on your services? No problem:

.EnableLogging(cfg =>
{
    cfg.TypeFilter = type => type.IsDefined(typeof(ServiceAttribute), true);
    cfg.Logger.LogExecutionTime = true;
    cfg.Logger.LogMethodArguments = true;
    cfg.Logger.LogReturnValue = true;
    cfg.Logger.LogLevel = LogLevel.Trace;
});

Using Conventions for Configuration

You don't want to configure each object manually in a large application? Use component-scanning, a feature that I shamelessly stole from StructureMap:

    .Scan(s => s
        .Include(t => t.IsDefined(typeof(ComponentAttribute), true)))

Sources, Documentation and Next Steps

At this moment, you can find the sources in my public SVN repository at XP-Dev. Note, that the code will move to SpringSource's CodeConfig for Spring.NET repository here at a later stage.

For a quick overview of what is already possible, check out the various configuration examples for the MovieFinder example there. Documentation is missing, but reading the JavaConfig reference will give you a good overview of the features. For the component-scanning feature read Jeremy Miller's introduction on assembly scanning.

Beware that this is still a moving target; consider it no more than a preview yet. But I'd love for you to grab the sources, play around, and let me know what you think - keen on hearing your feedback!

The first milestone of CodeConfig is scheduled for the end of January - and don't forget to check back: later today, Spring.NET 1.3.0 GA will be released.

Merry XMLless!



Due to my boss requesting an architecture review, I find myself trying to put together all those pen & paper discussions I've had with people on the team, and rethinking the decisions I have made on how the project is set up and the application is structured.

While doing this, I find myself in a rather interesting situation: I realize that quite a lot of my decisions were based on intuition (which is, of course, likely driven by experience). Those decisions were not the result of a traditional, conscious Analysis-Thesis-Synthesis approach. Rather, the process obviously happened somewhere in the back of my mind, and at some point a solution comes back to the surface that just "feels" right. Luckily, most of those solutions prove useful over time and withstand all kinds of project challenges.

Nevertheless, it leaves me thoughtful. Should I break my habits and switch to, e.g., the famous +/- lists? How do I explain certain decisions? "I have a good feeling" is hardly an explanation my boss will eat...

I think it was Kevlin Henney who mentioned in his talk "Five Considerations for Software Architects" that there is nothing wrong with making intuitive decisions. After all, intuition comes largely from experience. But "I had the idea in the shower" is an explanation that not all people will accept. People are different - Myers-Briggs spent a lot of time on this - and a lot of people don't take intuition as an argument. It always makes sense to rationalize your decisions afterwards, so that everyone can understand them. If your decisions prove right, it should be easy to rationalize them anyway.


Sniff HTTP traffic with ASP.NET Development Webserver

When developing web applications or services, every now and then you will face a strong desire to see the HTTP traffic that is sent back and forth between a client and the webserver.

Tracing local HTTP traffic is not easy

You will quickly find out that this task sounds easier than it actually is when your client and your server reside on the same machine. There are tools like Fiddler (and tons of others, but this is my favorite), but they all suffer the same problem: requests to localhost cannot be captured, because they are optimized by the Windows network stack and bypass the usual hooks used by capturing tools. Thus you either need to use your network card's IP address for submitting requests or - if you don't have a local IP address, e.g. because you use DHCP and are disconnected from the network - you need to install the MS Loopback Adapter and use this adapter's IP address (see installing MS Loopback Adapter).

ASP.NET Development Webserver is locked down on "localhost"

If you are like me and like the ASP.NET Development Webserver (aka "Cassini" or "WebDev.WebServer") that comes with Visual Studio (and recently also with the Windows 7 SDK), you can't use it to capture traffic. Probably due to legal issues, the ASP.NET Development Webserver contains code that binds the TCP socket to the loopback address only and also contains a check that the requesting client resides on localhost. In this case you have 2 choices:

a) set up a web application in IIS - this is possible, but nasty when you want to do it in a build script

b) follow the instructions below to patch the WebDev.WebHost.dll on your system
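To see why the patch below works, here is roughly what the lock-down boils down to (a sketch of the idea only, not Cassini's actual source; the port number is arbitrary):

```csharp
using System.Net;
using System.Net.Sockets;

// Binding to IPAddress.Loopback makes the server reachable via 127.0.0.1 only;
// binding to IPAddress.Any would accept connections on every local interface -
// which is exactly the substitution the IL patch below performs.
var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.Bind(new IPEndPoint(IPAddress.Loopback, 8080));
socket.Listen(10);
```

The second half of the lock-down is the IsLocal check on incoming connections, which the patch short-circuits to always return true.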

Steps to patch WebDev.WebServer

Note: All steps below assume you have .NET 3.5 Service Pack 1 and the Windows 7 SDK installed. But with basic familiarity with IL code, you shouldn't have many problems applying these steps to e.g. the version of WebDev.WebServer that comes with VS 2005 (note that this version resides under the install root of VS!).

Step 1

Create a new directory "C:\patchcassini" that we will use to work and change to this directory on the commandline. You can create any directory you want, but I will refer to it as C:\patchcassini below.

Step 2

Copy the WebDev.WebHost.dll from the GAC to your working directory and disassemble it into an IL script. The following batch script shows how this is done:

set SDKHOME=C:\Program Files\Microsoft SDKs\Windows\v7.0

cd C:\patchcassini

rem refresh copy from GAC
copy /Y C:\Windows\assembly\GAC_32\WebDev.WebHost\\WebDev.WebHost.dll .

REM create backup
copy WebDev.WebHost.dll WebDev.WebHost.original.dll

REM disassemble
"%SDKHOME%\bin\ildasm.exe" WebDev.WebHost.original.dll /out=WebDev.WebHost.il

This will create 2 new files in your folder: WebDev.WebHost.il and WebDev.WebHost.res. Note that the script also creates a backup of the original assembly.

Step 3

Patch the generated IL script. There are 2 things you have to do:

1) Open the script in any editor and find & replace all occurrences of [System]System.Net.IPAddress::Loopback with [System]System.Net.IPAddress::Any
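For orientation, an affected line in the IL script looks something like the following (the exact IL offset will differ on your system; only the field reference at the end matters):

```
// before:
IL_0042:  ldsfld     class [System]System.Net.IPAddress [System]System.Net.IPAddress::Loopback
// after:
IL_0042:  ldsfld     class [System]System.Net.IPAddress [System]System.Net.IPAddress::Any
```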

2) Find the method body of Connection::get_IsLocal() (just search for this string); you will see something like

.method assembly hidebysig specialname
        instance bool get_IsLocal() cil managed
{
  // ... original implementation ...
} // end of method Connection::get_IsLocal

Replace the whole method body with the code as shown below:

.method assembly hidebysig specialname
        instance bool  get_IsLocal() cil managed
{
  .maxstack  2
  IL_0014:  ldc.i4.1
  IL_0015:  ret
} // end of method Connection::get_IsLocal
Step 4

You need to recompile the IL script and reinstall the patched dll into the GAC. Since we modified a signed dll, we also need to turn off signature validation for this dll. The batch script below shows how this is done:

set FRAMEWORKHOME=C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727
set SDKHOME=C:\Program Files\Microsoft SDKs\Windows\v7.0

REM recompile the patched IL script into WebDev.WebHost.dll
%FRAMEWORKHOME%\ilasm.exe /output=WebDev.WebHost.dll /quiet /resource=WebDev.WebHost.res /debug /dll WebDev.WebHost.il

rem Remove validation
"%SDKHOME%\bin\sn.exe" -Vr WebDev.WebHost.dll

rem Install into GAC, forcing overriding any existing assembly
"%SDKHOME%\bin\gacutil.exe" /i WebDev.WebHost.dll /f


After patching and reinstalling WebDev.WebHost.dll, hitting F5 in VS to launch your web application lets you access the webserver via any local IP address and thus allows tools like Fiddler to capture the traffic.

Hint: You can also make your life easier by registering the webserver in the context menu of any folder. Just use a registry script like the one below (the key name under Folder\shell is arbitrary):

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\Folder\shell\ASP.NET Webserver 2008]
@="ASP.NET Webserver 2008"

[HKEY_CLASSES_ROOT\Folder\shell\ASP.NET Webserver 2008\command]
@="\"C:\\Program Files\\Common Files\\Microsoft Shared\\DevServer\\9.0\\WebDev.WebServer.EXE\"  /port:81 /vpath:\"/\" /path:\"%1\""

This allows you to launch the webserver with a right-click on any folder:


Hope this helps!


Have a cup of coffee with a friend

A friend recently told me a great story about priorities in your life. Although not directly related to software development, I still think it might resonate with one or another of you, and I simply like this story very much. When things in your life seem almost too much to handle, when 24 hours in a day are just not enough, remember the mayonnaise jar and two cups of coffee.

A professor stood before his philosophy class and had some items in front of him. When the class began, wordlessly, he picked up the very large, empty mayonnaise jar and proceeded to fill it with golf balls.
When no more golf balls would fit in the jar he asked the students if the jar was full. They agreed it was full.

Then the professor picked up a box of pebbles and poured them into the jar. He shook the jar lightly. The pebbles rolled into open areas between the golf balls. He asked his students again if the jar was full. They agreed it was.

The professor next picked up a box of sand and poured it into the jar. The sand filled up everything else. He once more asked the students if the jar was full. The students responded with a unanimous "yes".

The professor then produced two cups of coffee from under the table. He poured the coffee into the jar, effectively filling the empty spaces between the sand. The students laughed.

"Now" said the professor, as the laughter subsided, "I want you to recognize that this jar represents your life. The Golf balls are the important things - God, Family, Children, Health, Friends, Passions - things that if everything else was lost and only they remained, your life would still be full. The pebbles are other things that matter - like your Job, House, Car, etc. The sand is everything else - the small stuff. If you put the sand in the jar first," he continued, "There is no room for the pebbles or the golf balls. The same is true for life. If you spend all your time and energy on the small stuff, you will never have room for the things that are important to you.

So pay attention to the things that are critical to your happiness. Play with your children. Take time to get that check-up. Take your partner out to dinner. Play another 18 holes. There will always be time to clean the house and fix the disposal. Take care of the golf balls first - the things that really matter. Set your priorities. The rest is just sand."

One of the students raised her hand and asked what the coffee represented. The professor smiled. "I am glad you asked. It just goes to show that no matter how full your life may seem, there's always room for a couple cups of coffee with friends."


Parallelism for the Masses?

I just returned from an interesting full-day seminar on parallelizing applications using Intel's tool suite, with the quite ambitious title "How to write bug free and performance optimized parallel (threaded) applications (Turning a serial into a parallel application)". A demonstration of the capabilities of Intel Parallel Studio was quite impressive.

Clever Tools

Let Parallel Studio analyze your C/C++ code to detect performance hotspots and parallelizable code fragments. Then use OpenMP and add #pragma directives to actually "annotate" your code with parallelization hints for the compiler:

#pragma omp parallel for reduction(+: sum) 
for (i = 0; i < 1000000; i++) { 
  sum = CalcSum(i, sum); 
}
Pretty neat, isn't it? I found it quite surprising how far one can already get nowadays just by using clever tools for (semi-)automatic parallelization of applications. Still: can you prove the above code does not cause semantic errors? Remove the "reduction(+: sum)" clause from the pragma and you will get random results.
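For comparison, a shared-state-free version of the same reduction using the .NET parallel programming libraries might look like this (a sketch; Parallel.For's thread-local overload plays the role of the OpenMP reduction clause, and CalcSum is assumed to be a simple addition here):

```csharp
using System.Threading;
using System.Threading.Tasks;

class ReductionDemo
{
    static void Main()
    {
        long sum = 0;
        Parallel.For(0, 1000000,
            () => 0L,                                  // per-thread initial value
            (i, state, local) => local + i,            // accumulate into thread-local state only
            local => Interlocked.Add(ref sum, local)); // safely combine the partial sums once per thread
        System.Console.WriteLine(sum);                 // 499999500000
    }
}
```

Note that the same trap exists here: accumulate directly into the shared `sum` inside the loop body and you get exactly the kind of race the text describes.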

Required: a radically new approach

Introducing technologies like OpenMP (or, to mention a more recent one, the new .NET parallel programming libraries) doesn't help much to really improve the situation. It is like fixing your bathroom with duct tape: it works, but it is not a sustainable solution. In my opinion, going parallel - and thus making your code scalable, be it on multi-core or as a service in the cloud - requires more than tools. Everyone who has written a piece of multithreaded code has also already had to debug a concurrency issue. It doesn't matter how good you are, or how much or little experience you have.

Actors enter the future stage

What does it take to write more scalable and parallelizable code? The problem with parallel code is shared state. Message passing avoids this problem, and the Actor programming model takes it even further. Systems like Erlang have a long history of successfully applying this totally different programming paradigm (at least totally different from today's mainstream, of course). Multicore processors and cloud computing have revived this paradigm by increasing the need for parallel programming support. Microsoft's Axum and attempts like the "XCoordination Application Space" try to bring this paradigm to the masses on the .NET platform; Scala will likely be the next success on the Java platform (imho ironically by replacing the Java language on the JVM). I dare to predict that a lot of our programming future will be actor-based. And the platform that makes implementing actors most convenient has good chances to become the next mainstream.

What are your thoughts on the future of programming? How will we handle challenges like debugging or orchestration of actor communication in this new world? Keen to hear your opinions and/or experiences!


What Software Architects can learn from History - a conceptual look at SOA and EDA

This blog post was inspired by the article "SOA through the looking glass" by Udi Dahan, recently published in the MSDN Architecture Journal, and a subsequent discussion with colleagues. One thing that quickly popped up was the different understandings of the terms SOA and EDA (we really should care about our own ubiquitous engineering language first ...). Here are my thoughts on services, events and why I think those concepts allow us to build better solutions, thus better meeting business needs, which is - after all - what we build all applications for.

How Lou Gerstner got IBM to dance

I do remember times when a big monolithic enterprise called IBM acted like a centralized mainframe in the market all over the world. Due to its organizational structure the company wasn't able to respond to new and changing market challenges within a reasonable timeframe. So why does IBM still exist? Because in the early 90s Lou Gerstner decided to restructure the whole company and split it into lots of small, autonomous profit centers ("How Lou Gerstner got IBM to dance"). Those profit centers act like small companies embedded within a large company, being responsible for their own revenue as well as for finding the right partners to connect with, both inside and outside the enterprise, while being able to leverage synergy effects from being part of a large enterprise. As a matter of fact, companies like Amazon and Google are such effective market players because they are organized exactly this way - service oriented. Service orientation is a quite natural concept describing a set of autonomous actors collaborating with each other, exchanging information by sending messages and reacting to external stimuli, also known as events. In his article, Udi Dahan correctly points out that service orientation and events are just two sides of the same coin.

The world doesn't block because you have a coffee

In contrast to an all-too-common misconception, I do not see SOA and EDA as technologies. In fact both describe organizational and structural concepts that apply to business organizations in the first place. Thus I see SOA and EDA as a way to translate those natural concepts onto the way we build software. Instead of relying on leaky abstractions like synchronous remote method calls (crossing process boundaries may always fail for hundreds of possible reasons) and distributed transactions (what if something goes wrong during the commit phase??), software engineers need to take a good look at how the world they are trying to project into bits and bytes really works. The real world doesn't use 2-phase-commit. Instead it uses compensation actions (see Greg Hohpe's award-winning article "Starbucks does not use 2-phase-commit"). The real world also seldom uses synchronous commands. Instead we send messages and react to events when they occur. You don't get blocked when you receive an email - instead it is up to you to decide when to come back and read it later. The sender is not blocked from her work either, waiting for a response from you. The real world acts sometimes sequentially (you receive the email before you can answer it), but seldom synchronously. You might have forgotten to pay a bill? You will receive a friendly reminder letter, and if that overlaps with you already having paid the bill in the meantime, it will state that you can "safely ignore this letter" in this case. How does the real world stay agile and responsive? It uses small groups of people concentrating on a particular responsibility instead of large monolithic blocks responsible for everything, effectively achieving nothing.

Business Partners do not share database schemata

What is important about the word "autonomous"? I see its importance in the evolvability and flexibility of a system. Companies do not simply change their interfaces to partners they collaborate with. And two collaborating companies seldom use a shared database; they will even use different fields for their "customer" data structure. Here is where I see the "Bounded Contexts" first described by Eric Evans coming into play. Instead of falling into the trap of the anti-patterns "shared database" and "shared data structure", where one single data schema/structure tries to please several different needs, split e.g. editing information like your personal profile on Facebook from searching the whole user database for a particular name. Those are different tasks, requiring data structures and processes optimized for that particular task. Sharing a database does not work in this case, not only because of the size of Facebook's user database, but also because changes to either of the tasks would become impossible. Instead of achieving simplicity in the database schema, one introduces complexity in the evolvability of a system, because shared structures create strong dependencies between components ("The 'One Truth Above All Else' Anti-Pattern").

Cloud Computing requires us to take a new look

Traditional software development strategies hit the wall even faster when it comes to developing solutions that run in the cloud. As Gianpaolo Carraro describes in his blog post "Head in the Cloud, Feet on the Ground" (and in MSDN Architecture Journal #17), business demands will sooner or later likely force us to move at least parts of the systems we build into a cloud environment. How do you build a system where service instances may come and go as needed? Where services simply move from one box to another without prior notice, controlled by the cloud operating system? Aside from other important implications, simple conventional synchronous method calls do not work anymore in such an environment. Different strategies are needed, and service orientation, asynchronously sent messages and reacting to events are part of the solution, leading to a system of collaborating services talking to each other in conversations.

It's always about balance

Of course, building software following those principles also means that you pull all those previously hidden complexities to the surface and into your application domain, instead of leaving them to the infrastructure layer, where they stay buried until something goes wrong and bites you from behind. Thus one has to make a careful decision: on the one side, easy-to-write synchronous remote calls, accepting the (hidden) risks of a lack of scalability and of blocked callers piling up and eating webserver resources when the infrastructure breaks. On the other side, making these facts explicit in the organizational structure of an application system, partitioning functionality into several autonomous services that collaborate by sending messages and reacting to events. As usual, it is a matter of compromise and finding the right balance. You just need to be aware of the risks you accept when synchronously calling a webservice method using SOAP over HTTP in exchange for only having to write a simple method call, versus the explicit and therefore more complex communication patterns between application components, which provide better flexibility and scalability due to higher decoupling.

We've already been there

Those ideas are not new. IBM's restructuring is just one example taken from reality. Even within the software engineering industry those ideas have been around for quite a while, and most of us have already used them. Probably not everyone is writing programs in Erlang, which has quite successfully applied those ideas for decades. But almost everyone has written GUI applications, haven't we? Well, there is this thing called the "Window Message Queue" and a technique called "reacting to user events". It is time to dust off this knowledge. Although IT history is rather short compared to other industries, architects should learn from it.


Spring for .NET 1.3.0 Release Candidate available

Finally we made it: The next release of Spring for .NET is available for preview and brings a couple of new things as well as over 100 fixed bugs to the table. Grab the new release as usual from our website http://www.springframework.net and give it a test run. The final release is currently scheduled for the first week in September. Below I will give you a short introduction to the new features.

New Features

NVelocity Support

Erez Mazor joined the team in June and brought a nice integration of the NVelocity template library (part of the Castle project) with him. Let's assume you need to send a confirmation email to a customer after processing a T-Shirt order. Most of this email will come from a template; only a few variables will be individual for each customer. Here's what such a template looks like using the Velocity Template Language: 

Dear#if ($sex == "F") Ms#else Mr#end $recipient,
We are happy to withdraw $1Mio from your account

Here's an example on how to use Spring.NET's integration to send an Email from your application. The configuration causes the NVelocityFactory to load templates from the NVelocityDemo assembly's "NVelocityDemo" namespace:

<objects xmlns="http://www.springframework.net" xmlns:nv="http://www.springframework.net/nvelocity">
  <nv:engine id="velocityEngine">
      <nv:spring uri="assembly://NVelocityDemo/NVelocityDemo/"/>
  </nv:engine>
  <object id="emailSendService" type="NVelocityDemo.EmailSendService">
    <property name="VelocityEngine" ref="velocityEngine" />
  </object>
</objects>

Here's the C# Code of my EmailSendService class:

public class EmailSendService
{
  public VelocityEngine VelocityEngine { get; private set; }

  public void SendEmail(string recipientName, string sex, string emailAddress, string subject)
  {
    Hashtable model = new Hashtable();
    model["email"] = emailAddress;
    model["recipient"] = recipientName;
    model["sex"] = sex;
    IContext modelContext = new VelocityContext(model);
    StringWriter sw = new StringWriter();
    this.VelocityEngine.MergeTemplate("confirmationEmail.vm", Encoding.UTF8.HeaderName, modelContext, sw);

    string emailContent = sw.ToString();

    // send mail
    Console.WriteLine("Sending email to {0} with subject '{1}':\n{2}", emailAddress, subject, emailContent);
  }
}

Running this example will inform our customer about our pricing for T-Shirts:

Sending email to foo@world.com with subject 'Thank you!':
Dear Mr Smith,

we are happy to withdraw $1Mio from your account

TIBCO Enterprise Message Service

Complementing our support for MSMQ and ActiveMQ, there is now also support for TIBCO's EMS. If you are already familiar with Spring's ActiveMQ support, you will quickly feel comfortable. To get started, check out our reference documentation and the upcoming blog post at Mark Pollack's Blog.

MS Test

Some of us are forced to write unit tests using Microsoft's own testing framework. To support them, Spring.NET now also integrates support for writing unit and integration tests, similar to what is already available for NUnit. You can find the supporting classes in the Spring.Testing.Microsoft assembly.

Noteworthy Improvements


In response to many requests, we upgraded our NUnit testing environment to the latest NUnit 2.5.1. There is also new support for writing database integration tests: use the SimpleAdoTestUtils class to execute arbitrary SQL scripts to set up/tear down your database. An example is Spring's NHibernate integration tests. Here is the SQL script to recreate our integration database:

IF EXISTS (SELECT name FROM sys.databases WHERE name = N'Spring')
    DROP DATABASE [Spring]
GO

IF EXISTS (SELECT * FROM sys.server_principals WHERE name = N'springqa2')
    DROP LOGIN [springqa2]
GO

CREATE DATABASE [Spring]
GO

USE Spring
CREATE USER [springqa2] FOR LOGIN [springqa2] WITH DEFAULT_SCHEMA=[dbo]
EXEC sp_addrolemember 'db_owner', 'springqa2'

Now, leveraging NUnit's [SetUpFixture] feature, we can easily execute this and other scripts once before the test fixtures run:

[SetUpFixture]
public class Setup
{
  private const string DBConnection = "Data Source=SPRINGQA;Database=$DATABASE$;Trusted_Connection=False;User ID=springqa;Password=springqa";
  private readonly IResourceLoader resourceLoader = new ConfigurableResourceLoader();

  private IResource GetResource(object instance, string name)
  {
    string resourcePath = TestResourceLoader.GetAssemblyResourceUri(instance, name);
    return resourceLoader.GetResource(resourcePath);
  }

  [SetUp]
  public void InstallMSSQLDatabase()
  {
    IDbProvider dbProvider = DbProviderFactory.GetDbProvider("SqlServer-2.0");
    AdoTemplate ado = new AdoTemplate(dbProvider);

    // (re-)create database(s)
    dbProvider.ConnectionString = DBConnection.Replace("$DATABASE$", "master");
    SimpleAdoTestUtils.ExecuteSqlScript(ado, GetResource(this, "RecreateDatabases.sql"));

    // create tables
    dbProvider.ConnectionString = DBConnection.Replace("$DATABASE$", "Spring");
    SimpleAdoTestUtils.ExecuteSqlScript(ado, GetResource(this, "Data.NHibernate/creditdebit.sql"));
    SimpleAdoTestUtils.ExecuteSqlScript(ado, GetResource(this, "Data.NHibernate/testObject.sql"));
  }
}
Using the new NVelocity integration, you could even easily parameterize your SQL scripts for different environments.
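As a sketch of what that could look like (hypothetical file and variable names), the script to recreate the database could become a Velocity template where the database name is merged in per environment:

```sql
-- RecreateDatabases.sql.vm (hypothetical): $databaseName is replaced by NVelocity
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'$databaseName')
    DROP DATABASE [$databaseName]
GO
CREATE DATABASE [$databaseName]
GO
```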

Configuration Error Handling

Previously, configuration errors were sometimes hard to read. I put some effort into reducing this pain and making the error messages much more explicit. You now get the exact filename and line number of the offending object definition, along with a more detailed explanation of what is causing the problem.

AOP Performance

Not only has the runtime performance of AOP proxies improved, but *creating* such singleton proxies at startup time could also cause issues. You now get a number of different auto-proxy processor implementations to choose from, to better suit your exact needs.

Sometimes using <tx:attribute-driven /> or <aop:config> could cause issues when an additional XXXAutoProxyCreator was defined in your configuration. This is rooted in the fact that those two configuration elements silently registered their own DefaultAutoProxyCreator under the hood. This is now taken care of, and it is guaranteed that infrastructure AOP elements will no longer interfere with user-defined ones.

Feedback welcome

The final release is about 3-4 weeks away, so please grab a copy of the release candidate and give it a whirl - we'd be happy to get your feedback!


Common Logging 2.0 for .NET Released

Last week I released version 2.0 of Common.Logging for the .NET platform. You can get the distribution as well as online API and user reference documentation from the project website. For users of previous versions there is also a section about upgrading to 2.0.

What is Common.Logging?

Similar to Java's Apache commons-logging, Common.Logging is an ultra-thin bridge between different .NET logging implementations, based on original work done by the iBATIS.NET team. A framework or library using Common.Logging can be used with any logging implementation at runtime and thus doesn't lock users to a specific framework - or, worse, leave them struggling with multiple libraries that each use a different logging implementation.


Common.Logging comes with adapters for all popular logging implementations like log4net and Enterprise Library Logging.

Bridging between logging implementations

You might run into the problem that different libraries used by your application use different logging implementations. Common.Logging allows you to route all of them into any logging system of your choice:


Please see the reference documentation for an example of how to configure such a bridging scenario.

What's new in 2.0?

Several extensions and improvements were made, namely:

  • Dropped .NET 1.1 Support
  • Added support for Enterprise Library 4.1 Logging
  • Extended and improved ILog Interface
  • Convenience LogManager.GetCurrentClassLogger() Method
  • Bi-directional log event routing between logging implementations (see bridging above)
  • Improved performance

I already blogged about comparing Common.Logging performance to System.Diagnostics.Trace. According to a thread on stackoverflow, log4net seems even slower than System.Diagnostics.Trace.

Configuring Common.Logging

For examples of how to configure and use Common.Logging and bridge different logging implementations, please see the user reference guide. The API documentation contains configuration and usage examples for each adapter implementation. Here is an example of configuring Common.Logging to write messages to the console:

<configuration>
  <configSections>
    <sectionGroup name="common">
      <section name="logging"
               type="Common.Logging.ConfigurationSectionHandler, Common.Logging"
               requirePermission="false" />
    </sectionGroup>
  </configSections>
  <common>
    <logging>
      <factoryAdapter type="Common.Logging.Simple.ConsoleOutLoggerFactoryAdapter, Common.Logging">
        <arg key="level" value="ALL" />
      </factoryAdapter>
    </logging>
  </common>
</configuration>

Using Common.Logging

Using Common.Logging is as simple as shown below:

ILog log = LogManager.GetCurrentClassLogger();
log.DebugFormat("Hi {0}", "dude");

Hint: when using .NET 3.5, you can leverage lambda syntax for logging to avoid any performance penalties:

log.Debug( m=>m("value= {0}", obj.ExpensiveToCalculateInformation()) );

This is equivalent to writing

if (log.IsDebugEnabled)
 log.DebugFormat("value={0}", obj.ExpensiveToCalculateInformation());

and ensures that you don't have to pay for calculating the debug information message in case level "Debug" is disabled.
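To illustrate why this works, here is a small self-contained sketch (not the actual Common.Logging implementation; SketchLogger and its delegate are made-up names): the lambda body, and therefore the expensive argument, is only evaluated when the level is enabled.

```csharp
using System;

// Hypothetical sketch, NOT the real Common.Logging code: it only illustrates
// why the lambda-based overload avoids evaluating expensive arguments
// when the log level is disabled.
class SketchLogger
{
    public delegate void FormatMessageHandler(string format, params object[] args);

    public bool IsDebugEnabled;

    public void Debug(Action<FormatMessageHandler> formatMessageCallback)
    {
        if (!IsDebugEnabled)
            return; // lambda never invoked => its arguments are never computed

        formatMessageCallback((format, args) =>
            Console.WriteLine("DEBUG: " + string.Format(format, args)));
    }
}

class Demo
{
    static int expensiveCalls;

    static string ExpensiveToCalculateInformation()
    {
        expensiveCalls++;
        return "42";
    }

    static void Main()
    {
        var log = new SketchLogger { IsDebugEnabled = false };

        log.Debug(m => m("value={0}", ExpensiveToCalculateInformation()));
        Console.WriteLine("expensive calls while disabled: " + expensiveCalls);

        log.IsDebugEnabled = true;
        log.Debug(m => m("value={0}", ExpensiveToCalculateInformation()));
        Console.WriteLine("expensive calls while enabled: " + expensiveCalls);
    }
}
```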

Further Roadmap

Several features and improvements are planned for the next release, some of them already in the works.

Of course I would like to invite you to submit any feature request, bug reports or other improvement suggestions to the project's issue tracker.

Happy logging!


IoC != Resolve<>()

Thanks to Lars Corneliussen, I just ran across the new Horn Package Manager, which promises Maven-like automatic resolution of package dependencies, allowing you to build a whole OSS stack from scratch.

Not directly related to horn itself, but in horn's architecture introduction I read:


Talking about Inversion of Control and showing a ServiceLocator - even worse: a static one (!) - piece of code always makes me very nervous. Imho, SL is one of the worst patterns ever because it claims to be IoC but still couples your code to the container - or at least to the SL. I always wondered why people keep asking for extended ServiceLocator-style support in Spring.NET - such bad examples are one of the major reasons, I guess.

There has also been quite some talk about CommonServiceLocator as a means to make the container interchangeable. Mark Pollack, Ayende and others (like Rinat Abdullin) posted statements about what CommonServiceLocator is for and what it is not.

Dear developers out there: IoC is about removing dependencies from your code, not replacing them with new ones!

P.S.: Nonetheless, horn sounds interesting and I will definitely check it out ;-)


NDoc is dead - long live NDoc3

While working on the next release of the Common.Logging library, I noticed that we didn't have any API documentation yet. On Spring.NET we use DocumentX (alas, in the outdated 2005 version) to generate the SDK documentation, so I tried to set this up for Common.Logging. I quickly faced some issues: first, we do have a free Open Source project license for Spring.NET, but I'm not sure if I can reuse it for another framework. Second, you can't run DocX from the command line (except when you explicitly ask Innovasys for it). This is a no-go for continuous integration.

Any Open Source Alternatives?

Looking for alternatives - as an OSS guy I of course prefer an OSS tool - I couldn't find much. Having been burned a while ago by Sandcastle and being very irritated about MSFT's seriousness regarding that tool, the only tool I could find that came at least close to what I expected was NDoc3. My expectations were low: I want to be able to integrate it into my build script, and the documentation process should be tightly integrated with my daily development work.

Getting involved

NDoc3 is the successor of my formerly beloved, but now dead, NDoc tool. Thanks to Kim Christensen picking up the project, NDoc3 is capable of handling new .NET 2.0 features like generics, asymmetric property accessibility etc. For those familiar with NDoc, there won't be any surprises; guides for using NDoc can be found here, for example. Not yet being 100% satisfied with it, I did what OSS is all about: I got involved in the further development of NDoc3.

What's new?

After a week of hard work, I refactored the codebase to allow for better future development as well as better testing. In addition, I introduced a new feature: MergeAssemblies. Similar to DocX, it is now possible to generate an extra hierarchy level for assemblies. By default the flag is on, but you can turn it off:

This results in a navigation hierarchy created as shown below:


This also allows for yet another nice feature: the assembly overview page. It provides a complete overview of your assembly's references, dependencies and all contained types. Here is an example of how this looks:


To allow for automatically generating the assembly summary text, I introduced a new AssemblyDoc class (similar to the NamespaceDoc feature). Just add a class named AssemblyDoc to the global namespace in your assembly:

/// <summary>
/// This assembly contains the adapter to the
/// NLog library
/// </summary>
internal sealed class AssemblyDoc
{
    // serves as assembly summary for NDoc3 (http://ndoc3.sourceforge.net)
    private AssemblyDoc()
    { }
}

and remember to activate the "UseNamespaceDocSummaries" setting! Hopefully you will see a full-fledged example project soon, when we release Common.Logging 2.0 in the next couple of days.

Cool - I want it for my own projects!

First of all, all these features only work with the MSDN documenter so far. Since there is unfortunately no nightly build set up yet (any volunteers out there?), you need to grab the code from SVN and simply execute the NAnt script in the root folder of the trunk - see the readme.txt file for more info. Note that you need a recent nightly build of NAnt that supports .NET 3.5. Alternatively, you can open the solution file and build the solution using VS2008.

... and keep the feedback coming

For any suggestions, feature requests and bug reports, please use the SourceForge tracker or subscribe to one of our mailing lists.

Hopefully we will again see more projects in the .NET world being well documented.


just for fun

A sun studio seen yesterday. Funny for geeks only, I guess.



Thoughts on System.Diagnostics Trace vs. Common.Logging

While working on the final bits for Common.Logging 2.0, I looked a bit closer into .NET's trace model and tried to compare it to Common.Logging. I must admit that, being a long-time user of first log4net and then Common.Logging, I never cared much about <system.diagnostics>. Revisiting assumptions from time to time is a good habit, thus I dived into .NET tracing. Here's what I found…

Configure and use <system.diagnostics>

The .NET trace system basically works by using TraceSources (= "loggers" in log4net) to write events to a list of TraceListeners (= "appenders") that decide whether to log a message by consulting a list of TraceFilters. This could be useful, alas there is no "batch" configuration possible, meaning you need to configure this chain for each TraceSource. Here's a minimal example of what one needs to do:

<system.diagnostics>
    <sharedListeners>
        <add name="myListener"
             type="MyTestTraceListener, MyAssembly" />
    </sharedListeners>
    <sources>
        <source name="mySource" switchValue="All">
            <listeners>
                <add name="myListener" />
            </listeners>
        </source>
    </sources>
</system.diagnostics>

Your code will then look like this:

class MyClass
{
    readonly TraceSource trace = new TraceSource("mySource");

    void Foo(object someArg)
    {
        try
        {
            Bar(someArg); // do something that may fail
        }
        catch (Exception ex)
        {
            trace.TraceEvent(TraceEventType.Information, -1,
                "error arg={0}:{1}", someArg, ex);
        }
    }

    void Bar(object someArg) { /* do something */ }
}

A nice tutorial on tracing is given in this blog, some criticism is found here.

Performance Considerations

Of course you don't want to pay much for your logging infrastructure in terms of performance. Fortunately, tracing doesn't cost you too much. I wrote some tests in Common.Logging to compare both and found Common.Logging twice as fast as Trace. Note that we are talking about 2s vs. 4s for passing 100,000,000 log entries through the chain. I do not think this is an issue for anyone except applications generating an insane number of log entries.

Things get interesting when evaluating a log message becomes expensive. Take for example the line from the example above:

  trace.TraceEvent(TraceEventType.Information, -1,
"error arg={0}:{1}", someArg, ex);

Somewhere within the tracing framework, the passed format string and arguments must be evaluated and string.Format() gets called. But you do not want this to happen unless that message *really* gets logged – string.Format() calls both someArg.ToString() and ex.ToString(), which might be very expensive to calculate!

To be safe, you need to guard all your calls:

if (trace.Switch.ShouldTrace(TraceEventType.Information))
{
    trace.TraceEvent(TraceEventType.Information, -1,
        "error arg={0}:{1}", someArg, ex);
}

Unfortunately this doesn't help in all cases. Most often you want all messages from all (or at least most) modules, but only the informational ones (not the verbose ones). You can configure this using a filter:

<system.diagnostics>
    <sharedListeners>
        <add name="myListener"
             type="MyTestTraceListener, MyAssembly">
            <filter type="System.Diagnostics.EventTypeFilter"
                    initializeData="Information" />
        </add>
    </sharedListeners>
    <sources>
        <source name="mySource" switchValue="All">
            <listeners>
                <add name="myListener" />
            </listeners>
        </source>
    </sources>
</system.diagnostics>

Using the configuration above, all messages are passed to "myListener", whose filter will drop all messages more verbose than the Information level. Unfortunately there is another issue: the framework automatically adds a DefaultTraceListener to each configured <source> that will happily log all messages. To avoid this you need to write:

<source name="mySource" switchValue="All">
    <listeners>
        <!-- prevent DefaultTraceListener from being added -->
        <clear />
        <add name="myListener" />
    </listeners>
</source>

I wrote some performance tests simulating the case where we only want to log warnings: messages are emitted at the "Info" level, but the configuration restricts logging to "Warning"s. On my box this resulted in:

Time:00:00:01.9640000 - log.InfoFormat + NoOpLogger
Time:00:00:04.8410000 - traceSource.TraceEvent + unconfigured TraceSource
Time:00:00:03.6140000 - log.InfoFormat
Time:00:00:12.7380000 - traceSource.TraceEvent

As you can see, tracing gets significantly slower using the configuration above. Frankly, I could not figure out what causes this. Things become unacceptable as soon as you forget to remove the DefaultTraceListener:

Time:00:00:00.6670000 - log.InfoFormat + NoOpLogger 
Time:00:00:02.2950000 - traceSource.TraceEvent + unconfigured TraceSource 
Time:00:00:02.2250000 - log.InfoFormat 
Time:01:00:23.8650000 - traceSource.TraceEvent <= not kidding here!

For this reason, Common.Logging introduced a new signature for logging leveraging the power of lambda expressions:

  log.Info( m=>m("some logger info {0}", (object)myObj) );

Using this syntax you will always be on the safe side: Common.Logging takes care that the message is only evaluated in case it really gets logged.

What bothers me most is the pain of configuring and using the trace framework. Here are the major pain points I found with the built-in tracing:

  • It is possible to share listeners, but you need to configure the list of listeners for each TraceSource.
  • It is impossible to configure a default listener to be used by all TraceSources.
  • There is no dedicated support for logging exception objects.
  • One can instruct a TraceListener to append the current callstack to the message, but this will include the System.Diagnostics stack frames(!)
  • There is no easy way to "batch" configure log levels for multiple sources (like log4net's logger hierarchies); each source must be explicitly configured with its level and listeners.
  • As soon as one configures the listeners for a TraceSource, a DefaultTraceListener is also added to the list, causing each message to be logged via OutputDebugString as well (avoid this by adding a <clear/> element, as shown in the sample above).
  • It is incredibly verbose to use, both for tracing and for configuring.

If one insists on avoiding a dependency on Common.Logging or log4net, go with tracing – you’ve been warned.

For all others I recommend grabbing Common.Logging. It allows you to defer the decision of which logging framework to use until the moment you deploy your application - simply plug in any logging system you want. If you need to turn off logging altogether, configure the NoOpLoggerFactoryAdapter and reduce the cost of your log statements to almost zero - guaranteed!
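As a sketch, turning logging off is just a matter of swapping the factory adapter in the <common>/<logging> section (same layout as the console adapter configuration shown earlier):

```xml
<common>
  <logging>
    <!-- NoOpLoggerFactoryAdapter discards all log messages -->
    <factoryAdapter type="Common.Logging.Simple.NoOpLoggerFactoryAdapter, Common.Logging" />
  </logging>
</common>
```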