David Ross's Blog – Random thoughts of a coder

Opensource .NET Exchange III Lineup

28. May 2009 23:13 by David

Gojko has announced the lineup for the 3rd Open Source .NET Exchange.

The speaker list is as follows:

  • Ian Cooper: A First Look at Boo
  • Dylan Beattie: Managing Websites with Web Platform Installer and msdeploy
  • Scott Cowan: Spark View Engine
  • David: Introduction to MPI.NET
  • Gojko Adzic: Acceptance testing in English with Concordion .NET
  • Sebastien Lambla: What OpenRasta does other frameworks can’t
  • Phil Trelford: F# Units of Measure

This time I will be introducing MPI.NET and covering many of the topics that I have been blogging about over the last few weeks.  I’ve decided to do all the examples/slides using C# as opposed to F#, which I will continue to cover in my blog posts.

Personally I am looking forward to seeing the session on Boo and the improvements Sebastien has made to OpenRasta.

MPI.NET – Distributed Computations with the Message Passing Interface in F# – Part 2

17. May 2009 12:17 by David

In a previous post I described how it is possible to calculate PI using the Monte Carlo method.

Using the same technique it is possible to price financial products such as insurance.  Customers pay a premium for health cover.  If they get sick the insurance company is then obligated to pay any medical costs that “may” occur.  This means that the customer has a fixed “known” upfront cost, while the insurance company’s possible costs range from almost nothing to that of an extremely expensive medical bill resulting from a surgical procedure.  It is possible to run thousands of “what if” scenarios and use the results to estimate the amount of capital that is needed to cover the costs for all of the customers at the organisation.
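
To make that concrete, here is a minimal, hypothetical F# sketch of that idea.  The 10% claim probability, the 50,000 maximum claim and the 99% confidence level are made-up numbers purely for illustration; only the overall shape (simulate many scenarios, then keep enough capital to cover most of them) reflects the technique described above.

open System

// Toy claims model: each customer has a 10% chance of making a claim,
// and a claim costs anywhere up to 50,000 (illustrative numbers only).
let simulateTotalClaims (r : Random) customers =
    let rec loop remaining total =
        if remaining = 0 then total
        else
            let claim = if r.NextDouble() < 0.1 then r.NextDouble() * 50000.0 else 0.0
            loop (remaining - 1) (total + claim)
    loop customers 0.0

// Run many "what if" scenarios and hold enough capital to cover 99% of them.
let requiredCapital scenarios customers =
    let r = new Random()
    let outcomes =
        [| for _ in 1 .. scenarios -> simulateTotalClaims r customers |]
        |> Array.sort
    outcomes.[int (0.99 * float (scenarios - 1))]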

Another example is pricing an Option, which is a simple form of insurance used to lock in the price of a product that a company wants to buy or sell at a future date. 

Buying an option – going Long:

The buyer has no obligation to exercise the Option; however, the seller of the Option is legally obligated to provide the product at the price indicated in the contract.  The buyer pays a premium to the seller.

  • Call – Buyer assumes the price is going to rise and wants to lock in the price – an airline buys an option to lock in the price of fuel for its fleet as it fears that prices will rise in 6 months.  If the price instead drops in six months the airline simply buys oil from the market at the cheaper price.
  • Put – Buyer assumes that the price is going to fall and wants to lock in the price – an oil producer fears that prices will be lower in 6 months as the economy is slowing.  If the oil price instead increases the producer sells the oil at the higher market price.

Selling an option – going Short:

The seller is obligated to provide the product at the price indicated in the contract if the buyer exercises the contract.  An exercised contract is a loss for the seller; if the contract is not exercised, the seller’s profit is the premium.

  • Call – Seller assumes the price is going to fall
  • Put – Seller assumes that the price is going to rise

Simulating a European Option

The Financial Numerical Recipes in C++ web site includes a number of tutorials using C++ to calculate Bond Prices, Option Prices etc.  I have recently been porting the code snippets to F# to gain more familiarity with the language.

The C++ code to simulate the price of an Option is here

The first part of the simulation is to randomly generate what the price will be in the future.  The value has an equal probability of being higher or lower than the starting price.  The inputs to the simulation are:

  • Current Value (S)
  • Interest Rate (r) – Since we are simulating a price in the future we want to convert that price back into today’s money
  • Time (time) – Duration to the contract’s exercise date
  • Volatility (sigma) – The magnitude of the random movements, at each point in time, that the price is expected to have – a best guess based on historical data, and the most problematic and difficult part of pricing Options. 
    • A low volatility implies that the final price WILL NOT have diverged far from the current value S
    • A high volatility implies that the final price WILL have diverged far from the current value S
  • Type of Walk – Stock prices are assumed to follow a “lognormal walk”, which means that each price movement is a percentage change (see the small sketch after this list), e.g. with a 10% step size:
    • S1 = S * (1 + 0.1 * randomChoiceOf(+1 or –1))
    • S2 = S1 * (1 + 0.1 * randomChoiceOf(+1 or –1))

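A toy sketch of the percentage-change walk described in the last bullet (this is not part of the pricing code that follows; the 10% step size and the fifty-fifty up/down choice are just illustrative):

open System

// Toy percentage-change walk: at each step the price moves up or down by 10%.
let rec randomWalk (r : Random) price steps =
    if steps = 0 then price
    else
        let direction = if r.NextDouble() < 0.5 then 1.0 else -1.0
        randomWalk r (price * (1.0 + 0.1 * direction)) (steps - 1)

// e.g. randomWalk (new Random()) 100.0 10
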
The C++ simulation uses a library to generate a random normal distribution, which in turn is used to create the lognormal walk.  The System.Random object in .NET, meanwhile, provides random numbers that are spread uniformly across the range 0 to 1.  A random normal distribution generates values that follow a Gaussian distribution, with the mean being zero and the shape looking like a bell curve.   While it is easy to create a method that will generate a normal distribution, the excellent Math.NET Project provides this capability already.

let logNormalRandom = new MathNet.Numerics.Distributions.NormalDistribution()
let next = logNormalRandom.NextDouble()
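
For completeness, “easy to create a method” might look something like the Box–Muller transform below, which turns two uniform System.Random samples into one normally distributed value.  This is a hand-rolled sketch only; the Math.NET call above is what the rest of the post uses.

open System

// Box-Muller transform: two uniform samples in (0, 1] become one sample
// from a standard normal distribution (mean 0, standard deviation 1).
let normalSample (r : Random) =
    let u1 = 1.0 - r.NextDouble()   // keep u1 away from 0 so Log is defined
    let u2 = r.NextDouble()
    Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2)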

This leads to the following ported code:

let R = (r - (0.5 * Math.Pow(sigma, 2.0))) * time
let SD = sigma * Math.Sqrt(time)
let FuturePrice = S * Math.Exp(R + SD * logNormalRandom.NextDouble())

The code above returns what the future price will be for a particular simulation.  Following the explanation of Options above, the buyer only exercises the option if it will make a profit over buying the product directly from the market (In the Money); programmatically this is as follows:

let europe_call_payoff price exercise = Math.Max(0.0, price - exercise)
let europe_put_payoff price exercise = Math.Max(0.0, exercise - price)

The final code is here

#light

open System

// Financial Numerical Recipes in C++
// http://finance-old.bi.no/~bernt/gcc_prog/recipes/recipes/recipes.html
let europe_call_payoff price exercise = Math.Max(0.0, price - exercise)
let europe_put_payoff price exercise = Math.Max(0.0, exercise - price)

let option_price_call_european S X r sigma time payoff sims =
    let logNormalRandom = new MathNet.Numerics.Distributions.NormalDistribution()

    let R = (r - (0.5 * Math.Pow(sigma, 2.0))) * time
    let SD = sigma * Math.Sqrt(time)

    let option_price_simulation() =
        let S_T = S * Math.Exp(R + SD * logNormalRandom.NextDouble())
        payoff S_T X

    let rec futureValueIter i value =
        match i with
        | 0 -> value + option_price_simulation()
        | _ -> futureValueIter (i-1) (option_price_simulation() + value)

    let futureValue = futureValueIter sims 0.0
    System.Math.Exp(-r * time) * (futureValue / (double)sims)

And the test

#light

open MbUnit.Framework
open OptionPricingModel

[<Test>]
let simulate_call_option() =
    let result = option_price_call_european 100.0 100.0 0.1 0.25 1.0 europe_call_payoff 500000
    Assert.AreApproximatelyEqual(14.995, result, 0.03)

Once again the simulation is “close” to the correct value, in this case within 3%.  The C++ code shows techniques to improve the accuracy of the simulation, which I will do in a future post and at the same time host the simulation within MPI.NET.

MPI.NET – Distributed Computations with the Message Passing Interface in F# – Part 1

6. May 2009 18:39 by David

Supercomputing has officially reached the desktop: it is now possible to buy a Linux or Microsoft based cluster for a few thousand dollars.  Further, compute clusters can easily be spun up in the cloud, where they can perform some work and then be switched off once the CPU intensive task has been completed.  One of the great benefits of this migration from big iron to clusters of affordable commodity hardware is that the software libraries that were designed to help scientists predict the weather or find the correlation between diseases and genes are now available to use in line of business applications.

Desktop versus Applications Server versus Compute cluster

The main differences between the three main computer architectures (Desktop, Application Server and Compute Clusters) are no longer based on hardware (vector computers such as those from Cray are slowly being replaced with Intel/AMD x64 machines) but on usage scenarios. 

Computer Architecture – Usage
Desktop
  • General purpose computer
  • Multiple applications running concurrently
  • Optimised for GUI feedback and response
Application Server
  • Dedicated to run a particular program
  • Long uptime requirements
  • Optimised for network
Compute Cluster/Supercomputer
  • Batch processing – Cluster is dedicated to work on a problem
  • Typically low or no external input once problem starts
  • Single Program Multiple Data type problems
  • Often the collaboration between nodes is more complex than the problem being solved

Supercomputers are typically batch processing devices.  A university or government department that has spent millions of dollars on its cluster needs to ensure that the infrastructure is fully utilised throughout its lifetime (24x7x365).  This is typically achieved by scheduling jobs weeks in advance and making sure that there are no periods where the cluster is idle.  Often, when work is not completed within its allocated scheduled period, the processes are automatically killed so that the next job can execute.  Since the cluster is designed to run different pieces of software and there might be hundreds of servers involved, the concept of a “Single System Image” becomes important, where software that is “deployed” to the cluster is seamlessly deployed onto all of the nodes within the cluster.

Data Sharing

There are two basic methods for sharing information between nodes within a cluster:

  • Shared Memory Model – Combine the memory of all of the machines in the cluster into a single logical unit so that processes on any of the machines are able to access the shared data on any of the other machines.
  • Message Passing Model – Use messaging to pass data between nodes.

Since the shared memory model is similar to how software runs on a local machine, it is very familiar to develop against.  Unfortunately, while accessing the data is transparent, the actual time to load data off the network is far slower than reading from local memory.  Further, data contention can arise with different servers in the cluster trying to update the same logical memory location.  For this reason Message Passing has gained prominence and the Message Passing Interface protocol has become a standard across the supercomputing industry.

MPI.NET

MPI.NET allows the .NET developer to hook into Microsoft’s MPI implementation and thus run code on a cluster.

MPI.NET Installation steps

  1. Install the Microsoft implementation of MPI - download
  2. Install the MPI.NET SDK – download
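
As a quick smoke test once both packages are installed, a minimal sketch (not from the original walkthrough, but using the same Sys.argv, MPI.Environment, Communicator.world, Rank and Size members as the steps below) simply reports each node’s rank:

#light

open System

// Minimal MPI.NET smoke test: every process reports its rank.
// Run with e.g. mpiexec.exe -n 4 Hello.exe
let args = Sys.argv

using (new MPI.Environment(ref args)) (fun _ ->
    let comm = MPI.Communicator.world
    Console.WriteLine("Hello from node " + comm.Rank.ToString() + " of " + comm.Size.ToString())
)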

Converting the Monte Carlo F# PI example to use MPI.NET

The following is the original F# code to calculate PI:

let inUnitCircle (r:Random) =
    let y = 0.5 - r.NextDouble()
    let x = 0.5 - r.NextDouble()

    match y * y + x * x <= 0.25 with
    | true -> 1.0
    | _ -> 0.0

let calculate_pi_using_monte_carlo (numIterations:int) (r:Random) =
    let numInCircle = List.sum_by (fun _ -> inUnitCircle r) [1 .. numIterations]
    4.0 * numInCircle / (float)numIterations

To calculate PI after 1000 samples have been taken, call calculate_pi_using_monte_carlo 1000 r.

To speed up the calculation process we want the F# code to run on our cluster.

Steps:

  1. Reference the MPI assembly from the GAC
  2. Pass the command line arguments to MPI
       let args = Sys.argv

       using (new MPI.Environment(ref args)) (fun environment ->
           // Code here
       )
  3. When MPI starts it gives each node an ID – Usually Node 0 is used for communication and the other nodes are used for processing.
  4. We want to insert the call to calculate PI at the // Code here placeholder; however, once each node has completed the calculation the result needs to be passed back to Node 0 so that it can be combined with the other results at the same location
  5. Determine the Node ID
       using (new MPI.Environment(ref args)) (fun environment ->
           let comm = Communicator.world
           let nodeId = comm.Rank
       )
  6. Use the Node ID to seed the Random instance so that each node generates a different sequence of samples
  7. Use the Reduce method to retrieve the results of each different cluster instance and return that value back to the client
       using (new MPI.Environment(ref args)) (fun environment ->
           let comm = Communicator.world
           let seed = DateTime.Now.AddDays((float)comm.Rank)
           let r = new Random((int)seed.Ticks)
           let pi:double = comm.Reduce(calculate_pi_using_monte_carlo 1000 r, Operation<double>.Add, 0) / (double)comm.Size
           if (comm.Rank = 0) then
               Console.WriteLine("Pi " + pi.ToString())
       )
  8. Execute the code: "c:\Program Files\Microsoft HPC Pack 2008 SDK\Bin\mpiexec.exe" -n 15 PebbleSteps.MPI.exe
  9. MPI then spins up 15 processes, runs the F# application within each process and provides the environment settings so that MPI.NET can determine what each node’s ID is.
  10. The program will finally display the calculated value of PI (a combined sketch of the full program follows this list).
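
Putting the steps together, the whole program looks roughly like this.  It is a sketch assembled from the snippets above, with calculate_pi_using_monte_carlo taken from the earlier Monte Carlo post, so treat it as illustrative rather than the exact project source:

#light

open System
open MPI

// Monte Carlo estimate of PI (from the earlier post)
let inUnitCircle (r:Random) =
    let y = 0.5 - r.NextDouble()
    let x = 0.5 - r.NextDouble()
    match y * y + x * x <= 0.25 with
    | true -> 1.0
    | _ -> 0.0

let calculate_pi_using_monte_carlo (numIterations:int) (r:Random) =
    let numInCircle = List.sum_by (fun _ -> inUnitCircle r) [1 .. numIterations]
    4.0 * numInCircle / (float)numIterations

let args = Sys.argv

using (new MPI.Environment(ref args)) (fun environment ->
    let comm = Communicator.world
    // Seed each node from its rank so the nodes sample different points
    let seed = DateTime.Now.AddDays((float)comm.Rank)
    let r = new Random((int)seed.Ticks)
    // Reduce sums the per-node estimates onto node 0; dividing by Size averages them
    let pi:double = comm.Reduce(calculate_pi_using_monte_carlo 1000 r, Operation<double>.Add, 0) / (double)comm.Size
    if (comm.Rank = 0) then
        Console.WriteLine("Pi " + pi.ToString())
)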

Behavior Driven Development with NBehave

26. April 2009 23:06 by David

Last Wednesday I presented a talk on BDD for Skillsmatter.  The talk was a great opportunity to drill into my own assumptions about using TDD (Test Driven Development) and TFD (Test First Development). 

 

What is BDD

BDD is a fairly controversial subject as it promotes a Top Down approach to agile software development as opposed to the bottom up approach described by Test First Development.  For example, TFD by its very nature advocates a very tight iterative approach of defining a method’s output, implementing code that passes the test and finally refactoring to clean up the developed code.


This approach is fantastic for developing well understood components.  Unfortunately it can become very difficult to build complex systems using TFD.  Often team members will write tests AFTER the core business logic has been written (Green, Red, Green or just Green).  Sometimes the problem space is not well understood and by using TFD the development effort is pushed towards writing code rather than designing the system.  The debate also tends to centre around the number of tests that have been written as opposed to how useful the developed software actually is.  Hence TFD can become biased towards low level coding practice as opposed to system wide design and elegance.

BDD tries to alleviate the situation by swapping the coding process to a classic Top Down or Step Wise Refinement approach.  Instead of trying to write production code from the start, BDD advocates iteratively defining the API and then, once the API is stable, writing the actual code that implements it.  The output of TDD is a suite of tests.  The output of BDD is a specification for a system’s API and a set of contracts that validate that the API has been implemented.


BDD can be summarised as:

  • A formalised template for expressing User Stories
  • A methodology that promotes “security” to being a first-class citizen in the analysis process
  • Promoting Domain Driven Design as one of the best methods for building enterprise applications
  • Advocating specification-based tests that validate the API, as opposed to classic TFD

Key to the success of the BDD approach is the formalisation of the humble User Story.

 

As a [User/Role]
I want [Behaviour]
so that [I receive benefit]

 

NBehave in turn takes the BDD template and converts it into a Fluent API

var story = new Story("Create portfolio");
story.AsA("Operations team member")
    .IWant("To create a new portfolio on behalf of a portfolio manager")
    .SoThat("the portfolio manager can configure the portfolio and the front office can trade");

For each story the developer and analyst create a suite of scenarios that explore both common and edge cases.

 

Given [some initial context]
When [an event occurs]
Then [ensure some outcomes]

 

Once in NBehave this becomes

story.WithScenario("portfolio does not exist")
    .Given("portfolio name is $name", "Aggressive Fund")
    .When("portfolio does not exist in the database")
    .Then("new portfolio should be created");

At that point mocking allows the developer to iteratively define how the API should respond to external input.  Mocks/Stubs are used to allow the developer to experiment with the API.  Core components are mocked as opposed to using mocking to emulate dependencies.


In the following example NBehave is used to aid in the design of the IPortfolioRepository and IPortfolioService interfaces.  No production code is executed in the test.  Instead the test is used to allow the developer to decide:

  • What services should core systems provide
  • What properties/methods should core domain entities have
string portfolioName = "";
Portfolio p = null;
var portfolioRepository = MockRepository.GenerateStub<IPortfolioRepository>();
var portfolioService = MockRepository.GenerateStub<IPortfolioService>();

s.WithScenario("portfolio already exists")
    .Given("portfolio name is $name", "Aggressive Fund", n => {
        portfolioName = n;
        p = new Portfolio { Name = portfolioName };
        portfolioRepository
            .Stub(x => x.FindPortfolioByName(portfolioName)).Return(p);
        portfolioService
            .Stub(x => x.CreatePortfolio(p)).Throw(new ItemExistsException());
    })

    .When("portfolio already exists in database",
        () => Assert.IsNotNull(
            portfolioRepository.FindPortfolioByName(portfolioName))
    )

    .Then("new portfolio create should fail",
        () => Assert.Throws<ItemExistsException>(() => portfolioService.CreatePortfolio(p))
    );

 

Steps to generate an API using NBehave

  1. Go through the User story and find all the domain objects
  2. Create a class for each domain object but don’t add state
  3. Go through the User story and find all the services that are required (Repositories etc.)
  4. Create an interface for each service but don’t add any methods
  5. Slowly implement the story
  6. Add domain object properties as required
  7. Add method signatures as required
  8. Use Stubs to explore Inputs/Outputs to the methods


Skillsmatter NBehave talk

29. March 2009 11:51 by David

I will be doing a talk on Behavior Driven Development using NBehave at Skills Matter on April 15.  As yet the registration page doesn’t look as though it’s been put up.  However Gojko, as always, has put a link on UK .NET.

http://ukdotnet.ning.com/events/inthebrain-opensource-net-1

I will post an outline of the talk next week when I have fleshed it out further…

Calculating PI using the Monte Carlo method using F#

29. March 2009 11:09 by David

Earlier in my career I spent a number of years designing electronic circuits using circuit simulation software.  One of the most important steps within the design process is to determine how robust the circuit will be to different scenarios such as temperature changes or the spread in electrical characteristics of physical components (ie 100 ohms +/- 5%).  One useful design technique is Monte Carlo analysis, where all of the simulation parameters such as component values and temperature are randomly changed by a small value and the circuit simulation is run.  The process repeats thousands of times.  The designer is able to use the results to hone in on any areas within the design that are unstable and either change the design or use more expensive components that have a tighter tolerance.

Monte Carlo is used:

  • In finance to price complex products such as options
  • In mathematics to perform numerical analysis

Monte Carlo can be used to calculate PI

We can calculate PI by knowing how to calculate the area of a square and of a circle, as follows:

  1. To calculate the area of a Square {S = Width*Width}
  2. To calculate the area of a Circle {C = PI*Radius*Radius}
  3. If we place a circle inside a square with Width = 1, then Diameter = 1, Radius = 0.5 
  4. Therefore S = 1
  5. Therefore C = PI * 0.5 * 0.5 = PI * 0.25 = PI/4
  6. The ratio of the area of the Circle to the area of the Square is Ratio = C/S = (PI/4)/1 = PI/4
  7. Hence PI = Ratio * 4

So now we can find PI by knowing there is a relationship between the area of the circle and a square.  Using Monte Carlo Analysis we randomly pick points within the square and check if they also fall into the circle.

  1. Randomly select a Point in the Square P = (X, Y), where X and Y are measured from the centre of the square and each lies between –0.5 and 0.5
  2. Using the Pythagorean theorem, P is in the circle if X*X + Y*Y <= R*R, i.e. X*X + Y*Y <= 0.25
  3. Ratio = Points in Circle/All Points

Here is the code

let r = new Random()

let inUnitCircle _ =
    let y = 0.5 - r.NextDouble()
    let x = 0.5 - r.NextDouble()
    match y * y + x * x <= 0.25 with
    | true -> 1
    | _ -> 0

let rec calculate_pi_using_monte_carlo (numIterations:int) =
    let numInCircle = List.sum_by inUnitCircle [1 .. numIterations]
    4.0 * (double)numInCircle / (double)numIterations

The unit test utilises the very useful AreApproximatelyEqual method from MbUnit.  As the number of iterations (random samples) increases, the calculated PI moves closer and closer to that of Math.PI.

[<Test>]
let row() =
    let rowVal = [(100, 0.2); (1000, 0.07); (3000, 0.03)]
    List.iter (fun (i, d) -> Assert.AreApproximatelyEqual(Math.PI, calculate_pi_using_monte_carlo i, d)) rowVal

Unfortunately I was unable to use the [<Row([|(100, 0.2); (1000,0.07); (3000,0.03)|])>] syntax to utilise the Row attribute within F#, since the compiler seems to be unable to create arrays of type object.  I would be very interested to hear if anyone has been able to get it to work.

Introduction to PostSharp and AOP slides and code

25. January 2009 12:30 by David

On Thursday I participated in the Open Source .NET Exchange held by Skills Matter

It was a great night.  The venue was full and I and the other speakers received lots of positive feedback (although my microphone was not on correctly so I ended up shouting…).  As you can imagine, during a 15 minute speech, you can only scratch the surface of a topic.  However, with the talks being so diverse, I think most people, including myself, learnt a lot.

I especially liked Mike’s talk on the repository pattern.  Comparing and contrasting different implementations of a complex pattern/technique is a powerful method to understand the inevitable hidden intricacies.  His advice about being wary of using a single generic repository interface or base class for all repositories was very timely.  In some code I recently reviewed, the developer was using NHibernate to populate a read only report.  Unfortunately within the project wide repository there were save and delete methods available.  Clearly this throws the read only nature of the data out the window.  Following Mike’s advice our team will be moving away from trying to be overly clever with the design of our repositories and move instead to ones that clearly identify the base aggregates and clearly expose the expected behavior of the root object at run time.

Interestingly I spent more time discussing the features of ActiveMQ with David and a group of people interested in using NMS than PostSharp.  David and I worked on a project where we used the Publish-Subscribe capabilities of ActiveMQ to push data to clients that were running Flash/ActionScript.  Using Push technology, as opposed to pulling from a database, reduces load and increases scalability.  We are using the same technique at a large finance company to push market prices directly to client machines running WPF.  In the .NET community it is common for MSMQ to be used as the messaging infrastructure.  This choice heavily limits the design. 

MSMQ:

  • does not provide Publish-Subscribe capability – Although it can be emulated with NServiceBus/Mass Transit
  • does not allow you to tag messages with metadata which can then be used for intelligent filtering/routing
  • has a 4 MB limit on message size

On the PostSharp front there was a lot of interest in using PostSharp to “break the build”, since it’s such a quick win for teams that are using NHibernate.  I will try to blog about this technique in a little bit more detail soon.

 

Slides and Source

As promised here are the slides and the source code that were used during the presentation. 

To use the code download and install PostSharp using the default settings.  This will configure MS Build to automatically invoke PostSharp whenever the PostSharp.Laos.dll assembly is referenced.

Test Driven Development - Tools and Techniques

18. December 2008 00:26 by David

Earlier tonight Chris and I presented our talk on Test Driven Development.  I have to say I was rather impressed at the turnout since it’s only a week away from Christmas and we still managed around 40 people.  As promised here are the slides from the presentation and all of the code/files that were used during the talk. 

Test_Infected_Presentation.ppt (544.50 kb) 

SkillsMatter.TestInfected.rar (6.15 mb) 

Running Fitnesse

  1. Uncompress the SkillsMatter.TestInfected.rar file into C:\SkillsMatter\TDDTalk
  2. Run C:\SkillsMatter\TDDTalk\TDD\fitnesse\run.bat - This will start the Fitnesse web server on port 8080
  3. load http://localhost:8080/MidlandsFoodsFirstCut
  4. Press the Suite link and the tests should execute and all fail - This demonstrates the deliverable from a Business Analyst
  5. Load http://localhost:8080/MidlandsFoods 
  6. Press the Suite link and the tests should execute and most will pass
Debugging Fitnesse
  1. Uncompress the SkillsMatter.TestInfected.rar file into C:\SkillsMatter\TDDTalk
  2. Run C:\SkillsMatter\TDDTalk\TDD\fitnesse\run.bat - This will start the Fitnesse web server on port 8080
  3. Load C:\SkillsMatter\TDDTalk\TDD\JackPlaysSnap\JackPlaysSnap.sln
  4. Set the Fitnesse project as the StartUp Project
  5. Under project properties:
    1. Set Start external program to C:\SkillsMatter\TDDTalk\TDD\fitnesse\dotnet\TestRunner.exe
    2. Set the Command line arguments to localhost 8080 MidlandsFoods.GiantWinsJackNeverSnaps
    3. Set the Working directory to C:\SkillsMatter\TDDTalk\TDD\fitnesse\
  6. Set a breakpoint in JackPlaysSnapFitness.cs
  7. Run the project under debug mode

I'm a WISC developer

14. December 2008 16:59 by David

I read a blog entry on WISC today.  It’s the .NET developer’s equivalent of LAMP.

LAMP

 

  •  Linux
  • Apache
  • MySQL
  • Perl/PHP etc...

 

WISC

 

  •  Windows
  • IIS
  • SQL Server
  • C#

 

I'm not sure how well known this acronym will become, but it is kinda cool...

http://www.25hoursaday.com/weblog/2008/06/06/VelocityADistributedInMemoryCacheFromMicrosoft.aspx 

PostSharp Presentation 22 January 09

14. December 2008 16:06 by David

I've been roped in to do another presentation at Skills Matter.  This time, as part of the Open Source .NET Exchange program, there will be a number of interesting talks on the night.

The schedule is as follows... 

 

  • Welcome to the OpenSource.NET Exchange (Gojko Adzic)
  • jQuery (Dylan Beattie)
  • Aspect Oriented Programming with PostSharp (David)
  • Fluent NHibernate (Sebastien Lambla)
  • ActiveMQ and NMS (David de Florinier)
  • Implementing the Repository Pattern (Mike Hadlow)
  • Panel Discussion: Spring.NET and Castle Project (Russ Miles & Gojko Adzic)
I will be talking about using PostSharp to simplify the development of plumbing code in Silverlight projects.