Castle Windsor IoC and AOP with ASP.NET MVC 4


It is no secret that I am a huge supporter of IoC (Inversion of Control) and AOP (Aspect-Oriented Programming), as evidenced by several earlier posts focused on building enterprise applications. My first IoC experience was a Java portal project, which implemented Spring for many enterprise services. My recent projects have been .NET, including several Spring.NET and Castle implementations. When evaluating IoC products, Ninject and more recently Microsoft Unity have also been candidates. IoC is gaining Microsoft community acceptance, especially with the release of ASP.NET MVC 4 and its improved support/integration. See my previous articles for more information on design patterns including Dependency Injection (DI), the Single Responsibility Principle (SRP) and Separation of Concerns (SoC).

In this post, I will provide an overview of the MVC 4 Castle Windsor IoC and Castle Core Dynamic Proxy features. Gasp…after several articles and posts with Spring.NET? This is true, but every project should evaluate several IoC (also referred to as Dependency Injection) containers and select the one that fits.

I would recommend NuGet for packages, so you can simplify the management of third-party libraries and tools.
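For example, the Castle packages can be added from the NuGet Package Manager Console with a command along these lines (Castle.Windsor will pull in Castle.Core as a dependency):

```
PM> Install-Package Castle.Windsor
```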

Once you create a new ASP.NET MVC 4 solution and add the Castle packages to your solution, we can start building the application framework and scaffolding. Since this is a quick-start for Castle IoC and AOP features, we’ll build the essential components for a working MVC 4 solution.

First, create a Framework folder for the IoC and AOP classes. I would recommend Framework or Core, which are descriptive names. I created a Windsor sub-folder, which indicates the items will be supporting IoC and AOP. Beneath Windsor, add Installers and Interceptors folders at the same level, so the following is an example of the project structure.

Framework Folder Structure

Next, we will create the boilerplate WindsorControllerFactory to handle the IoC plumbing. The following is the code, which you can add to the Framework.Windsor folder.

using System;
using System.Web;
using System.Web.Routing;
using Castle.MicroKernel;
using System.Web.Mvc;

namespace Mvc4Castle.Framework.Windsor
{
    /// <summary>
    /// Castle Windsor MVC4 Controller Factory Implementation for IoC
    /// </summary>
    public class WindsorControllerFactory : DefaultControllerFactory
    {
        private readonly IKernel _kernel;

        public WindsorControllerFactory(IKernel kernel)
        {
            _kernel = kernel;
        }

        public override void ReleaseController(IController controller)
        {
            _kernel.ReleaseComponent(controller);
        }

        protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
        {
            if (controllerType == null)
            {
                throw new HttpException(404, string.Format("The Windsor Controller at path '{0}' could not be found.", requestContext.HttpContext.Request.Path));
            }
            return (IController)_kernel.Resolve(controllerType);
        }
    }
}

Next, we will create IWindsorInstaller implementations. The installers will register components or objects with the IoC container. Alternatively, you can choose XML-based configuration files to define the components/objects, instructing the container on how to instantiate them. In our example, we will use the MVC controllers included in the starter project (e.g. HomeController).

using Castle.MicroKernel.Registration;
using Castle.Windsor;
using Castle.MicroKernel.SubSystems.Configuration;
using System.Web.Mvc;
using Mvc4Castle.Controllers;

namespace Mvc4Castle.Framework.Windsor.Installers
{
    /// <summary>
    /// Castle Windsor installer responsible for registering classes managed by
    /// the IoC container.
    /// 
    /// This implementation registers all MVC Controllers (IController).
    /// </summary>
    public class ControllerInstaller : IWindsorInstaller
    {
        public void Install(IWindsorContainer container, IConfigurationStore store)
        {
            container.Register(Classes.FromThisAssembly()
                                .BasedOn<IController>()
                                .LifestyleTransient()
                                .ConfigureFor<HomeController>(c => c.DependsOn(Dependency.OnComponent("service", "MyServiceResource"))));
        }
    }
}

In this example, we are registering objects/components based on IController located in the current assembly. We are also configuring the HomeController so the service constructor argument is injected with a named component. We'll explore additional options for registering objects/components in the container.

We also need to add the container to our start-up processes in the global.asax. The Application_Start and Application_End require some logic to handle this task.

    public class MvcApplication : System.Web.HttpApplication
    {
        private static IWindsorContainer _container;

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();

            WebApiConfig.Register(GlobalConfiguration.Configuration);
            FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
            RouteConfig.RegisterRoutes(RouteTable.Routes);
            BundleConfig.RegisterBundles(BundleTable.Bundles);
            InjectContainer();
        }

        /// <summary>
        /// Create the Castle Windsor IoC container and register the
        /// MVC4 controller factory
        /// </summary>
        private static void InjectContainer()
        {
            _container = new WindsorContainer().Install(FromAssembly.This());

            var controllerFactory = new WindsorControllerFactory(_container.Kernel);
            ControllerBuilder.Current.SetControllerFactory(controllerFactory);
        }

        /// <summary>
        /// Destroy the Castle Windsor IoC container
        /// </summary>
        protected void Application_End()
        {
            _container.Dispose();
        }
    }

You can add several installer implementations using the fluent API or XML-based configuration files, so you can organize your object/component definitions for the IoC container. Since we are following design patterns, we will introduce Dependency Injection/DIP practices to inject the object/component dependency via the controller constructor. The service will represent the business logic, which is responsible for orchestrating the calls to the data tier or other service objects to process the client request and return an appropriate result.
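For reference, an XML-based registration roughly equivalent to the fluent examples in this post might look like the following sketch (the assembly-qualified names assume a Mvc4Castle assembly and are illustrative only):

```xml
<castle>
  <components>
    <component id="MyServiceValue"
               service="Mvc4Castle.Framework.Services.IService, Mvc4Castle"
               type="Mvc4Castle.Framework.Services.HomeService, Mvc4Castle"
               lifestyle="transient">
      <parameters>
        <Title>Injected By Dependency Value</Title>
      </parameters>
    </component>
  </components>
</castle>
```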

        private IService _service;

        public HomeController(IService service)
        {
            _service = service;
        }

So…let’s take a look at a few container registration options. The following will register all objects/components with a type name ending in “Service” (i.e. a wildcard match on “Service”) from the current assembly. The life-cycle can also be configured; in these examples, “transient” indicates a new instance is created for each resolution.

            container.Register(Classes.FromThisAssembly()
                  .Where(type => type.Name.EndsWith("Service"))
                  .WithServiceDefaultInterfaces()
                  .Configure(c => c.LifestyleTransient().Interceptors<ServiceTitleInterceptor>()));

We will cover AOP and dynamic proxies (the Interceptors call above) shortly. The following example defines the dependency using a named value, so the “Title” property will be set to the static value “Injected By Dependency Value”.

            container.Register(Component.For<IService>()
                .ImplementedBy<HomeService>()
                .Named("MyServiceValue")
                .DependsOn(Dependency.OnValue("Title", "Injected By Dependency Value"))
                .LifestyleTransient());

The next example injects the “Title” property value from the web.config appSettings “Title” key.

            container.Register(Component.For<IService>()
                .ImplementedBy<HomeService>()
                .Named("MyServiceConfig")
                .DependsOn(Dependency.OnAppSettingsValue("Title", "Title"))
                .LifestyleTransient());

The following illustrates injecting the “Title” property value from a resource file. Together, these provide several source options for dependency injection, which can also include a named component (see Named).

            container.Register(Component.For<IService>()
                .ImplementedBy<HomeService>()
                .Named("MyServiceResource")
                .DependsOn(Dependency.OnResource<App_LocalResources.Resource1>("Title", "Title"))
                .LifestyleTransient());

The previous examples introduce the Castle Windsor IoC basics as they apply to MVC4. The last topic is AOP, which we encountered with the interceptor in a previous object/component container registration. Spring.NET AOP support was introduced in a previous post. Castle Core offers a lighter-weight alternative, based on implementing the Castle.DynamicProxy.IInterceptor interface.

    public class ServiceTitleInterceptor : IInterceptor
    {
        public void Intercept(IInvocation invocation)
        {
            if (invocation.Method.Name.Equals("get_Title"))
                invocation.ReturnValue = "You have been hijacked at." + DateTime.Now;
            else
                invocation.Proceed();
        }
    }

In this interceptor implementation, the Intercept method checks for the method name “get_Title”, which is the Title property getter. If it matches, ReturnValue is set to “You have been hijacked at.” plus a date-stamp. If the method name is not “get_Title”, control is returned to the intercepted object and processing continues. You can implement an interceptor for logging, caching, transaction management, error handling and other common services without adding code to every class and method.
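As a sketch of that idea, a cross-cutting logging interceptor could wrap every intercepted call; the Debug output target here is just an illustration, not a prescribed logging framework:

```csharp
using System;
using System.Diagnostics;
using Castle.DynamicProxy;

/// <summary>
/// Logs entry, exit and exceptions for every intercepted method call.
/// </summary>
public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Debug.WriteLine("Entering " + invocation.Method.Name);
        try
        {
            // pass control to the intercepted object
            invocation.Proceed();
            Debug.WriteLine("Exiting " + invocation.Method.Name);
        }
        catch (Exception ex)
        {
            Debug.WriteLine("Exception in " + invocation.Method.Name + ": " + ex.Message);
            throw;
        }
    }
}
```

It would be registered exactly like ServiceTitleInterceptor, via the fluent Interceptors call or the InterceptorAttribute.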

The following is an installer snippet for registering an object/component using the ServiceTitleInterceptor interceptor, so every call will be intercepted and the Intercept method executed.

.Interceptors<ServiceTitleInterceptor>()

The above will apply the ServiceTitleInterceptor to the object/component. You can also add the InterceptorAttribute ( [Interceptor(typeof(ServiceTitleInterceptor))] ) to the target class, which is an alternative to the fluent API approach above.

using Castle.Core;
using Mvc4Castle.Framework.Windsor.Interceptors;

namespace Mvc4Castle.Framework.Services
{
    [Interceptor(typeof(ServiceTitleInterceptor))]
    public class HomeService : IService
    {
        public string Title { get; set; }
    }
}

I hope this quick-start provides another IoC and AOP option for your MVC projects. Since I was evaluating several IoC libraries for a current project, I thought a summary could be helpful.

Related Posts:

Spring.NET IoC

Spring.NET and MVC3

Spring.NET AOP

Automate Merge With TFS Build


In the previous TFS Build Process Template and Versioning post, we discussed customizing your Team Foundation Server (TFS) build workflow to dynamically inject the build number into the assembly version. This post will present a process to automate the merge process within a TFS build template. This feature would benefit organizations maintaining a development branch, which requires merging into the main or trunk after a successful release.

The following custom workflow sequence can be included as the final activity of the development build, to be conditionally executed based on successful build and test results. If you are not performing manual merge processes in between builds, then you should not encounter conflicts. I would recommend creating a copy of your current build template and appending a version number to the name. For example: MyBuildTemplate.2.xaml. Once you have tested the new version, you can replace the current MyBuildTemplate.1.xaml with MyBuildTemplate.2.xaml.

The following is the high-level Merge sequence illustrated in the workflow.

Merge Sequence

The first step is to add the arguments, which will be the input that drives the merge process. These parameters should be exposed in the build definition and the Queue Build prompt.

Merge Arguments

The SourceMergeBranchPath and TargetMergeBranchPath contain the paths to the TFS locations, where the source changes are to be merged into the target. The MergeOptions argument will allow you to select the appropriate merge option to apply to the process, which will be discussed later in this article.

Next, we will add the logic into the Merge sequence, starting with a conditional statement verifying the two path argument values are valid. The MergeOptions argument has a default, so no validation is required.

Merge Validate Required Arguments

If the required arguments are validated then we begin the merge using another sequence to wrap the logic. The Else path will add a message to the build and exit the merge sequence.

The expanded Merge Begin sequence appears in the image below.

Merge Begin Sequence

The key step is the Run Merge, which assigns the GetStatus value. The merge is executed as a method of the Workspace object. The following is the method call to perform the merge.

Workspace.Merge(SourceMergeBranchPath, TargetMergeBranchPath, Nothing, Nothing, LockLevel.None, RecursionType.Full, MergeOptions)

The first two parameters are the source and target merge branch paths, which are the two arguments we discussed previously. The final parameter is the MergeOptions argument, which we can set in the build definition or Queue Build prompt. The result is assigned to GetStatus, which we will analyze in the next steps.

Merge Conflicts

In the above condition, we check if conflicts or errors were encountered by checking the GetStatus object. If the counts are zero then we can continue to follow the success merge path. If a failure or conflict is encountered then we report the issues and abort the merge process. This path will require additional review and decision.

The GetPendingChanges is also an assignment, which represents the pending changes as a result of the merge. The Workspace.GetPendingChanges will return a collection of the pending changes and the next condition will process any changes.

Merge Process Pending Changes

The Process Pending Changes sequence will include a ForEach to process each pending change. In this process, we are simply recording the change for the build log.

Merge Report Pending Change

When you select a Visual Studio merge, you must perform a check-in after you resolve all conflicts within your workspace. So…the next step is to checkin or commit the changes. The Checkin Pending Changes sequence is responsible for this task.

Merge Checkin Pending Changes

The Run Checkin is also an assignment, where again we call a Workspace method to perform the checkin of the workspace pending changes. The following is the Expression Editor with the CheckinResult assignment.

Merge Checkin Pending Changes

The PendingChanges was a previous assignment, and the second parameter is the comment applied to the check-in. In this example, we assign a standard check-in comment for the changeset.

If we switch to the Else path of the Pending Changes condition then we would report no pending changes to checkin. The merge is still successful, but no changes were processed.
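For context, the merge-then-check-in sequence the workflow performs looks roughly like this against the TFS version control client API (a sketch; variable names are illustrative):

```csharp
// Sketch: merge source into target, then check in any resulting pending changes.
GetStatus status = workspace.Merge(sourceMergeBranchPath, targetMergeBranchPath,
    null, null, LockLevel.None, RecursionType.Full, MergeOptions.None);

if (status.NumConflicts == 0 && status.NumFailures == 0)
{
    PendingChange[] pendingChanges = workspace.GetPendingChanges();
    if (pendingChanges.Length > 0)
        workspace.CheckIn(pendingChanges, "Automated merge from build");
}
```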

So…we also need to complete the Else path of the Merge Begin sequence, where we check for failures and conflicts. We just completed the successful merge path, where no failures or conflicts were reported. The following is the Display Merge Results sequence, which reports the failures and/or conflicts to be reviewed.

Merge Report Conflict

The conflict and failure reporting processes are essentially the same: the ForEach will process each item of the collection and a WriteDeploymentInformation activity can log an entry. The final merge status or result is a failure, so you can also set the build status value accordingly.

That completes the review of the Merge process, which you can easily incorporate into your TFS build template to perform an automated merge. The final topic is the MergeOptions parameter, which we introduced earlier as an argument. This is also a parameter of the Workspace.Merge method, which instructs the process to follow the appropriate conflict resolution defined by the build definition or manual build. The following are the options and a brief explanation, so you can assign the appropriate merge option for your process.

  • None – no special options
  • AlwaysAcceptMine – discard any changes from the source, resolving conflicts with the AcceptYours resolution
  • ForceMerge – same as tf.exe force option
  • Baseless – source and target items have no branch relationship
  • NoMerge – same as tf.exe preview option

You can also update the Process Parameter Metadata, so it includes helpful information for the user. This appears below for the 3 merge arguments we setup earlier.

Merge Process Parameter Metadata

I hope this article provides an option for automating the merge process within a TFS build. This implementation requires no external or third-party assemblies, so it should be easy to add to your current build templates. Since this is an independent build process, I would recommend creating a merge workflow template and calling the merge as a child of your main build template. The merge should be conditional on the success of your build and tests, so it only executes for a successful build result.

Job Scheduling


It is likely you will be challenged with a project to build one or more scheduled jobs, which are services that run in the background to perform long-running activities or tasks without user interaction. In this post, we will introduce an open source job scheduling system – Quartz.NET. It has a clean design that is straightforward for a developer to implement, and it provides many powerful ready-to-use features. You can visit http://quartznet.sourceforge.net/ to review the documentation and download.

I would suggest reading the tutorial and documentation, so you are familiar with all features. This article will focus on the hosting options with an introduction to the core features. The following is an overview of the key Quartz.NET concepts.

  • Scheduler – create an instance of the scheduler
  • Job – an IJob implementation containing the Execute method invoked by the scheduler
  • Trigger – defines the firing of a job, which is the scheduler component

This is a design that follows separation of concerns, since the job is not coupled to the trigger/schedule. This provides the flexibility to configure a job to participate in one or more schedules.

So…let’s take a look at a very simple job implementation.

    public class MyJob : IJob
    {
        public void Execute(IJobExecutionContext context)
        {
            Console.WriteLine("MyJob is executing...");
        }
    }

The context parameter contains the properties and JobDataMap, which enables you to pass information to the job. The following is an example.

// create the job info
JobDetail jobDetail = new JobDetail("MyJob", null, typeof(MyJob));

// set the job data map collection
jobDetail.JobDataMap["message"] = "Testing";

The following is the updated MyJob Execute method extracting the JobDataMap value.

        public void Execute(IJobExecutionContext context)
        {
            JobDataMap dataMap = context.JobDetail.JobDataMap;
            string message = dataMap.GetString("message");
            Console.WriteLine(message);
        }

The next step is creating an instance of the scheduler, so we can execute the jobs. You can create an instance of the scheduler programmatically.

// create a scheduler factory
ISchedulerFactory factory = new StdSchedulerFactory();

// create scheduler and start
IScheduler scheduler = factory.GetScheduler();
scheduler.Start();

This start-up process can also be managed using a Windows service, which is our first option for hosting the Quartz Scheduler. It’s convenient that the system provides an out-of-the-box service, so you just install the available service. You will find the service or Quartz Server is located under the server folder with the appropriate .NET version. The following is a screenshot of the install command and process, so the Quartz Server is registered as a Windows Service.

Quartz Server Install

Quartz Server Install

Please remember to run the install as Administrator, and make sure the exe is unblocked; a blocked file will cause installation issues. If the install is successful, the Quartz Server should appear in your Services list. Just set the appropriate user and start the service. The service is set up to create event log entries, so you can check the status with the Event Viewer.

The final step is wiring the scheduler, job and triggers together. We have the scheduler running and the job defined, but now we need to set up a trigger. You have a choice of several triggers – including the simple, calendar and cron triggers. In this example, we will configure the job to fire at a specified time every day. We will set up the Quartz Server configuration file, which provides the ability to define jobs and triggers. You could also configure this programmatically, but we already have the Windows service hosting the scheduler. The following is the quartz_jobs.xml file, which provides the configuration for the service.

<?xml version="1.0" encoding="UTF-8"?>
<job-scheduling-data xmlns="http://quartznet.sourceforge.net/JobSchedulingData" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="2.0">

  <processing-directives>
    <overwrite-existing-data>true</overwrite-existing-data>
  </processing-directives>

  <schedule>
    <job>
        <name>MyJob</name>
        <group>MyGroup</group>
        <description>Job for Quartz Server</description>
        <job-type>Quartz.Jobs.MyJob, Quartz.Jobs</job-type>
        <durable>true</durable>
        <recover>false</recover>
    </job>

    <trigger>
        <cron>
            <name>CronTrigger</name>
            <group>CronTriggerGroup</group>
            <description>Trigger to fire first job at 12:45</description>
            <job-name>MyJob</job-name>
            <job-group>MyGroup</job-group>
            <misfire-instruction>SmartPolicy</misfire-instruction>
            <cron-expression>0 45 12 * * ?</cron-expression>
        </cron>
    </trigger>
  </schedule>
</job-scheduling-data>
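As an aside, the same job and trigger wiring can be done programmatically; the following is a sketch assuming the Quartz.NET 2.x fluent builder API is available:

```csharp
// build the job definition
IJobDetail job = JobBuilder.Create<MyJob>()
    .WithIdentity("MyJob", "MyGroup")
    .Build();

// build a cron trigger firing every day at 12:45pm
ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("CronTrigger", "CronTriggerGroup")
    .WithCronSchedule("0 45 12 * * ?")
    .Build();

scheduler.ScheduleJob(job, trigger);
```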

The wiring is much clearer with the above configuration file. The schedule section contains job and trigger definitions. The job definition contains several properties, but job-type defines the target IJob implementation. The job-name and job-group are references, so the trigger can identify the target job for execution. In this example, we are defining a cron trigger to execute the job at 12:45pm. The cron-expression contains the following space delimited values.

  • Seconds
  • Minutes
  • Hours
  • Day of Month
  • Month
  • Day of Week
  • Year (optional)

This is a very flexible trigger, so you can schedule jobs to execute based on a variety of values. For example: if you would like the job to run on weekdays then set the “Day of Week” value to “MON-FRI”. As you explore the options, you should find a trigger option that will satisfy your requirements.
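A few illustrative cron expressions, with the fields in the order listed above:

```csharp
// "0 45 12 * * ?"       - every day at 12:45pm (the trigger above)
// "0 45 12 ? * MON-FRI" - weekdays at 12:45pm
// "0 0/15 * * * ?"      - every 15 minutes
// "0 0 2 1 * ?"         - 2:00am on the first day of every month
```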

So…this is a quick overview of the core Quartz.NET features and hosting the scheduler as a Windows service. In the next post, I will provide the steps to host under IIS using the Spring.NET IoC container. This option offers a few advantages including simplified configuration and invoking non-Job service methods.

I hope this article provides an overview and quick start for establishing a .NET job scheduler, so you are not forced to write the plumbing code to perform the same tasks. 

TFS Build Process Template and Versioning


I am currently enjoying the wealth of new Visual Studio 2012 features. It was a few days before I really appreciated the new user experience, since productivity and access to common options are much improved. This is not the focus of this article, but I thought it was worth mentioning. This post is about sharing and saving time.

The TFS 2012 default XAML build template does not provide support for versioning, a feature that injects assembly version and build information. This information is critical for validating a deployment by checking the assemblies and collecting the unique version number. This value is a combination of the major version, minor version, build number and revision. You can view the version information by selecting the assembly file properties under the Details tab.

Assembly File Properties

The project AssemblyInfo file maintains the values for the version information. The AssemblyFileVersionAttribute contains a build number, so this information should be changed during every build. In this scenario, the AssemblyFileVersionAttribute should be associated with a specific TFS build. This can be accomplished by customizing the default build template to include a process to assign a unique version number linked to the build.
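In AssemblyInfo.cs, the attributes in question look like the following (the version values are illustrative):

```csharp
using System.Reflection;

// Major.Minor.Build.Revision; the build process injects the build number here.
[assembly: AssemblyVersion("2.1.0.0")]
[assembly: AssemblyFileVersion("2.1.167.1")]
```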

The first step is to create a copy of the default XAML build template and add it to source control. You can rename this file, since it will be a custom XAML template. Next, download the Community TFS Build Extensions from CodePlex, which contains a collection of code activities extending the core functionality. This includes the TfsVersion activity, a versioning task supporting the assignment and injection of version information into assembly files. You must add the Community TFS Build Extension assemblies to source control and reference the custom assemblies in the TFS Build Controller.

The following are the modifications required to the new XAML build template created earlier. It is easier to create a new TFSBuild project, add the assembly references and link (i.e. not add to the project) the XAML build template. If you do not set up a project, then editing the XAML is difficult because the activities will not appear in the Toolbox.

Add the Major, Minor and TFSVersionNumber arguments to the build template.

Build Template Arguments

Click the Metadata arguments selection and add the Major, Minor and TFSVersion parameters below, so the build definition will provide help and categorized information.

Build Template Metadata Major

Build Template Metadata Minor

Build Template Metadata TFSVersionNumber

The TFSVersionNumber is only available when queuing a build, so it will not appear in the build definition. This provides the option to override the generated version number when manually triggering a build. This will be handled with a conditional, so if the value is set then skip generating a version number. The Major and Minor parameters are assigned with the build definition or queuing a manual build. The three parameters will be grouped under the Versioning category, so they will be easier to manage.

Next, add the build template items to assign the version number.

Build Template SetVersionNumber

The new activities must appear within the current Update Build Number For Triggered Builds, so the build number/name is assigned the same value as the assembly information. If the TFSVersionNumber is not set, then the SetVersionNumber conditional executes GetTFSVersion to assign the value for the build. The GetTFSVersion properties appear below, which generate the version number based on the Major and Minor values. The unique build number is assigned as the third element of the version number, with the elements separated by a delimiter.

Build Template GetTfsVersion Properties

The Action property is set to GetVersion, so the generated version number is assigned to the TFSVersionNumber. It is assigned as the build number and file version in subsequent tasks.

Build Template GetTfsVersion Properties Version

The current Update Build Number is changed to assign the TFSVersionNumber, so the build number is the same value as the version number. I recommend appending the build definition name, so the following is an example.

  • MyBuildDefinition_2.1.167.01

The final step is the file versioning, where the version number is injected into the AssemblyInfo files before compiling. The following is the build template with the new items.

Build Template ApplyVersionNumber

The Apply FileVersionNumber appears after the current Get Workspace, which contains a condition to verify the TFSVersionNumber is assigned. The TFSFileVersioning sequence includes FileMatchingAssemblyInfo and SetTfsVersion. The FileMatchingAssemblyInfo properties appear below, which basically generates a list of AssemblyInfo files from the current workspace.

Build Template SetVersionNumber FindingMatchingFiles

The SetTfsFileVersioning properties appear in the next image, which accepts the AssemblyFiles collection and TFSVersionNumber.

Build Template SetVersionNumber Properties

The TfsVersion activity is again handling the work, but in this case injecting the previously generated version number into the AssemblyInfo files.

After almost committing to building custom activities to perform the same tasks, I stumbled onto the TFS Community Build Extensions and the very helpful TfsVersion activity. After a few more hours of testing the various options, I arrived at the above process. In my case, synchronizing the build number and AssemblyFileVersion was the objective. If you seek a slightly different process then I would look at the TFS Community Build Extensions and the TfsVersion activity options.

In the end, I hope this saves you a little time and research. If you find something helpful then please comment and share.

Building Enterprise Frameworks – Testing and Mock Objects


This is the third installment of the “Building Enterprise Frameworks” series, which is the evolving design of an enterprise framework. In the series introduction, we presented a problem faced by many enterprise software teams and delivered a plan. The previous blog entry introduced the preliminary data access layer and domain model, which are a collection of abstractions forming a unified framework or infrastructure. In this blog entry, we refactor the data access layer and build the supporting unit tests.

Before covering testing, we will refactor the Repository introduced in the previous installment. At this time, we will eliminate the RepositoryBase abstract class. After testing and coding additional NHibernateRepositoryBase features, this class provided no immediate value to the enterprise framework. The abstract class was moved to the Framework Repository folder, which was vacated by the previous RepositoryBase. We also refactored the Generic interfaces and classes, which better aligns with our technical design and objectives. The following is the revised Framework project structure.

The following is the IRepository interface, which defines the required Repository implementation methods. As discussed in the previous installment, these are the basic Create-Read-Update-Delete (CRUD) operations.

using System;
using System.Collections.Generic;
using Joemwalton.Framework.Domain;

namespace Joemwalton.Framework.Data.Repository
{
    public interface IRepository<TEntity, TId>
        where TEntity : IEntity<TId>
    {
        void Save(TEntity entity);
        void Remove(TEntity entity);
        TEntity FindById(TId id);
        List<TEntity> FindAll();
    }
}

Since we eliminated the RepositoryBase abstract class, NHibernateRepositoryBase will now provide the method implementations for our NHibernate-backed repositories. The next installment of the “Building Enterprise Frameworks” series will focus on NHibernate and IoC, so the details of that design – including session and transaction management – will not be discussed here. The base class includes the CRUD method implementations, so the concrete Repository implementations remain focused on providing domain-specific data services. Basically…eliminating redundant code!

using System;
using System.Collections.Generic;
using NHibernate;
using Joemwalton.Framework.Data.Repository;
using Joemwalton.Framework.Domain;

namespace Joemwalton.Framework.Data.NHibernate
{
    public abstract class NHibernateRepositoryBase<TEntity, TId>
        : IRepository<TEntity, TId>
        where TEntity : IEntity<TId>
    {
        private ISessionFactory _sessionFactory;

        /// <summary>
        /// NHibernate Session Factory
        /// </summary>
        public ISessionFactory SessionFactory
        {
            protected get { return _sessionFactory; }
            set { _sessionFactory = value; }
        }

        /// <summary>
        /// Get current active session
        /// </summary>
        protected ISession CurrentSession
        {
            get { return this.SessionFactory.GetCurrentSession(); }
        }

        public TEntity FindById(TId id)
        {
            return this.CurrentSession.Get<TEntity>(id);
        }

        public List<TEntity> FindAll()
        {
            ICriteria query = this.CurrentSession.CreateCriteria(typeof(TEntity));
            // Copy the results into a List<TEntity> rather than casting;
            // List<T>() returns an IList<TEntity> that is not guaranteed
            // to be a List<TEntity> at runtime.
            return new List<TEntity>(query.List<TEntity>());
        }

        public void Save(TEntity entity)
        {
            using (ITransaction transaction = this.CurrentSession.BeginTransaction())
            {
                this.CurrentSession.SaveOrUpdate(entity);
                transaction.Commit();
            }
        }

        public void Remove(TEntity entity)
        {
            using (ITransaction transaction = this.CurrentSession.BeginTransaction())
            {
                this.CurrentSession.Delete(entity);
                transaction.Commit();
            }
        }
    }
}

We are finished with the refactoring, so our attention shifts to building unit tests for our framework. As you noticed, we have no concrete classes – this is by design. So…how do we test interfaces and abstract classes? And why are we creating interfaces again?

The answer to the second question is loose coupling and the ability to test classes in isolation. This also promotes our core design principles and supporting design patterns, which we covered in the “How to design and build better software for tomorrow?” series.
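To make the loose-coupling point concrete, consider a small sketch of a service that receives its repository through constructor injection. Everything below – the CustomerService, Customer entity, and CustomerRepositoryMock – is a hypothetical example invented for illustration, not part of the framework itself:

```csharp
using System;
using System.Collections.Generic;

// A minimal repository contract, mirroring the shape of the framework's
// Generic IRepository so the example is self-contained.
public interface IRepository<TEntity, TId>
{
    TEntity FindById(TId id);
}

// A hypothetical domain entity.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The service depends only on the interface, so a unit test can supply
// a mock repository and exercise the service logic in isolation.
public class CustomerService
{
    private readonly IRepository<Customer, int> _repository;

    public CustomerService(IRepository<Customer, int> repository)
    {
        _repository = repository;
    }

    public string GetCustomerName(int id)
    {
        Customer customer = _repository.FindById(id);
        return customer == null ? "Unknown" : customer.Name;
    }
}

// An in-memory mock used only for testing; no database is required.
public class CustomerRepositoryMock : IRepository<Customer, int>
{
    public Customer FindById(int id)
    {
        return new Customer { Id = id, Name = "Test Customer" };
    }
}
```

Because CustomerService never references a concrete data access class, swapping the mock for an NHibernate-backed implementation requires no change to the service.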

How do we test interfaces and abstract classes? The best approach is creating mock objects that implement the interfaces and inherit the base classes. The mock objects mimic our concrete implementations, but with no real business logic. The following is the Framework Test project structure, which will contain our unit test and mock classes.

The following is the EntityMock class, which inherits the EntityBase and defines the Generic type as int. This will represent the Id property type.

using System;
using Joemwalton.Framework.Domain;

namespace Joemwalton.Framework.Test.Mocks
{
    public class EntityMock
        : EntityBase<int>
    {
        public EntityMock()
        {
            this.Id = 1;
            Validate();
        }

        protected override void Validate()
        {
            this.FailedValidations.Clear();
            if (this.Id == 1)
                FailedValidations.Add("Testing");
        }
    }
}

The mock object contains an implementation for the constructor and the Validate method, since the EntityBase already handles the Id property. The following is the RepositoryMock class, which focuses on the method implementations relevant to the framework.

using System;
using System.Collections.Generic;
using Joemwalton.Framework.Data.Repository;

namespace Joemwalton.Framework.Test.Mocks
{
    public class RepositoryMock
        : IRepository<EntityMock, int>
    {
        public void Save(EntityMock entity)
        {
            throw new NotImplementedException();
        }

        public void Remove(EntityMock entity)
        {
            throw new NotImplementedException();
        }

        public EntityMock FindById(int id)
        {
            return new EntityMock();
        }

        public List<EntityMock> FindAll()
        {
            return new List<EntityMock> { new EntityMock() };
        }
    }
}

The mock object implements the IRepository interface, whose contract defines the basic CRUD operations. The method implementations are not important, since we are not testing the ability of the Repository to retrieve or persist objects. Because IRepository is a Generic interface, the concrete entity and identifier types must be specified.
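For illustration, here is how a hypothetical concrete repository might close the Generic contract with a specific entity type. The Order entity, the in-memory backing store, and all names below are invented for this sketch (the interfaces mirror the framework's); a real implementation would instead inherit NHibernateRepositoryBase:

```csharp
using System;
using System.Collections.Generic;

// Mirrors of the framework contracts, included so the sketch compiles
// on its own.
public interface IEntity<TId>
{
    TId Id { get; set; }
}

public interface IRepository<TEntity, TId>
    where TEntity : IEntity<TId>
{
    void Save(TEntity entity);
    void Remove(TEntity entity);
    TEntity FindById(TId id);
    List<TEntity> FindAll();
}

// A hypothetical domain entity closing IEntity with an int identifier.
public class Order : IEntity<int>
{
    public int Id { get; set; }
    public string Description { get; set; }
}

// An in-memory repository standing in for an NHibernate-backed one;
// note how the Generic types are closed as <Order, int>.
public class InMemoryOrderRepository : IRepository<Order, int>
{
    private readonly Dictionary<int, Order> _store = new Dictionary<int, Order>();

    public void Save(Order entity)
    {
        _store[entity.Id] = entity;
    }

    public void Remove(Order entity)
    {
        _store.Remove(entity.Id);
    }

    public Order FindById(int id)
    {
        Order found;
        return _store.TryGetValue(id, out found) ? found : null;
    }

    public List<Order> FindAll()
    {
        return new List<Order>(_store.Values);
    }
}
```

The calling code only ever sees IRepository&lt;Order, int&gt;, so the in-memory version and a database-backed version are interchangeable.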

With the mock objects available, we can build our unit tests for the entity and Repository framework. The following unit tests are based on the Microsoft Visual Studio Test libraries, although you can also build your tests with the open source NUnit framework. In either case, the goal is building the necessary unit tests for the framework.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Joemwalton.Framework.Data.Repository;
using Joemwalton.Framework.Test.Mocks;

namespace Joemwalton.Framework.Test
{
    [TestClass]
    public class EntityTest
    {
        public EntityTest() { }

        public TestContext TestContext { get; set; }

        [TestMethod]
        public void GetIdTest()
        {
            EntityMock mock = new EntityMock();
            int expected = 1;
            Assert.AreEqual(expected, mock.Id);
        }

        [TestMethod]
        public void SetIdTest()
        {
            EntityMock mock = new EntityMock();
            int expected = 2;
            mock.Id = expected;
            Assert.AreEqual(expected, mock.Id);
        }

        [TestMethod]
        public void GetFailedValidationsTest()
        {
            EntityMock mock = new EntityMock();
            Assert.AreEqual(1, mock.GetFailedValidations().Count);
            string expected = "Testing";
            Assert.AreEqual(expected, mock.GetFailedValidations()[0]);
        }
    }
}

In the above EntityTest class, we decorate the class with the TestClass attribute and the test methods with the TestMethod attribute. Each test method creates an instance of the EntityMock and validates the results, although the EntityMock instantiation could also be performed during test class initialization and shared across all test methods. The following is the RepositoryTest, which follows the same approach as the EntityTest.
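If you prefer the shared-instantiation approach mentioned above, MSTest provides the TestInitialize attribute, which marks a method that runs before each test method. The following is a brief sketch of that variation, assuming the EntityMock class shown earlier; EntityTestShared and Setup are illustrative names:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Joemwalton.Framework.Test.Mocks;

namespace Joemwalton.Framework.Test
{
    [TestClass]
    public class EntityTestShared
    {
        private EntityMock _mock;

        // Runs before every test method, so each test receives a fresh
        // EntityMock without repeating the construction code.
        [TestInitialize]
        public void Setup()
        {
            _mock = new EntityMock();
        }

        [TestMethod]
        public void GetIdTest()
        {
            Assert.AreEqual(1, _mock.Id);
        }
    }
}
```

Because TestInitialize runs per test, the tests remain isolated from one another even though the setup code is shared.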

using System;
using System.Text;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Joemwalton.Framework.Test.Mocks;

namespace Joemwalton.Framework.Test
{
    [TestClass]
    public class RepositoryTest
    {
        public RepositoryTest() { }

        public TestContext TestContext { get; set; }

        [TestMethod]
        [ExpectedException(typeof(NotImplementedException))]
        public void SaveTest()
        {
            RepositoryMock mock = new RepositoryMock();
            mock.Save(new EntityMock());
        }

        [TestMethod]
        [ExpectedException(typeof(NotImplementedException))]
        public void RemoveTest()
        {
            RepositoryMock mock = new RepositoryMock();
            mock.Remove(new EntityMock());
        }

        [TestMethod]
        public void FindByIdTest()
        {
            RepositoryMock mock = new RepositoryMock();
            EntityMock expected = new EntityMock();
            EntityMock actual = mock.FindById(1);
            Assert.AreEqual(expected.Id, actual.Id);
        }

        [TestMethod]
        public void FindAllTest()
        {
            RepositoryMock mock = new RepositoryMock();
            EntityMock expected = new EntityMock();
            List<EntityMock> actual = mock.FindAll();
            Assert.AreEqual(1, actual.Count);
            Assert.AreEqual(expected.Id, actual[0].Id);
        }
    }
}

The next step is to run the tests. This is simple using Visual Studio, which includes several convenient options depending on your version. The Test menu or toolbar provides an option to “Run All Tests in Solution”, which will run the tests and report the results. Alternatively, the Test View window provides another interface for running tests.

In the Test View, you can highlight all tests and select “Run Selection” from the toolbar. This will execute the unit tests and display the information in the Test Results window, which appears below.

As you can see, the Test View window also provides several options to run and debug tests. Unfortunately, you will lose these handy features with an open source or non-Microsoft test tool.

In summary, we created mock objects for our domain model and Repository framework. Once we developed the mock objects, we created unit tests to ensure the relevant base implementation is working as expected. As we refactor the framework, the unit tests will ensure changes do not introduce bugs or break existing features. The introduction of a Continuous Integration (CI) process will further extend the test value with an event-driven build and test execution process. This can be accomplished using Microsoft Team Foundation Server (TFS), CruiseControl.NET, Team City or several other CI products.

What’s next? The next installment will focus on NHibernate, including the SessionFactory, mapping and transaction management. This will segue into the IoC and Spring.NET support, which will provide many time-saving NHibernate features.

Finally, I received several requests for a Java implementation. So…I am planning to build a Java equivalent enterprise framework solution. Thanks again for your comments and suggestions!!!!

Previous: Building Enterprise Frameworks – Data and Domain
