Archive for October, 2012

Building Enterprise Frameworks – Testing and Mock Objects


This is the third installment of the “Building Enterprise Frameworks” series, which chronicles the evolving design of an enterprise framework. In the series introduction, we presented a problem faced by many enterprise software teams and outlined a plan. The previous blog entry introduced the preliminary data access layer and domain model, a collection of abstractions forming a unified framework or infrastructure. In this blog entry, we refactor the data access layer and build the supporting unit tests.

Before covering testing, we will refactor the Repository introduced in the previous installment. At this time, we will eliminate the RepositoryBase abstract class: after testing and coding additional NHibernateRepositoryBase features, it provided no immediate value to the enterprise framework. The abstract class was moved into the Framework Repository folder vacated by the previous RepositoryBase. We also refactored the Generic interfaces and classes to better align with our technical design and objectives. The following is the revised Framework project structure.

The following is the IRepository interface, which defines the required Repository implementation methods. As discussed in the previous installment, these are the basic Create-Read-Update-Delete (CRUD) operations.

using System;
using System.Collections.Generic;
using Joemwalton.Framework.Domain;

namespace Joemwalton.Framework.Data.Repository
{
    public interface IRepository<TEntity, TId>
        where TEntity : IEntity<TId>
    {
        void Save(TEntity entity);
        void Remove(TEntity entity);
        TEntity FindById(TId id);
        List<TEntity> FindAll();
    }
}

Since we eliminated the RepositoryBase abstract class, NHibernateRepositoryBase now provides the method implementations for our NHibernate base class directly. The next installment of the “Building Enterprise Frameworks” series will focus on NHibernate and IoC, so the details of that design – including session and transaction management – will not be discussed here. The base class includes the CRUD method implementations, so the concrete Repository implementations remain focused on providing domain-specific data services. Basically…eliminating redundant code!

using System;
using System.Collections.Generic;
using NHibernate;
using NHibernate.Criterion;
using Joemwalton.Framework.Data.Repository;
using Joemwalton.Framework.Domain;

namespace Joemwalton.Framework.Data.NHibernate
{
    public abstract class NHibernateRepositoryBase<TEntity, TId>
        : IRepository<TEntity, TId>
        where TEntity : IEntity<TId>
    {
        private ISessionFactory _sessionFactory;

        /// <summary>
        /// NHibernate Session Factory
        /// </summary>
        public ISessionFactory SessionFactory
        {
            protected get { return _sessionFactory; }
            set { _sessionFactory = value; }
        }

        /// <summary>
        /// Get current active session
        /// </summary>
        protected ISession CurrentSession
        {
            get { return this.SessionFactory.GetCurrentSession(); }
        }

        public TEntity FindById(TId id)
        {
            return this.CurrentSession.Get<TEntity>(id);
        }

        public List<TEntity> FindAll()
        {
            // ICriteria.List<T> returns an IList<T>, so copy it into a List<T> rather than casting
            ICriteria query = this.CurrentSession.CreateCriteria(typeof(TEntity));
            return new List<TEntity>(query.List<TEntity>());
        }

        public void Save(TEntity entity)
        {
            using (ITransaction transaction = this.CurrentSession.BeginTransaction())
            {
                this.CurrentSession.SaveOrUpdate(entity);
                transaction.Commit();
            }
        }

        public void Remove(TEntity entity)
        {
            using (ITransaction transaction = this.CurrentSession.BeginTransaction())
            {
                this.CurrentSession.Delete(entity);
                transaction.Commit();
            }
        }
    }
}
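To show how a concrete Repository stays focused on domain-specific data services while inheriting the CRUD plumbing, the following is a minimal sketch. The Contact entity, the ContactRepository class and the FindByLastName query are illustrative assumptions only and are not part of the framework.

using System.Collections.Generic;
using NHibernate.Criterion;
using Joemwalton.Framework.Data.NHibernate;
using Joemwalton.Framework.Domain;

namespace Joemwalton.ContactManager.Data
{
    // Hypothetical domain object, assumed only to make the example concrete
    public class Contact : IEntity<int>
    {
        public int Id { get; set; }
        public string LastName { get; set; }
    }

    // The CRUD plumbing is inherited from NHibernateRepositoryBase;
    // only the domain-specific query is added here.
    public class ContactRepository : NHibernateRepositoryBase<Contact, int>
    {
        public IList<Contact> FindByLastName(string lastName)
        {
            return this.CurrentSession
                .CreateCriteria(typeof(Contact))
                .Add(Restrictions.Eq("LastName", lastName))
                .List<Contact>();
        }
    }
}

A later installment will show how the IoC container supplies the SessionFactory to repositories like this one.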

We are finished with the refactoring, so our attention shifts to building unit tests for our framework. As you may have noticed, we have no concrete classes – this is by design. So…how do we test interfaces and abstract classes? And why are we creating interfaces again?

The answer to the second question is loose coupling and the ability to test classes in isolation. This also promotes our core design principles and the supporting design patterns, which we covered in the “How to design and build better software for tomorrow?” series.

How do we test interfaces and abstract classes? The best approach is creating mock objects that implement the interfaces and inherit the base classes. The mock objects mimic our concrete implementations, but with no real business logic. The following is the Framework Test project structure, which will contain our unit test and mock classes.

The following is the EntityMock class, which inherits EntityBase and defines the Generic type as int, the type of the Id property.

using System;
using Joemwalton.Framework.Domain;

namespace Joemwalton.Framework.Test.Mocks
{
    public class EntityMock
        : EntityBase<int>
    {
        public EntityMock()
        {
            this.Id = 1;
            Validate();
        }

        protected override void Validate()
        {
            this.FailedValidations.Clear();
            if (this.Id == 1)
                FailedValidations.Add("Testing");
        }
    }
}

The mock object contains implementations for the constructor and the Validate method, since EntityBase already handles the Id property. The following is the RepositoryMock class, which focuses on the relevant method implementations needed to support the framework.

using System;
using System.Collections.Generic;
using Joemwalton.Framework.Data.Repository;
using Joemwalton.Framework.Test.Mocks;

namespace Joemwalton.Framework.Test.Mocks
{
    public class RepositoryMock
        : IRepository<EntityMock, int>
    {
        public void Save(EntityMock entity)
        {
            throw new NotImplementedException();
        }

        public void Remove(EntityMock entity)
        {
            throw new NotImplementedException();
        }

        public EntityMock FindById(int id)
        {
            return new EntityMock();
        }

        public List<EntityMock> FindAll()
        {
            return new List<EntityMock> { new EntityMock() };
        }
    }
}

The mock object implements the IRepository interface, whose contract defines the basic CRUD operations. The method implementations are not important, since we are not testing the Repository's ability to retrieve or persist objects. IRepository is a Generic interface, so the concrete entity type must be specified.

With the mock objects available, we can build our unit tests for the entity and Repository framework. The unit tests below are based on the Microsoft Visual Studio Test libraries, although you can also build your tests with the open source NUnit framework. In either case, the goal is building the necessary unit tests for the framework.

using System;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Joemwalton.Framework.Data.Repository;
using Joemwalton.Framework.Test.Mocks;

namespace Framework.Test
{
    [TestClass]
    public class EntityTest
    {
        public EntityTest() { }

        public TestContext TestContext { get; set; }

        [TestMethod]
        public void GetIdTest()
        {
            EntityMock mock = new EntityMock();
            int expected = 1;
            Assert.AreEqual(expected, mock.Id);
        }

        [TestMethod]
        public void SetIdTest()
        {
            EntityMock mock = new EntityMock();
            int expected = 2;
            mock.Id = expected;
            Assert.AreEqual(expected, mock.Id);
        }

        [TestMethod]
        public void GetFailedValidationsTest()
        {
            EntityMock mock = new EntityMock();
            Assert.AreEqual(1, mock.GetFailedValidations().Count);
            string expected = "Testing";
            Assert.AreEqual(expected, mock.GetFailedValidations()[0]);
        }
    }
}

In the EntityTest class above, we decorate the class with the TestClass attribute and the test methods with the TestMethod attribute. The test methods create an instance of the EntityMock and validate the results, although the EntityMock instantiation can also be accomplished during test class initialization and shared across all test methods, as sketched below.
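As a sketch of that shared-initialization approach (using the same EntityMock), the instantiation can be moved into a ClassInitialize method, which MSTest runs once before any test in the class:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Joemwalton.Framework.Test.Mocks;

namespace Framework.Test
{
    [TestClass]
    public class EntitySharedFixtureTest
    {
        // Shared by all test methods in this class
        private static EntityMock _mock;

        // Runs once before any of the test methods execute
        [ClassInitialize]
        public static void ClassSetup(TestContext context)
        {
            _mock = new EntityMock();
        }

        [TestMethod]
        public void GetIdTest()
        {
            Assert.AreEqual(1, _mock.Id);
        }
    }
}

The following is the RepositoryTest, which follows the same approach as the EntityTest.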

using System;
using System.Text;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Joemwalton.Framework.Test.Mocks;

namespace Joemwalton.Framework.Test
{
    [TestClass]
    public class RepositoryTest
    {
        public RepositoryTest() { }

        public TestContext TestContext { get; set; }

        [TestMethod]
        [ExpectedException(typeof(NotImplementedException))]
        public void SaveTest()
        {
            RepositoryMock mock = new RepositoryMock();
            mock.Save(new EntityMock());
        }

        [TestMethod]
        [ExpectedException(typeof(NotImplementedException))]
        public void RemoveTest()
        {
            RepositoryMock mock = new RepositoryMock();
            mock.Remove(new EntityMock());
        }

        [TestMethod]
        public void FindByIdTest()
        {
            RepositoryMock mock = new RepositoryMock();
            EntityMock expected = new EntityMock();
            EntityMock actual = mock.FindById(1);
            Assert.AreEqual(expected.Id, actual.Id);
        }

        [TestMethod]
        public void FindAllTest()
        {
            RepositoryMock mock = new RepositoryMock();
            EntityMock expected = new EntityMock();
            List<EntityMock> actual = mock.FindAll();
            Assert.AreEqual(1, actual.Count);
            Assert.AreEqual(expected.Id, actual[0].Id);
        }
    }
}
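As mentioned earlier, NUnit is a viable alternative, and the MSTest attributes map almost one-to-one onto their NUnit counterparts. The following is a sketch of the EntityTest written with NUnit 2.x attributes; the mock classes are unchanged.

using NUnit.Framework;
using Joemwalton.Framework.Test.Mocks;

namespace Framework.Test
{
    [TestFixture]        // NUnit equivalent of [TestClass]
    public class EntityNUnitTest
    {
        private EntityMock _mock;

        [SetUp]          // runs before each test, similar to [TestInitialize]
        public void Setup()
        {
            _mock = new EntityMock();
        }

        [Test]           // NUnit equivalent of [TestMethod]
        public void GetIdTest()
        {
            Assert.AreEqual(1, _mock.Id);
        }

        [Test]
        public void GetFailedValidationsTest()
        {
            Assert.AreEqual(1, _mock.GetFailedValidations().Count);
        }
    }
}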

The next step is to run the tests. This is simple in Visual Studio, which includes several convenient options depending on your version. The Test menu or toolbar provides an option to “Run All Tests in Solution”, which runs the tests and reports the results. Alternatively, the Test View window provides another interface for running tests.

In the Test View, you can highlight all tests and select “Run Selection” from the toolbar. This will execute the unit tests and display the information in the Test Results window, which appears below.

As you can see, the Test View window also provides several options to run and debug tests. Unfortunately, you will lose these handy features with an open source or non-Microsoft test tool.

In summary, we created mock objects for our domain model and Repository framework. Once we developed the mock objects, we created unit tests to ensure the relevant base implementations work as expected. As we refactor the framework, the unit tests will ensure changes do not introduce bugs or break existing features. The introduction of a Continuous Integration (CI) process will further extend the value of the tests with event-driven build and test execution. This can be accomplished using Microsoft Team Foundation Server (TFS), CruiseControl.NET, TeamCity or several other CI products.

What’s next? The next installment will focus on NHibernate, including the SessionFactory, mapping and transaction management. This will segue into the IoC and Spring.NET support, which will provide many time-saving NHibernate features.

Finally, I received several requests for a Java implementation. So…I am planning to build an equivalent enterprise framework solution in Java. Thanks again for your comments and suggestions!!!!

Previous: Building Enterprise Frameworks – Data and Domain


Building Enterprise Frameworks – Data and Domain


In the previous blog entry “Thinking Enterprise Architecture”, I presented the background for a series focusing on Enterprise Architecture. Previously, we discussed the core design principles for software development and introduced several design patterns to satisfy those principles. The concepts were applied to several technologies and demonstrated using Inversion of Control (IoC), Model-View-Controller (MVC) and Windows Communication Foundation (WCF). Since the core design principles and design patterns are not framework or language specific, the “Part VI: Brick Design – How to design and build better software for tomorrow?” article presented a Java implementation of the Contact Manager application.

In this entry, I will provide the preliminary enterprise framework core components supporting the domain model and data tier. I emphasize the word preliminary because the framework will evolve, and refactoring will be required to satisfy our objectives. I will loosely base the framework on the design built by the team introduced in the background entry, since they prefer not to share all of their design and implementations. The project will be based on my standard Contact Manager application. Basically, we are separating the common components and establishing a framework for building enterprise applications.

First, we create a new project as a class library for the common enterprise framework. The project contains the Data and Domain root folders, which are collections of interfaces and abstract classes. These are not our concrete implementations, but the shared objects that our enterprise projects reference.

The Domain folder includes the IEntity interface and the abstract EntityBase class. All domain model concrete classes should implement IEntity. The interface is our contract, so it should drive the implementations within an interface-based system. The interface includes the Id property, which is the only domain object property that should be common to all concrete implementations.

using System;

namespace Joemwalton.Framework.Domain
{
    public interface IEntity<TId>
    {
        TId Id { get; set; }
    }
}

We elected to create a Generic interface, so the type can be defined by each implementation rather than fixed in the contract. In this case, the Id property type is unknown to the framework. The implementation will define the type, so we are not forced to commit in the contract.

EntityBase is an abstract class, which serves as a base class defining shared behavior for implementations to inherit. This class provides the implementation for the common entity Id property and the validation logic. In this domain-driven design, we encapsulate the business rules for validating the domain object. The Validate method is abstract, since only the concrete implementation is aware of the validation logic required for the domain object. As we refactor the framework classes, the following will change to reflect our technical design.

using System;
using System.Collections.Generic;

namespace Joemwalton.Framework.Domain
{
    public abstract class EntityBase<TEntity, TId>
        : IEntity<TId>
        where TEntity : IEntity<TId>
    {
        public TId Id { get; set; }

        private List<string> _failedValidations = new List<string>();
        protected List<string> FailedValidations
        {
            get { return _failedValidations; }
        }

        public List<string> GetFailedValidations()
        {
            _failedValidations.Clear();
            Validate();
            return _failedValidations;
        }

        protected abstract void Validate();
    }
}
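To make the validation pattern concrete, the following is a minimal sketch of a domain object inheriting EntityBase. The Customer class and its rule are illustrative assumptions and are not part of the framework.

using System;
using Joemwalton.Framework.Domain;

namespace Joemwalton.ContactManager.Domain
{
    // Hypothetical domain object, used only to illustrate the validation pattern
    public class Customer : EntityBase<Customer, int>
    {
        public string Name { get; set; }

        // Business rules live in the concrete class;
        // GetFailedValidations clears the list and then calls this method.
        protected override void Validate()
        {
            if (string.IsNullOrEmpty(this.Name))
                this.FailedValidations.Add("Name is required");
        }
    }
}

Calling GetFailedValidations on a Customer instance returns the list of rule violations, so the service or presentation layer can decide how to surface them.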

The next step is the skeleton design for the data tier, which includes the data access layer or Repository. The Repository acts as an in-memory collection of domain objects and is responsible for persisting and retrieving them. We will also build an interface and base class to support the concrete Repository implementations. The IRepository interface is the contract, which includes methods for the standard Create-Read-Update-Delete (CRUD) operations. At this point, we would like all Repository implementations to support the basic operations – Save, Remove, FindById and FindAll. The concrete implementations can include additional methods for domain-specific operations.

using System;
using System.Collections.Generic;
using Joemwalton.Framework.Domain;

namespace Joemwalton.Framework.Data.Repository
{
    public interface IRepository<TEntity, TId>
        where TEntity : IEntity<TId>
    {
        void Save(TEntity entity);
        void Remove(TEntity entity);
        TEntity FindById(TId id);
        List<TEntity> FindAll();
    }
}

Since our domain object type is unknown to the framework, we use generics so the type can be supplied by each implementation. The where clause is included as a constraint, so the supplied type must implement IEntity<TId>. TEntity represents the domain object type. The following RepositoryBase abstract class is a placeholder, which will also be refactored as our design evolves.

using System;
using System.Collections.Generic;
using Joemwalton.Framework.Domain;

namespace Joemwalton.Framework.Data.Repository
{
    public abstract class RepositoryBase<TEntity, TId>
        : IRepository<TEntity, TId>
        where TEntity : IEntity<TId>
    {
        public void Remove(TEntity entity) { }

        public void Save(TEntity entity) { }

        public TEntity FindById(TId id)
        {
            return default(TEntity);
        }

        public List<TEntity> FindAll()
        {
            return new List<TEntity>() { default(TEntity) };
        }
    }
}

Next, the Data.NHibernate folder contains the NHibernateRepositoryBase abstract class. This class contains the NHibernate implementation code, including the SessionFactory. Since we also selected Spring.NET, the Spring framework Object-Relational Mapping (ORM) modules provide the plumbing for this support. The IoC container provides features to simplify the configuration, NHibernate session handling and enterprise transaction management. This class will also require refactoring as features and implementations are discovered, but the following is a start.

using System;
using NHibernate;
using Joemwalton.Framework.Data.Repository;
using Joemwalton.Framework.Domain;

namespace Joemwalton.Framework.Data.NHibernate
{
    public abstract class NHibernateRepositoryBase<TEntity, TId>
        : RepositoryBase<TEntity, TId>
        where TEntity : IEntity<TId>
    {
        private ISessionFactory sessionFactory;

        /// <summary>
        /// Session factory
        /// </summary>
        public ISessionFactory SessionFactory
        {
            protected get { return sessionFactory; }
            set { sessionFactory = value; }
        }

        /// <summary>
        /// Get current active session
        /// </summary>
        protected ISession CurrentSession
        {
            get { return sessionFactory.GetCurrentSession(); }
        }
    }
}
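The public setter on SessionFactory is the seam the IoC container will use. Until the Spring.NET wiring is covered, the following sketch shows the factory being built and injected by hand; the Order entity and OrderRepository are assumptions for illustration, while the Configuration and BuildSessionFactory calls are standard NHibernate bootstrap code.

using NHibernate;
using NHibernate.Cfg;
using Joemwalton.Framework.Data.NHibernate;
using Joemwalton.Framework.Domain;

namespace Joemwalton.ContactManager.Bootstrap
{
    // Hypothetical entity and repository, declared only to make the wiring concrete
    public class Order : EntityBase<Order, int>
    {
        protected override void Validate() { /* rules omitted in this sketch */ }
    }

    public class OrderRepository : NHibernateRepositoryBase<Order, int> { }

    public static class ManualWiring
    {
        // Spring.NET will eventually own this wiring; it is shown by hand here
        // to make the property injection on SessionFactory visible.
        public static OrderRepository BuildRepository()
        {
            ISessionFactory factory = new Configuration()
                .Configure()                 // reads the NHibernate configuration (hibernate.cfg.xml)
                .BuildSessionFactory();

            return new OrderRepository { SessionFactory = factory };
        }
    }
}

Note that GetCurrentSession expects session management to be in place, which is exactly the plumbing the Spring.NET integration will provide in the next installment.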

In summary, the above are the draft enterprise framework classes, and refactoring will be required in future installments of the series. In the next entry, we will build tests, including the mock objects for our abstractions. Once the enterprise framework is established, we will create a Contact Manager application. So…stay tuned for the next article in this series!

Next: Building Enterprise Frameworks - Testing and Mock Objects

Previous: Thinking Enterprise Architecture

Thinking Enterprise Architecture


I recently met a friend and fellow software engineer to discuss his current project. We met earlier this year during my .NET Code Camp presentation. I appreciate his candid feedback and suggestions for future articles, since he is a very talented developer and fanatical baseball fan. Although I really enjoy baseball, I am more of a seasonal fan in October. He is also an advocate of the same software design principles and supporting design patterns outlined during the presentation, which I featured in the “How to design and build better software for tomorrow?” multi-part series.

The Project

The project entailed building three major back-office applications, so he assembled a development team for each application. The teams were instructed to follow the software design principles, which they all agreed were important. After several weeks, he was surprised that each team presented such different solutions. The data access layer choices were NHibernate, Linq2Sql and Entity Framework. Although all teams built web solutions, two teams selected Model-View-Controller (MVC) and the third team Model-View-Presenter (MVP). The teams also selected different Inversion of Control (IoC) frameworks, including Spring.NET and Microsoft Unity. They also followed different validation and exception handling strategies. After he summarized the current status, he expressed his disappointment and frustration, and he was obviously directing some of the blame at me. I apologized and sympathized with his frustration. After a brief conversation about the baseball playoffs and politics, I suggested we work together to rectify the situation and develop a plan to get the projects back on track. He recognized the long-term impact would be detrimental to the organization, since they would be forced to maintain so many competing technologies, tools and implementations.

I am not surprised that three teams delivered three different solutions. The core principles provide guidance and promote well-designed software, which all three products exhibited. They encourage a design that meets the objectives of the organization and project, but is not tightly coupled to any implementation. Change is expected. The “How to design and build better software for tomorrow?” multi-part series demonstrated a variety of presentation, business and data access layer implementations. The same principles and design patterns were applied to console applications, WinForms and MVC. We developed concrete data access implementations using pure ADO.NET and LINQ. The core software design principles and patterns are also platform independent, so we built working .NET and Java applications.

After this explanation, he better understood the objective of the software design principles and patterns. We agreed all teams were successful and accomplished the objectives. What went wrong? How do we fix it?

Enterprise Architecture

Enterprise architecture is an organization-wide framework containing a collection of collaborating modules that support the organization’s needs; it is the foundation for a unified platform. This requires establishing a common infrastructure or framework that promotes the same core software design principles and design patterns. I would also include the supporting processes.

The logical approach is building a common framework before launching the development of the domain-level modules, but we now had three working applications and no enterprise architecture. After reviewing the three applications, we noticed the teams had built similar interfaces and abstract classes. They also shared many common services and concrete implementations. These are all common framework candidates, but we were still faced with many decisions as the enterprise evolves. In the end, we decided to apply basic project management techniques and invite the three teams to participate in the sessions. We referred to the three applications as “prototypes” and discussed the technical merits of each, including risk, cost and schedule assessments. The teams prepared demonstrations and discussed their technical designs. Although the teams initially lobbied for their own designs, the project management techniques reduced the probability of a biased decision.

The following is a summary of the major decisions and foundation for building the framework.

  • Presentation/User Interface – MVC3 and Razor using view models
  • Business – service objects acting as the entry points coordinating the exchanges between the presentation, other service objects and the data access layers
  • Domain – a rich domain-driven design (DDD) supporting enterprise validation
  • Data Access – persistence using NHibernate to SQL Server
  • IoC – Spring.NET

After a few weeks, the enterprise team started building the common framework based on the agreed technical design. Once the framework is complete, the team will refactor the three applications to integrate it. Since the teams followed the core software design principles and design patterns, this task will require much less work. The Dependency Injection (DI) pattern and IoC will simplify the process, so the team can focus on building the domain modules that satisfy the needs of the organization. The project schedule was pushed slightly to the right, but all is not lost. The team is confident the common framework and wealth of reusable components will enable them to accelerate the schedule and deliver the products on time. They also agree the quality will be much improved. Maybe you did not notice, but I no longer refer to them as teams. The experience not only produced a unified enterprise framework, but a unified team.

In the majority of my engagements, we plan and build a common framework. If you do not have a dedicated team to maintain the common framework, then appoint members from all project teams to participate. In either case, the enterprise team should act as the change control board (CCB) to approve or deny change requests. It is critical to ensure changes do not break existing features or modules, so following best testing practices is essential.

When the team extended an invitation to participate in this process, I could not decline the offer. They also agreed to allow me to blog about the experience and share it with the community. So…I plan to chronicle the journey and the decisions along the way, including plenty of code examples. Please visit again soon…

Building Enterprise Frameworks – Data and Domain

Building Enterprise Frameworks – Testing and Mock Objects

Project Management 101 For Developers (Part II)


In Part I of the series, we discussed the Initiation and Planning phases of a project. In Part II, we will introduce the Implementation and Closing phases.

Implementation

The project plan was the deliverable of the Planning phase, so it is the input for Implementation. In this phase, the project team produces the product or service. The project manager manages expectations, coordinates team members and monitors performance. The baseline provides the guideline for measuring performance and contains information for the project constraints – scope, cost and time/schedule.

The following are some of the monitoring tools.

  • Earned Value (EV) – monitoring formula including scope, cost and schedule.
  • Variances – cost performance index (CPI), schedule performance index (SPI), budget at completion (BAC) and estimate at completion (EAC).

Earned Value

The earned value represents the value of the work completed to date. Refer back to the work breakdown structure (WBS), where we defined the work packages. During that discussion, we stated the work packages are used for tracking costs and budget. The planned value (PV) represents the budgeted cost of the planned work. If the work package budget is $100 and the work is scheduled to be 25% complete today, then the planned value is $25 (25% of $100). The actual cost (AC) is the actual money spent.

The earned value (EV) is similar to planned value, but it is based on the work actually completed rather than planned. This information is best illustrated in a chart, where the actual cost (AC), planned value (PV) and earned value (EV) are plotted as value over time. At a glance, you can determine whether the project is over budget and/or behind schedule.

Variances

The next step is calculating the variances using the EV, PV and AC. The following are a few common formulas for variances.

  • Cost Variance (CV) = EV – AC
  • Schedule Variance (SV) = EV – PV

You would prefer to see positive results for both formulas, which indicate cost savings or an ahead-of-schedule position. The following are two indexes, which express performance as a ratio using the EV, PV and AC; a short worked example follows the list.

  • Cost Performance Index (CPI) = EV / AC
  • Schedule Performance Index (SPI) = EV / PV
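As a quick worked example (the figures are hypothetical), suppose a work package reports EV = $400, PV = $500 and AC = $450:

  • CV = 400 – 450 = –50 (over budget)
  • SV = 400 – 500 = –100 (behind schedule)
  • CPI = 400 / 450 ≈ 0.89
  • SPI = 400 / 500 = 0.80

Both indexes fall below 1.0, so this project is spending more and progressing more slowly than planned.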

Again, index values greater than 1.0 represent good performance. You can find many books and articles published on Earned Value (EV) and Variances.

Adjustments

You continually assess and monitor the project using the various tools. In many cases, you will be faced with a decision to adjust the current course based on your assessment. The following are some of the common adjustments.

  • Termination – the decision should not be driven by sunk costs; money already spent should not be the deciding factor.
  • Schedule – the most likely scenario is accelerating a schedule.
  • Scope Reduction

If you need to accelerate the schedule or shorten its duration along the critical path, you can implement one of the following techniques.

  • Crashing – adding more resources
  • Fast-tracking – running tasks in parallel that were planned to be sequential

I have participated in several projects where one or both techniques were introduced. The cost and risk are usually high. In my experience, crashing is subject to the law of diminishing returns; I prefer the saying “too many cooks in the kitchen”. If the existing resources are already stressed, then the additional burden of integrating new team members will be challenging. If planned carefully, both techniques can turn a struggling project around.

Change

The word change usually sends shivers down your spine, but change is almost unavoidable in many projects. The source of change can be the customer, the team, the project manager, the program manager or outside stakeholders. Since change is unavoidable, preparation is key to managing it. A configuration management (CM) plan will ensure the proper processes are followed for change. All change requests should be documented and individuals assigned to make the decisions. This is usually accomplished with a Change Control Board (CCB), so the change request is presented to the board for a decision. The submitting party should prepare the change request, including an analysis of the risk, cost, schedule and other project impacts. Configuration Management (CM) is an essential discipline for documenting, communicating and controlling changes.

Risk

We previously discussed risk, which was a key planning item. We identified and presented a strategy for each risk, so now the focus shifts to monitoring and controlling risk. At this stage, the probability or impact of a risk may change. Since the project is evolving, new risks can emerge and require additional planning and reserves.

Quality Assurance

Quality Assurance (QA) monitors and governs the project to ensure it will satisfy the quality standards. The QA tools include quality audits, cost-benefit analysis and others.

The software development life-cycle (SDLC) also falls under the QA category. The popular SDLC models are waterfall, incremental, spiral and agile. The waterfall model is a traditional sequential approach with a significant investment in the planning phase. The modern agile methodologies, such as XP and Scrum, support and adapt to change. I have participated in a variety of projects supporting the most popular SDLC models, but more recently tailored agile methodologies are dominant. It is important to understand that a project can follow an agile methodology end to end, or just the implementation phase can be agile. I always preface agile with “tailored”, because a project and organization should follow the practices that are most beneficial.

There are several popular process-improvement models applied to the SDLC as a guide for organizations. I am familiar with the Capability Maturity Model (CMM), a five-level model created by the Software Engineering Institute at Carnegie Mellon University. The scale is a rating from 1 to 5, increasing as you achieve greater process maturity. Level 1 is initial, so the process is not very structured. Level 2 is repeatable, so the foundation processes have been established. Level 3 is defined, achieved through the use of standardized processes. Level 4 is managed, where metrics are published for process and quality. Finally, Level 5 is optimizing, using continuous process improvement. The majority of my CMM/CMMI projects have been between level 3 and 5.

I can probably continue discussing QA, SDLC and process improvements for hours. I am a huge supporter of many of the principles and models presented, but more than anything – discipline is the key to success. If you establish processes then it is essential that all team members and stakeholders are disciplined. An undisciplined team member can cause significant damage to a project – regardless of their skill level.

Closing

This is probably the most overlooked phase, since most team members are already thinking about the next engagement. The administrative and contract closure activities are expected, but what other activities should be included in the project closing?

  • Customer Acceptance – final approval
  • Customer Transition – product and support shifts to customer 
  • Lessons Learned – discussing the good and bad
  • Self-Assessment – evaluating your individual performance

The lessons learned activity documents the positive and negative experiences of a project. This is helpful information for future projects, so the next team can take advantage of the positives and avoid the negatives. It can also feed future cost and risk planning sessions, since historical information increases the accuracy of your estimates.

You personally should also benefit from the lessons learned and the experiences of the project. I always perform a self-assessment of my performance during a project. If I am unhappy with the outcome of a specific situation, I review it and identify corrective actions for the future. What can I do better next time?

I hope this series provides a foundation for project management, so you have a better understanding of the core concepts when engaged in related conversations.

Part I
