Sunday, December 11, 2011

What about PMs in Scrum-based development?

In Scrum terminology there is no such thing as a project manager (PM). However, in addition to the product owner (PO) and the scrum master (SM), Henrik Kniberg, in his excellent "from the trenches" book, talks about a "sponsor": someone who provides for the team's long-term staffing and work environment, and who commits to removing impediments outside the team. He describes himself, in his position as development line manager, as having this role.

In my experience many large organisations are entirely project-organised, bringing together people from different parts of the organisation as well as external consultants to form a team. In such organisations the traditional PM, if willing to work with an empowered team, is often ideal for this sponsor role.
That brings me to propose the following division of responsibility:

Product Owner: Responsible for what the team is to build. Merges requirements from different stakeholders and manages priorities.

Scrum Master:
Guides the team and the PO in following the agreed process. Makes sure the surrounding organisation understands how the team works and does not interfere. The SM role is usually not full-time and is therefore filled by a member of the team.

Project Manager:
Responsible for long-term staffing and external contacts in order to provide the team with the best possible environment to meet the needs of the organisation.

Friday, October 21, 2011

Ideas and books forming me as a Software Engineer


This September marked 15 years since I started university to study Software Engineering. The official name was “Informatics”, but I think software engineering (or authoring and gardening, as argued in another post) better describes what I was interested in and, luckily, what I'm currently doing. Since then I've gained some 10+ years of experience working in the industry and a whole 15 years of continuous learning. As an exercise for myself I've set out to go through the major ideas I've picked up and used, along with the people and books that collaboratively have formed me into the software engineer I am today. Hopefully you find this post at least a bit interesting and inspiring as well.


Object Orientation (OO)
In 1996, and the following years, OO was the new hot thing in software design, at least at my university. I learned to model domains (even though I didn't know that word by then) in objects using the OMT (Object Modeling Technique) notation by James Rumbaugh (later one of the main contributors to UML). This way of abstracting the real world into objects with attributes, methods and relations very much appealed to me, as it offered the tools to unambiguously document the analysis result by drawing a diagram. I also believed my teachers when they talked about OO as the enabler of universal libraries of small reusable objects. Now I, and I'm sure they too, know this is never going to happen, but the OO paradigm still brings structure for reuse within a system and hence is a great tool for honoring the DRY principle.
Ever since those first courses I have thought of OO as the way to design and build software systems. Later on I've come to see OO as the way of designing and building transaction-based information systems where manipulation of state is the primary focus. Other types of systems might be better suited to some other paradigm, but this type of system for administering information is what I've been working with almost the entire time since I graduated. In university I learned the theory of OO, but we didn't actually turn any of our OO designs into working software. We sure learned programming, but then it was to solve more traditional algorithmic problems. When leaving university I was all eager to learn how to do OO in real-world projects. And boy, was I disappointed!
For several years I was employed in projects where I learned lots of things about programming and projects in general, but nowhere was OO used for anything but data diagrams (now drawn in UML notation, which I learned through Martin Fowler's “UML Distilled”), i.e. objects with attributes and relations but no behavior, that were later turned into database tables and data structures. All behavior was programmed into transaction scripts, either directly in the GUI components or as free-standing functions or stored procedures. I was desperately looking for a real-world example of a true OO implementation, because on my own I couldn't really figure out what it would look like in order to work properly. The first piece of the solution came when I got to read Craig Larman's “Applying UML and Patterns”. In this book he describes all aspects of OOA/OOD, how the software can be structured with boundary classes, controllers and entities, and shows by example code how it all comes together. Still, this isn't the approach intuitively encouraged by the structure in popular frameworks such as Java EE and Spring Framework. Out of the box they rather suggest a static service structure operating on DTOs, and that is, I think, the main reason why most systems are built with data and behavior separated, not using the true OO paradigm.
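To make the contrast concrete, here is a minimal sketch of the same business rule written both ways. The Account example is a hypothetical one of my own, not taken from any of the books:

```java
// Transaction-script style: data and behavior kept apart.
class AccountDto {
    long balance;                      // mutable data open to anyone, no behavior
}

class AccountService {
    void withdraw(AccountDto account, long amount) {
        if (amount > account.balance) {
            throw new IllegalStateException("Insufficient funds");
        }
        account.balance -= amount;
    }
}

// OO style: the object owns its state and guards its own invariant.
class Account {
    private long balance;

    Account(long openingBalance) {
        this.balance = openingBalance;
    }

    void withdraw(long amount) {
        if (amount > balance) {
            throw new IllegalStateException("Insufficient funds");
        }
        balance -= amount;
    }

    long balance() {
        return balance;
    }
}
```

In the first style the invariant can be bypassed by any code that mutates the DTO directly; in the second, the object itself guarantees the account can never be overdrawn.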


Domain-Driven Design (DDD)
In a Jfokus 2008 tutorial I was first introduced to Domain-Driven Design. The three-hour tutorial only scratched the surface of DDD, but it was enough for me to understand that this was the description of how the full OO paradigm is used to build real-world systems. Soon after, I got to read Eric Evans' book “Domain-Driven Design – Tackling Complexity in the Heart of Software”, which I have posted about previously, and since then I firmly believe that DDD is the approach to use for designing and implementing complex software systems.
DDD is not a method, nor is it simply a technique or an architectural pattern. I would rather call it an approach to software development, including analysis, design and implementation.
Perhaps what most of my colleagues first think of when DDD is mentioned is the design sketches I tend to draw, with the business domain at the center and technical integration packages all around, the domain being built from the conceptual building blocks (entities, value objects, repositories and services) introduced by Eric Evans. But even though that part of the approach might be the easiest to pick up, and is also important in building working software with DDD, I think what makes the biggest difference to me is the ubiquitous language. This practice of building a model with a shared language based on, but with more precise definitions than, the language spoken by the domain experts is the basis of my philosophy to always build the software so that structure and logic follow the business domain. I think that is the only way to guarantee a design flexible enough that a small change to the business is always a small change to the software, never a big one. My experience is that the second you deviate from that principle, most often due to time pressure or pure laziness, you are asking for trouble further down the road. It might take a year or so, but it will come back and bite you!


Test Driven Development (TDD)
If DDD provides the approach to analysis, design and implementation, TDD is clearly what integrates that approach with quality awareness and assurance. TDD is my way to ensure a testable design and a correct implementation that is easy and safe to improve further. In a previous post, "Are you testinfected?", I told the story of how I got introduced to TDD, how I was skeptical at first and how I later on proved its usefulness to myself. Now I rarely, and definitely reluctantly, write any code without first writing a failing test.
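The rhythm is always the same: write a failing test, then the simplest code that makes it pass, then refactor. A minimal sketch, using a hypothetical PriceCalculator of my own and plain assertions rather than a test framework such as JUnit:

```java
// Step 1 (red): the test is written first, before PriceCalculator exists.
// With JUnit this would be an @Test method; a plain assertion keeps the sketch self-contained.
class PriceCalculatorTest {
    static void discountIsAppliedAboveThreshold() {
        PriceCalculator calculator = new PriceCalculator();
        // a 10% discount is expected on orders of 1000 or more
        assert calculator.priceFor(1000) == 900;
    }
}

// Step 2 (green): the simplest implementation that makes the test pass.
class PriceCalculator {
    long priceFor(long orderAmount) {
        if (orderAmount >= 1000) {
            return orderAmount - orderAmount / 10;
        }
        return orderAmount;
    }
}
```

The point is the order of events: the test defines the expected behavior before a single line of production code is written.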


A Clean Coder producing Clean Code
Ever since the start of my career I’ve wanted to produce code that I’m proud of. However, the benchmarks for that assessment have changed over the years. Now I consider well-tested and easy-to-read code to be the standard to achieve. Since any piece of code is read much more often than it is written, I think readability is the prime quality measure for code. Lately I’ve been reading both Clean Code and The Clean Coder by Robert C. Martin. Both are excellent books. The first gives good advice on how to make code readable; the second demands real reflection on what it means to be a professional software developer.


Agile with Scrum and Kanban
Another interest of mine is software development processes and techniques. At first I learnt how to deliver in a world of waterfalls; later I got to experience RUP. Despite all the bad things said about RUP, I’ve always found its core messages on iterative, risk-driven development to be right and frankly quite agile. However, agile approaches such as Scrum and Kanban, and the development techniques gathered in XP, take this a long way further. The major inspiration here has been Henrik Kniberg, first in his Jfokus tutorial in 2008 and then through his two books on Scrum and Kanban.
Currently I’m working in a world of mixing and matching. I’m picking the best parts of all those above to create a process and way of working that supports being as agile and lean as possible in an otherwise RUP-ified world. At its current state, I think it is something of a success in the making.



“All problems are not worth solving”
That was one piece of advice I got from an experienced colleague many years ago, when I was aspiring to go from being a sheer hacker to becoming a Capgemini Certified Software Engineer. He probably gave me a bunch of other advice too, but this is the one I still remember. And I also think, for me personally, it might be the most valuable advice I’ve ever got. Since problem solving is my trade I happily go off designing solutions; then it is good to remember to think for a minute or two about whether the problem really deserves to be solved, or if it is something we can just as happily live with. At the bottom of this is that all resources are limited and need to be applied where they give the best return on investment. On the other hand, we need to be careful not to spend too much time deciding if a problem is worth solving, i.e. it is not always worth solving the prioritization problem if the solution offered is cheap enough.


The future
This pretty much sums up where I am today, but of course I will continue to evolve and in the near future I see the following ideas as the most interesting.

- Strategic design – How DDD concepts such as Bounded Context are applied at a larger scale to make strategic decisions on how to factor architectures and apply different design and development strategies. In most discussions “architecture” seems to mean frameworks, databases, application servers and technical layering. I think architectures would be much more interesting and usable if they concerned business domains.
- CQRS – I have only touched the surface of this approach to structuring systems with separated command and query sides, with event stores and messaging instead of relational database schemas and object/relational mappers. However, I think it is an interesting approach and I definitely plan to dig in deeper.
- DCI – Even though DDD has come to be my preferred approach, I must admit the structure of a code base with behavioral code localized in every domain object has its drawbacks. With requirements based on use cases or user stories, it is sometimes hard to answer the not so infrequent question “how is this UC implemented?”. DCI (Data, Context, Interaction) solves this problem by keeping the behavioral code in role objects implemented within a context representing a single UC. The code in the role objects is then injected into the data objects representing the domain. To me, it sounds like an interesting add-on to the ideas of DDD, and in the future I'd like to continue exploring the benefits of this approach and the demands it poses on the implementation language.
- Functional programming – In order to build simpler programs that are safe to parallelize on multiple cores, functional programming has gained new interest in recent years. Up until now I’ve lived only in the world of imperative programming, and grasping the concepts and underlying mechanics of functional programming will pose a challenge. A challenge I’m happy to take on.
- Scala – Of all the new languages that have emerged on the JVM, I think Scala is the most interesting. It combines functional programming with additional powers in OO programming without losing the benefits of static typing. All of which I think are interesting qualities. I think Scala might be the enabling programming language for many of the ideas I listed here for future exploration.

Tuesday, October 18, 2011

The value of using Domain Driven Design

Some days ago I came across the relevant question of the value of using Domain Driven Design. Are there any hard numbers to use in convincing management about the benefits of DDD?

Hard numbers are really hard to get in software development since you never ever develop the same system twice and even if you did, it wouldn't be the same since you would have learnt a lot from the first attempt. But I'll share some thoughts and observations on using DDD.

By emphasizing communication with domain experts in developing the ubiquitous language, DDD helps you get started on the right foot. Any system is simple in the beginning. The first set of functionality is never complex to implement, since you have no other code to conform to and you tend to start with something basic. That means it is easy to just hack something together, mixing levels of abstraction and business logic with technical complexity. Since the first requirements are simple, the first complexity tends to come from unfamiliar tools like ORMs or other technical frameworks. Since we are all technicians at heart, these things often become most important, and the architecture tends to be more about technical concerns than the domain. All of which, we know, leads to a mess when requirements get more complicated.

By using DDD we turn the focus away from technical concerns and instead direct our effort to the business domain. With a bit of domain modelling, including test-driven implementation, we get to investigate a larger set of requirements before going into the technical details of persistence and other layers. An agile principle is to delay hard-to-change decisions to the last responsible moment, e.g. not deciding on a hard-to-change persistence model until we know what the domain really looks like. All in all, this leads to better architecture, because the architecture is centered around the domain, not around technical frameworks.

Another aspect is complexity. What you consider complex depends on what you already know. Currently, I'm working with a large system where some parts are implemented using DDD and other parts are, what I would call, "undisciplined" transaction scripts. I think transaction scripts have their proper place and use, but implementing them in a way that mixes business logic with technical concerns is never right. Hence "undisciplined". In this project I've heard people complain about different things they think are complex. Those unfamiliar with DDD but with a long history of working on the system think the TS code, with its mixed concerns and very few abstractions, is fine, since they see exactly what is going on and they "know" what the context is and how everything works on a detailed level. And they think the DDD approach of separating things into different layers and building abstractions for business concepts just adds complexity.

Then we have the newcomers, without any prior DDD experience either, who struggle to understand the domain. Even though DDD is new to them, they pretty quickly pick up on the concepts and find them a great help in understanding the domain. To them it is the TS code that is most complex, since it makes no distinction between lines of business logic and lines of database interaction.

These are two perspectives on "the value of DDD". In my previous post I wrote about DDD being so much more than a set of technical building blocks, and I think the most value from DDD is gained when wisely deciding what parts to apply in each unique situation.

Wednesday, October 5, 2011

Is Domain Driven Design always applicable?

Disclaimer: This blog post discusses the use of DDD in information-centric systems, which in my experience is most of the systems out there. It probably does not apply to technical software such as compilers and operating systems.

In my opinion the answer to the question in the title depends on how you define DDD. Do you do DDD if you do not use all the technical building blocks? I think so. Is DDD more than the technical building blocks? I definitely think so.

Personally, I would opt for using DDD in almost any situation but those that I know for sure will not evolve for more than a day or so and where I'm dead sure of the requirements. In any other situation I would start building the ubiquitous language together with the domain experts to get a common vocabulary. This might not sound like much, but it goes a long way toward avoiding expensive misunderstandings. (See Dan Bergh Johnsson's excellent presentation from Jfokus this year.)

For the system's architecture I would always deploy the strategy of keeping business logic and infrastructure code separated. It works wonders for readability and testability. My rule of thumb is that business logic should be easily testable using unit tests (and end-to-end tests independent of the run-time environment), and infrastructure code should be so simple you generally need no testing other than a few basic integration tests.
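A minimal sketch of what that separation can look like (the names are hypothetical ones of my own; only the shape matters): the domain code depends on an interface, while the JPA or JDBC implementation of that interface lives in the infrastructure layer.

```java
// The domain sees only this interface; the JPA/JDBC implementation lives elsewhere.
interface CustomerRepository {
    Customer findByNumber(String customerNumber);
}

class Customer {
    private final boolean goodStanding;

    Customer(boolean goodStanding) {
        this.goodStanding = goodStanding;
    }

    boolean isInGoodStanding() {
        return goodStanding;
    }
}

// Pure business logic: unit-testable with an in-memory stub, no container needed.
class CreditService {
    private final CustomerRepository repository;

    CreditService(CustomerRepository repository) {
        this.repository = repository;
    }

    boolean mayOrderOnCredit(String customerNumber) {
        return repository.findByNumber(customerNumber).isInGoodStanding();
    }
}
```

In a unit test the repository is replaced by a stub (a lambda will do, since the interface has a single method), so the business rule is exercised without any database at all.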

For design my strategy is to keep the code close to the business in terminology, structure and logic. This is important to make sure that the impact of a change is about equal in size in both business and software. Here the ubiquitous language is an absolute need.

Then we come to the technical building blocks, and now it is time for some choices. In general I think you could talk about two types of systems, or parts of systems: those mostly concerned with changes in object state, and those mostly concerned with processing data streaming through the system.

In the first case entities, aggregates and repositories are a natural fit; in the second I think transaction scripts (in a DDD context called domain services, since they concern only domain logic, no infrastructure code, as discussed above) are a nice fit. When the most important feature is to crunch some data, perhaps modify it and then route it further to some recipient (like another system or some persistent store), I think it is the "processing pipeline", i.e. the stateless service code, that should be emphasized. So in those cases the internals of the data aren't very interesting and might be better left in some simple DTO format.
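A sketch of the pipeline style (hypothetical names of my own): a stateless domain service that takes a simple DTO, crunches it, and hands a new one onward.

```java
// A simple DTO: the internals are not modeled further, just carried through the pipeline.
class PriceRecord {
    final String productId;
    final long priceInCents;

    PriceRecord(String productId, long priceInCents) {
        this.productId = productId;
        this.priceInCents = priceInCents;
    }
}

// A stateless domain service: pure domain logic, no infrastructure, trivially unit-testable.
class VatService {
    PriceRecord addVat(PriceRecord record, int vatPercent) {
        long withVat = record.priceInCents * (100 + vatPercent) / 100;
        return new PriceRecord(record.productId, withVat);
    }
}
```

The service holds no state of its own; the interesting part is the transformation, not the data.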

Whether going with entities or services, I think the practice of using typed value objects for input parameters gives a lot in terms of clarity and IDE support when writing and reading the code. From a technical perspective this is the easiest DDD building block to use, and I think it has great advantages even if it is the only part of DDD that you apply to the system.
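As an illustration, a small ZipCode value object replacing a raw String parameter (the five-digit format is an assumption made for this sketch):

```java
// A typed value object instead of a raw String parameter.
final class ZipCode {
    private final String value;

    ZipCode(String value) {
        if (!value.matches("\\d{5}")) {   // five-digit format assumed for the example
            throw new IllegalArgumentException("Not a valid zip code: " + value);
        }
        this.value = value;
    }

    String value() {
        return value;
    }
}

// A signature like registerAddress(Street street, ZipCode zipCode) now says what it
// means, and the compiler rejects arguments passed in the wrong order.
```

Validation happens once, in the constructor, so any ZipCode instance that exists is known to be valid.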

In conclusion, for the types of information systems I've been developing over the years DDD is the way to do it, however that doesn't necessarily mean everything should be entities or value objects. DDD is so much more.

Sunday, August 28, 2011

Unit vs Integration tests

Lately I've spent some time thinking about and discussing the pros and cons of unit and integration tests. Basically I think you need both, but they have totally different characteristics and should therefore be combined carefully. Before we dive in, let's define the terms for the sake of this discussion, since I know there are at least as many definitions of "integration test" as there are projects out there.
Unit test: Test focused on one or a closely related set of classes. Defined and run outside any run-time environment (container). All dependencies are mocked/stubbed.
Integration test: Test focused on one or several (sub)systems. Defined in code and run in the run-time environment. Some, but not all, dependencies might be stubbed.
With definitions done, let's go for some characteristics I've found to be true.
Unit test - pros:
1: Small units (single classes or a small set) are tested independently without interferences from the surrounding environment.
2: Can be run instantly with a few keystrokes - no deployment necessary, the whole (sub-) system doesn't even have to compile. Makes it easy to test incrementally.
3: Supports Test Driven Development.
4: With small units it is easier to cover all interesting execution paths.
Unit test - cons:
1: Mocking frameworks can be a challenge to master, but they are oh-so useful once you get to know them.
2: Different tests must be in sync to ensure that individual units work together. This can be tricky, especially when dealing with semantic changes in the interface of a class/unit. Requires disciplined top-down TDD.
A colleague of mine pointed out that my con #1 could actually be a pro when using TDD (as in writing test and production code simultaneously), since having a hard time mocking dependencies is an incentive towards writing code with fewer dependencies, which in general is good design practice. However, my experience is that a good mocking framework is necessary to write unit tests effectively, and as with every tool there is a learning curve.
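What a mocking framework essentially gives you is generated test doubles. A hand-rolled equivalent (hypothetical Mailer/OrderNotifier names of my own) shows what is going on underneath:

```java
// A hand-rolled test double: records the interaction so the test can verify it.
// A mocking framework generates this kind of object for you.
interface Mailer {
    void send(String recipient, String message);
}

class RecordingMailer implements Mailer {
    String lastRecipient;
    String lastMessage;

    public void send(String recipient, String message) {
        this.lastRecipient = recipient;
        this.lastMessage = message;
    }
}

// The unit under test, with its dependency injected.
class OrderNotifier {
    private final Mailer mailer;

    OrderNotifier(Mailer mailer) {
        this.mailer = mailer;
    }

    void orderShipped(String customerEmail) {
        mailer.send(customerEmail, "Your order has shipped");
    }
}
```

A unit test passes in the RecordingMailer, calls orderShipped and then asserts on what was recorded, all without any real mail infrastructure.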
Integration test - pros:
1: Tests the code in its true context with few or no stubbed dependencies.
2: Supports testing of non-functional requirements, such as performance, security and transaction boundaries.
Integration test - cons:
1: Can only be run after the (sub-) system has been fully developed, built and deployed. Tests are written as an afterthought instead of driving the development.
2: The code-build-deploy-test cycle takes a long time, which makes debugging cumbersome.
3: Might require other parts of the system than those under test to be fully functional in order to bring the system into the state for the test to start. This creates strong dependencies on parts of the system that are not targeted by the test.
Most of my cons for integration tests only become severe when integration tests are used to test business logic, which I've seen from programmers claiming that unit tests with mocked dependencies are too hard to write. When used to test real integration issues (such as inter-system communication or database connectivity), the cons more or less go away, since those issues cannot be detected anyway before you have at least a "walking skeleton" implementation of the architecture.
In conclusion, I think both unit and integration tests (as well as automated system/acceptance tests) are really needed in a software project. However, each needs to be used properly. For ensuring the implementation of business logic I think unit tests are the proper choice. They might be a bit harder to write, because you have to mock dependencies instead of relying on the state currently set up in the database, but when adding functionality or refactoring your domain code, unit tests are so much easier and thereby cheaper to run: you do not need to bring your complete system into a known state; you do not even have to compile, let alone install, your complete system. Striving for good coverage (some say 100%, I don't) of your domain logic, preferably using a test-first approach, you also tend to achieve a better design, since well-designed objects are easier to test.
Side note: Today we got really lucky when we found a bug in the business logic, because we happened by chance to run the integration test designed to test that feature. It could be, and was, claimed to be an argument for the benefit of using integration tests for testing business logic. However, I think it is not. For starters, the bug had been in the code for about five months. Not even the guy who wrote the feature knew where in the system the code was located. We had to spend several hours finding the implementation. After that, the bug was easy to spot. That gives a perspective on cycle time and why quick feedback is important. Secondly, the bug could not have been found by a unit test, since there was no coverage on this part of the system. I'm pretty sure the reason for that is that it is really hard to write unit tests for code directly manipulating a JPA entity manager. So, the problem was really that business logic was buried in the database access code and hence not unit tested. If a unit test had been attempted, I'm pretty sure the logic would have been moved into a more test-friendly design. Which of course would have been better from other perspectives as well.

Monday, July 11, 2011

Software Engineering Essential Reading 2 - Domain Driven Design

In 2004 Eric Evans published his book “Domain-Driven Design – Tackling Complexity in the Heart of Software”. It's quite a heavy bit of literature, but every page is filled with insightful discussion of the benefits and practices of designing object-oriented software with a model true to the business domain, and is thereby worth every minute spent reading and reflecting. Eric has solid experience from developing software in support of complex business domains, and he openly shares and discusses both successes and failures.
The book discusses domain driven design (DDD) on several different levels:

  • The importance of using the same domain model in all aspects of a project – in writing, in the spoken language and in the source code.
  • The stereotyped building blocks used in modeling, design and implementation of domain centric software systems. Entity, Value Object, Repository, Service, Aggregate and a few more.
  • How to use Strategic design and concepts such as Bounded Context to define current and future logical systems or enterprise wide architectures without having to unify the whole enterprise in a single domain model.

The book is the definitive starting point for learning and applying DDD, one of the really important design paradigms now and in the future. This book, together with the active community, both locally and on the Internet, has helped me successfully design and deliver complex software under complex technological and organizational circumstances. This is definitely one of the books I'm happy to come back to now and then, and I think every professional software designer should have a copy. Before I first read it I was wondering why no one out in the “real” world used OO design even remotely similar to what I learned and practiced in university. In his book Eric set this straight, and he made me instantly feel “at home” with true OO design, carefully tuned to model the business, as the way to go about software design. Thanks Eric for sharing!

Thursday, July 7, 2011

Value Producers and Supporters

In the lean and agile world anything that produces value for the client is a good thing; everything else should be carefully scrutinized before being carried out. This doesn’t mean you shouldn’t produce any documentation, it just means you should produce only the documentation that someone is likely to read. It doesn’t mean you shouldn’t produce any automated tests, it just means you should make sure you produce them in such a way that you optimize value during the development period and for future maintenance. To do this right you need to understand which part of the project organization is producing direct value for the client and which part is supporting the Value Producers.

In many projects you can find three groups of Value Producers: programmers who produce the code that builds the product, testers who make sure the product does what it is supposed to do, and technical writers who produce the user documentation. Everyone else, requirements analysts, architects, designers, etc., are usually in a supportive role. The Supporters do not themselves produce anything of direct value to the client, but that doesn’t mean their work is not important. It is. As long as they make it easier for the Value Producers to do their work, now and in the future.

That said, one of the challenges in running an efficient agile project is to understand who is supporting whom, and how to make it as easy as possible to produce maximized value for the client. In order to do that you first need to understand which deliverables (both artifacts and activities) produce value for the client. In some projects the client might get great value from the structuring and questioning of business processes done by the requirements analysts in order to define the system requirements. In other projects the client is perfectly clear on their business, and the requirements specs are only needed to formally document the system's functionality and drive testing. Next you remove any activity or artifact that neither adds value for the client nor is essential in supporting efficient value production, and make sure everyone inside and outside of the project understands their role and for whom they are supposed to produce value, in other words: whether they are a Value Producer or a Supporter.

Wednesday, June 29, 2011

Define and apply “The boy scout rule”

“The boy scout rule”, as defined by Robert C. Martin, tells us to make small improvements to existing code as we pass by implementing new features. It doesn't mean we need to make the whole class/file perfect, it just means we should make some small improvement in addition to the new feature, which of course should have a clean implementation. This is a good principle to include in the culture of any project. But how do you make it happen in a team with many developers, each bringing their own experiences and standards? In my project we are currently making a serious effort. Here are the actions we are taking. Please leave a comment with your own experiences/ideas.

First, talk about it. Introduce the name “The boy scout rule” - it is a quite funny name, and easy to remember. Load it further with the reference to real world scouts not caring who left the garbage found at the camping site, but always making sure the site is cleaned and left a bit better than found. Transfer the values into software development as a way to continuously refactor an existing code base while at the same time implementing new features.

Second, define, in concrete terms, what “The boy scout rule” means in the project. In our project we have made the following definition, with reference to our design principles:
  • Increase readability
    • Choose intention revealing names from the business domain for classes/methods/attributes/parameters/variables.
    • Extract methods to enforce the principle of Command Query Separation (CQS) and intention revealing method names.
  • Enhance the structure
    • Move methods to the “right” location to enforce the Single Responsibility Principle (SRP) for classes.
  • Make it easier to do changes without introducing bugs:
    • Increase readability (as above).
    • Enhance the structure (as above).
    • Use domain objects for method parameters (e.g. use a ZipCode object instead of a simple String).
Third, lead by example. Start with the informal leaders (often the most experienced developers) and have them spread the practice through interaction and pairing in daily work. Personally, when pairing with colleagues I try to explicitly point out what part of the work we are doing is implementing the new feature and what part is applying “The boy scout rule”. By doing so I hope to spread both the knowledge and the practice of applying the rule in daily work.

Thursday, May 19, 2011

Proposed design principles for internal design

In my current project we are to implement additional functionality in an existing code base. The system is SOA-based, composed of many sub-systems, each implementing a set of services, as detailed by the architectural guidelines given to the project. In practice this means that each sub-system hosts a set of domain logic behind a façade of Java EE Stateless Session Beans or Message Driven Beans. But while the service façade approach is described in detail in the guidelines, guidelines for the internal design are entirely absent. Therefore I have, with great inspiration from the DDD community, put together this list of design principles proposed for use within the project, but I think they could equally be applied to the internal design of most administrative IT systems. What do you think? Should something be added or subtracted?

General design principles

  • Adhere to the Single Responsibility Principle (SRP), i.e. every module (class/method) has one responsibility, not several.
  • Choose names from the business domain for classes, methods, attributes, parameters and variables.
  • Use domain objects as parameters instead of primitives like string or int.
  • Design the system such that domain classes can be unit tested individually.
  • Each (part of a) sub system should use one design metaphor, either 
    • domain centered - business logic placed in domain objects, preferable when changes to state are most important - or
    • service centered - business logic placed in stateless services that operate on data in DTO:s, preferable when the data flow is most important.

From this follows

  • Keep business logic and technical implementation details separate, i.e. do not combine these two types of complexity.
  • Adhere to Command Query Separation (CQS), i.e. each method should be either a command (having a side-effect and possibly returning a result) or a query (returning a result without any side-effect).
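The CQS principle above can be illustrated with a minimal sketch. The Account class and its rules are illustrative assumptions, not from the project:

```java
// Sketch of Command Query Separation: every method is either a command
// (has a side effect) or a query (returns a result, no side effect).
public class Account {

    private long balance;

    // Command: changes state, returns nothing.
    public void deposit(long amount) {
        balance += amount;
    }

    // Query: returns a result, leaves state untouched,
    // so it can be called any number of times without risk.
    public long balance() {
        return balance;
    }
}
```

A method like `long depositAndGetBalance(long amount)` would violate the principle by being both at once, making call sites harder to reason about.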

On a more detailed level

  • As a tool for communication use the following stereotypes for objects implementing the business domain logic:
    • Value Objects – immutable, created when needed, not persisted.
    • Entities – mutable (as per allowed by business rules), loaded/persisted through Repositories.
    • Repository – interface describing load/persist of Entities.
    • Factory – interface describing creation of Entities and Value Objects when external dependencies (to Repositories, Factories or Services) must be fulfilled.
    • Service – stateless class housing business logic that operates on data in DTO:s.
  • Dependencies are preferably handled through dependency injection to facilitate loose coupling between objects.
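The first three stereotypes can be sketched like this; all class names and fields are illustrative assumptions, chosen only to show the shape of each stereotype:

```java
// Value Object: immutable, created when needed, not persisted.
final class Money {
    private final long amountInCents;
    Money(long amountInCents) { this.amountInCents = amountInCents; }
    Money add(Money other) { return new Money(amountInCents + other.amountInCents); }
    long amountInCents() { return amountInCents; }
}

// Entity: has identity, mutable as allowed by business rules.
class Order {
    private final String id;
    private Money total = new Money(0);
    Order(String id) { this.id = id; }
    String id() { return id; }
    void addItem(Money price) { total = total.add(price); }
    Money total() { return total; }
}

// Repository: interface describing load/persist of Entities;
// the implementation hides the technical persistence details.
interface OrderRepository {
    Order findById(String id);
    void save(Order order);
}
```

Note how the business logic sees only the interface, keeping business complexity and technical complexity separate as stated in the general principles above.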

Application of principles
These principles define a solid approach to the design of new (sub)systems. When working with an existing code base, they can be used to continuously apply small improvements in accordance with “the boy scout rule”, and as a vision to compare with the current state in order to form a road map for larger refactoring efforts.

Friday, May 6, 2011

No man is an island

What’s the most common reason for failure in a software project, or perhaps in any project? You will get many different answers depending on who you ask. The designers might say “The requirements were not clear enough and the programmers didn't follow the architecture”. Testers on the other hand might say “The programmers never delivered what we were supposed to get when we were supposed to get it” and programmers tend to say “We got lots of ‘bug-reports’ that were really outside the scope”. The list can be long, but at the end of the day it all comes down to lack of communication.

Let’s face it, developing software is a complex and creative handiwork. That’s why we try to stay away from the over-planned, defined way of waterfall and turn to iteration-based approaches like RUP or Scrum. True iterations give us great feedback to base the next, (hopefully) better, version on. However, working with iterations is not enough. The saying “No man is an island” dates back to around the year 1600 and is as true as ever when it comes to software projects. For many years we have set up projects so that specialist testers form a test team, specialist designers form a design team, specialist programmers form implementation teams, and so on. This certainly brings the advantage of specialists learning from each other and becoming even more specialised. But it also introduces a great deal of hand-overs between the teams, each missing some vital information. I believe this way of organising the development project is one of the key reasons why projects fail.

In the agile community we talk about “Cross-functional teams“, meaning that people with different specializations work together to achieve a common goal. In software development that means that a programmer sits next to a designer and a tester, working together on transforming a requirement into a piece of well designed, well tested code ready for production. The agile practices are all about maximizing communication, e.g. by locating people in the same room with access to broadband communication aids such as whiteboards.

If you are currently in a project where the test team and the development team sit on different floors and the primary tool for communication is release notes, then you have a superb opportunity to increase the likelihood of project success just by re-locating, so that developers and testers form collaborative teams working together on design, test design, implementation and test execution.

Saturday, April 16, 2011

Automated functional tests - an organisational change


Introducing automated system (functional) testing to a project that used to run manual functional and regression tests at the end of every development cycle is not a technical issue – well, it is to some extent, but there are good solutions out there – it is a fundamental change to the work your testers perform.

In the manual approach a typical set-up of roles dictates that test designers create written test specifications in some tool or in ordinary documents. The specifications contain everything the tester needs to execute the test, including detailed step-by-step instructions. The tests are then manually executed over and over again, making sure to catch any regression in the system functionality.

When moving to automated testing the previously text-based test specifications are transformed into an executable format. Perhaps a behaviour-driven approach is used, with a short textual description that underneath is connected to code exercising the system, preferably through the GUI if one exists. Since execution of the tests is automated, the testers previously doing the manual test execution are no longer needed. Instead test implementers, with the ability to build the code that executes the tests, work alongside the test designers. Tests are executed by the build system, perhaps nightly or even more frequently, making the feedback loop faster.
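The core idea of connecting textual descriptions to code exercising the system can be sketched, framework-free, roughly like this. In a real project a tool such as JBehave or Cucumber plays this role; the step texts and the runner here are my own illustrative assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the behaviour-driven idea: short textual steps,
// readable by test designers, are mapped to code written by test
// implementers that exercises the system.
public class StepRunner {

    private final Map<String, Runnable> steps = new LinkedHashMap<>();

    // Test implementers bind a step text to executable code.
    public void define(String text, Runnable code) {
        steps.put(text, code);
    }

    // Test designers compose scenarios out of the defined steps.
    public void run(String... scenario) {
        for (String step : scenario) {
            Runnable code = steps.get(step);
            if (code == null) {
                throw new IllegalStateException("Undefined step: " + step);
            }
            code.run();
        }
    }
}
```

The division of labour in the text above maps directly onto this sketch: designers own the scenario strings, implementers own the `define` bodies, and the build system calls `run`.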

This is a big change - and yes, it is a good one - but it is important to understand the impact on the organization of your test/project team and the new skills required. It is also important to understand that the automated tests are now a fundamental artifact of your system, just like the written test specifications used to be, and they are to be owned by the test lead. Just because the tests are automated and use code to run, they are not to be handed over to the developers.

Friday, April 1, 2011

The main pillars of Scrum according to me

In this post I'll share with you what I think are the most valuable pieces of Scrum, pieces I'm never willing to compromise over.

Iterative/incremental
Scrum is truly iterative, mandating that each sprint produce a fully functional increment of the final product, including at least some business functionality. In addition, each sprint ends with a sprint review and a retrospective, making the feedback loop explicit.

Transparency 
In Scrum everything is visible. Any misconception in the organisation, design or infrastructure is pulled into the light. Calling it honest is an understatement!

Continuous improvement
Fast feedback loops, in larger and larger circles, and transparency are a solid ground for continuous improvement both to the product under development and to the process itself. Always strive to do things faster and with higher quality than the last time around.

Self-managing teams 
Scrum is based on the belief that a team of professionals is most effective if left alone to organise, solve its problems and work towards the sprint goal instead of having management telling them what to do and how to do it. Managers are to take the role of facilitators/sponsors making sure the team has everything it needs to produce as much value as possible.

Definition of done 
When is a task done? In order to truly complete a piece of work and deliver a high-quality increment it is absolutely vital to have a solid definition of done. Any work that should have been done to get the increment ready for production will not go away if left undone. Instead it will accumulate over the following iterations and add up to a large, undefined piece of work requiring a stabilisation phase of unknown length to meet the quality criteria.

These are my fundamental values, which I try to stick to. What do you think? Are there other or additional parts you consider indispensable?

Tuesday, March 15, 2011

The only certification I'm proud of

It seems the popularity and importance of certifications in the IT industry goes up and down. The only constant is the ongoing debate on whether certifications are valuable as a "receipt" of competence or a total scam. I think there is value, but it varies greatly depending on the type of certification.

After a bit over ten years in the industry I've gathered a set of certifications. However, in most cases their value is very little over following a course or reading a book. I'm certified in both OOAD by IBM/Rational and in Java programming by Sun/Oracle. I also have ample experience in both fields, but I got the certifications prior to that, by just reading the books and completing a test. For my Scrum Master certification I didn't even have to complete a test; following a two-day course was enough. This is the reason this particular certification is so heavily criticised, and I agree: the "certification" does not bring any additional value. But the two-day course was great for gaining fundamental knowledge of the values and core practices, and I have since practised implementing Scrum and other agile techniques and value systems successfully.

These types of certifications are, even though they might involve hard work to complete since they tend to test very specific things, entry level in most cases, and as such add no more value than successfully completing an exam in a small course at university. But there are other types of certifications that are based on real-world experience and proven capabilities evaluated by experienced software engineers. Within the Capgemini group we've had such certifications for different professions for a long time, and more recently they have been aligned with the publicly known Open Group ITSC. This is a certification based on what I've done in real-world assignments and I think it is also a valid quality guarantee for the knowledge and experience I can bring into a new project. This is the only certification I'm proud of!

Wednesday, March 9, 2011

Borders make communication break down. Don’t let them pile up!

Communication involves transferring information from a sender to a receiver. In human communication the sender and receiver are each in separate personal contexts based on their personal history, interests and the reason for which they engage in the act of communication. The data being transferred (i.e. the words spoken or written, pictures drawn, the body language used, etc) is formed from the information being processed and filtered through the sender's context, and then re-interpreted into information through the context of the receiver. Successful communication relies on the information not being too much obscured by this process.

This said, all human communication occurs between contexts, and the recipe for successful communication is to keep the differences between the contexts as small as possible. So what is it that makes these contexts differ? What is it that makes communication hard? I like to use the term "borders". In professional communication, e.g. within a project, a department or a company, many different types of borders can be identified. If you were to communicate without crossing any of those borders the communication wouldn't be very interesting or valuable, so that is not a success factor. But when laying out your organisation you should take care not to place different borders at the same place, since borders tend to reinforce each other and pile up to something higher than the sum of its parts, i.e. making important communication unnecessarily hard.

Let's have a look at some commonly found borders in professional life to make the idea clear.

Borders between professions
It is easy for a developer to talk to a fellow developer and for a tester to talk to another tester since they share the same professional context, same professional interests, similar experiences, etc. It is much harder for a developer to talk to a tester or a requirements analyst since that communication is crossing the border of professions.

Borders between area of responsibility
An area of responsibility could be the project you are currently working in or the sub-system (within a larger project) that you are assigned to, but it could also be the requirements for a project as opposed to the code base of the same project. People with different areas of responsibility tend to have different focuses and priorities and hence communicate out of different contexts.

Borders between geographic locations
Many software projects of today are distributed over several different locations. It might be different countries, different cities, different buildings, or as little as different floors or even different rooms. Nevertheless we, being humans, quickly develop a common context with the people sitting in the same room, as opposed to those at another location; hence communication over geographical borders is harder. Not being able to speak face to face with people at other locations, but being bound to telephones, e-mail, IM, etc, doesn't make communication any better.
Borders between chains of command
People separated into teams with different managers also tend to have a harder time communicating than those assigned to the same team. In some cases management policies might not allow any communication except through the chain of command, and that is obviously a problem. But even when direct communication is allowed, it suffers from the different contexts resulting from different priorities in the two teams.

Borders between cultures
The clash of cultures is pretty common in more or less any software project these days. It might be due to organisational mergers, consultants being brought in to work alongside employees or part of the project being outsourced to another company or another country. The more cultural differences the higher the communication border.

So, what to do with all this? It is something to think carefully about when setting up your project organisation. Consider a pretty common organisational pattern: You start by placing analysts, architects, developers, testers and CMs each in their own team with a separate manager reporting only to you, the top line project manager. Then you decide that development should be outsourced to another company in another country (typically with lower cost per developer) and you provide them with no other means of communication than e-mail, IM and phones. Said and done, after some time you start to wonder why there is so much frustration within the other teams, everyone saying it is impossible to work with the development team. Yeah, not so hard to understand, right? You have managed to put the development team in a context where all borders – profession, area of responsibility, location, chain of command and culture – are placed at the exact same place, i.e. right between the development team and the rest of the project. Given this you should be happy if any communication at all is taking place.

Of course, it isn't possible to have the whole project in the exact same context, since it wouldn't be very practical to have only developers or only testers making up the whole project. So, some cross-border communication is necessary, and that is just fine. Usually it works well if you take care to set up your organisation so that borders do not coincide. This is, e.g., one of the reasons why cross-functional teams are good. The team members have to cross the professional borders, and perhaps also cultural ones, to communicate, but they are all within the same area of responsibility and (hopefully) in the same location. If you need to have an off-site development team (perhaps with another cultural background) it is a very good idea to ease the communication by having at least one developer on-site handling the communication with the development team across the geographical border.

The bottom line advice: Watch out for context borders and don't let them pile up to destroy communication within your project!

Friday, February 25, 2011

Iterations based on feedback deepens your understanding

A few weeks ago I came across a "challenge" in the form of a competition from one of the JFokus exhibitors. It seemed fun and small enough for a spare-time project so I set out to solve it.

For those of you not reading Swedish I'll give you a translation of the problem statement:
"What should be done is to reduce a string as much as possible. You do this by repeatedly removing sub-strings. Not any sub-string, but only from a given set.
The string to reduce is:
VOCDIITEIOCRUDOIANTOCSLOIOCVESTAIOCVOLIOCENTSU
To reduce it you are allowed to remove occurrences of the following words:
TDD, DDD, DI, DO, OO, UI, ANT, CV, IOC, LOC, SU, VO

E.g. some repeated removals could be:
1) ...BIDDDOCDDD...  (remove DDD)
2) ...BIDDDOC...     (remove DDD again) 
3) ...BIOC...        (remove IOC) 
4) ...B...           and so on... 

What does the shortest string you can produce in this way look like?

To be a correct solution the program should be able to produce the shortest string given these in data and also for other start and reduction strings. In other words, it should be a general solution for this type of problem."


In an initial analysis session I managed to solve the given string by hand and thereby deduced an algorithm in three steps:
  • First split the start string based on "unreducible" chars, i.e. chars in the start string not part of any of the reduction strings.
  • Reduce each of the reducible sub-strings by recursive removal of reduction strings.
  • Concatenate the results into the shortest possible string.
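The first step of the algorithm above could look roughly like this. This is my own reconstruction for illustration, not the original competition code; the class and method names are assumptions. Unreducible characters are kept as one-character parts, since they must appear in the final result:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of splitting the start string at "unreducible" characters,
// i.e. characters that occur in no reduction string. Each reducible
// segment can then be reduced independently.
public class Splitter {

    public static List<String> split(String start, List<String> reductions) {
        String reducibleChars = String.join("", reductions);
        List<String> parts = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        for (char c : start.toCharArray()) {
            if (reducibleChars.indexOf(c) >= 0) {
                current.append(c);  // may be part of some reduction string
            } else {
                // An unreducible char closes the current segment and
                // stays in the result as a part of its own.
                if (current.length() > 0) {
                    parts.add(current.toString());
                    current.setLength(0);
                }
                parts.add(String.valueOf(c));
            }
        }
        if (current.length() > 0) {
            parts.add(current.toString());
        }
        return parts;
    }
}
```

For example, with start string "AXB" and the single reduction string "AB", the char X is unreducible, so the result is the three parts "A", "X", "B"; only the first and last parts need to go through the recursive reduction step.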
Due to a limited (and unknown) amount of time to spend on the problem I decided to split the work into two iterations each resulting in a delivered solution to maximize my chances of being able to post at least some solution to the competition.

Iteration One
In the first iteration I decided to work on the second step of my deduced algorithm, i.e. reduce the string by recursive removal of reduction strings, since it could stand as a solution on its own. Yes, this is definitely a brute-force, not so elegant, solution, but nevertheless it solves the initially stated problem. I made the recursion happen inside a loop over all reduction strings to explore the shortest rest of applying the reduction strings in different orders. From a performance point of view this pretty soon gets out of control when the length of the start string and the number of reduction strings grow, but for an iteration-one solution I think it was OK.
Before packaging up my application I worked it over to ensure easily readable and neatly factored code guided by unit tests.

Iteration Two
Soon I got feedback that my solution did not solve all variants of in-data used to evaluate submissions, i.e. it wasn't a good enough general solver. I supposed this was due to the severe performance problems kicking in for somewhat longer start strings and higher numbers of reduction strings.
I quickly wrote some additional end-to-end unit tests with tougher in-data and got to work on adding implementation for the first and third steps of my initial algorithm. The well tested and factored code made it easy to add the new functionality. In addition I added caching to ensure that no sub-string is ever evaluated more than once. Running my end-to-end tests through the profiler tool provided evidence for a strong performance boost; the brute-force solution of iteration one did almost 140 000 recursions over the initially stated in-data to find the shortest string, now I was down to 92 recursions!
Architecture-wise my solution wasn't fancy or innovative; I rather stayed with the OO best practices I normally use. The solution was implemented in Java with a simple interface describing the API for the logic.

public interface StringReducer {
    String reduce(String startString);
}

I now had three implementations of this interface (UnReducibleCharBasedStringReducer, RecursionBasedStringReducer and CachingStringReducer) that could be linked together to implement the complete algorithm.
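A rough sketch of the brute-force recursion with caching, in the spirit of RecursionBasedStringReducer and CachingStringReducer, could look like this. This is my reconstruction for illustration, not the original code, and it returns only one shortest rest, whereas the final solution wraps all rests of equal length:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Brute-force recursive reducer: try removing every occurrence of every
// reduction string and keep the shortest rest. The cache ensures no
// sub-string is ever evaluated more than once.
public class SimpleRecursiveReducer {

    private final List<String> reductions;
    private final Map<String, String> cache = new HashMap<>();

    public SimpleRecursiveReducer(List<String> reductions) {
        this.reductions = reductions;
    }

    public String reduce(String s) {
        String cached = cache.get(s);
        if (cached != null) return cached;
        String best = s;  // the string itself if nothing can be removed
        for (String r : reductions) {
            // Loop over every occurrence, since removal order matters.
            for (int i = s.indexOf(r); i >= 0; i = s.indexOf(r, i + 1)) {
                String candidate = reduce(s.substring(0, i) + s.substring(i + r.length()));
                if (candidate.length() < best.length()) best = candidate;
            }
        }
        cache.put(s, best);
        return best;
    }
}
```

On the "ABC" example with reduction strings "AB" and "BC" this finds a rest of length one, and on "IOIOI" with "IOI" it explores both occurrences, which is exactly the case described in Iteration Three below.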

Iteration Three
Again I got feedback that my solution did not solve all variants of in-data used to evaluate submissions. At first I was a bit surprised since I really thought the solution was complete, but the feedback made me think harder (and open up another iteration) and soon I realized that my algorithm was lacking an important piece: in the general case, there is no guarantee that there will be a single shortest string. It might very well be that the shortest result includes two or more strings of equal length. Consider for example the start string "ABC" and the reduction strings "AB" and "BC". Reduction using "AB" gives the rest "C" while reduction using "BC" gives the rest "A", i.e. two different strings of equal length depending on the order in which the reduction strings are applied. So far my solution had arbitrarily returned the latter of these results, omitting the first. Now I changed the StringReducer interface to include the notion of a Rest wrapping one or more strings of equal length. This required a pretty significant re-write of the implementations, but the tests already in place guided the work and secured a high-quality result.

In the process of re-write I also found another situation that could yield multiple results. Consider the start string "IOIOI" and the reduction string "IOI". This set-up could return two alternative results, "IO" and "OI", depending on which of the two occurrences of the reduction string that is removed first. Adding this to the RecursionBasedStringReducer was as easy as adding a new test and adding another loop in the recursive function.
With these modifications I submitted my solution for the third time, a couple of days before the deadline of the competition.

Iteration Four
This time the feedback said that my submission was accepted and I waited for the jury to make their decision on the perceived best solution. I didn't win - but I followed the heated discussions taking off right after the jury had announced their decision. In these discussions several suggestions for hard-to-handle in-data were put forward. For each I added new tests to my solution and found that it could handle most of them. The exception was a rather long start string (not possible to split) with rather many associated reduction strings that could be used to reduce the entire start string. I'm pretty confident my solution would eventually have come to the right conclusion, but the large number of reduction strings made the number of needed recursions huge. The solution, of course, is to pay attention to the first reduction path returning an empty string; after that there is no need to continue searching, since no string could possibly be shorter. Again, adding a test and injecting a simple condition in the right place in RecursionBasedStringReducer was enough to solve the problem.
In doing so I also found a case of "false reduction strings", i.e. reduction strings containing characters not found in the start string. These strings will never be able to reduce the start string and hence can be removed right away instead of causing extra reduction paths to be evaluated. This finding caused another small update to the implementation and that's the current state of my StringReducer.

Conclusion
My take-away from solving this challenge is a real-life demonstration that working iteratively based on feedback helps you arrive at a deeper understanding of the problem domain and a step-wise better solution, something we should always strive for in any project regardless of scope and domain.
In addition it was great fun!

Tuesday, February 15, 2011

We are all Gardeners and Authors

In the previous book on Software Engineering that I read the work of designing and developing software systems was characterised as gardening. The authors talked about the ever ongoing effort of sustaining and improving the internal quality of the code in terms of nurturing, guiding and pruning. I kind of liked that picture of the system under development as something fragile and tender that we should lovingly care about to make sure it will grow into something really valuable for the business we set out to serve.

In terms of software development this gardener's work translates into finding the proper input of both domain knowledge and reusable pieces of existing code (nurturing), applying a sound architecture to shape the system's form (guiding) and constantly refactoring to improve existing code and get rid of what no longer serves its purpose (pruning). At our disposal we have tools like Domain Driven Design (DDD), Test Driven Development (TDD), different levels of software patterns and a powerful IDE. Growing a system is also a great metaphor in these times of agility. Doing agile development right, we will continuously add small pieces of new functionality to a working system, always being careful not to wreck the tender "plant" we are growing. In other words - we should all be Gardeners.

We should also be Authors. In the current book I'm reading, "Clean Code" by Robert C Martin, "Uncle Bob" argues that good source code should be just like prose. It should be easy and enjoyable to read as well as informative and clear about its intentions. After all, the code we write will be read by ourselves and others many, many, many times in the future. Writing code that can be correctly and efficiently executed by a computer is not that hard. In most cases the compiler and run-time optimizer will take care of that, and by adding some tests to the mix we should not get that many surprises. The really hard part is to write code that can be correctly and efficiently read, understood and changed by humans. So, next time you sit down to do some programming, try to think of yourself not as someone talking to the computer but as an Author talking to your fellow and future developers, with the source code as your means of communication.

Tuesday, January 25, 2011

True iterations deliver value

Everyone wants to be iterative and incremental these days. Iterations are indeed nothing new; they have been around inside and outside of RUP and other methodologies for a long time, and they show up again in Scrum (called Sprints) and XP. Many projects claim to be iterative, but are they really?

In my view you are truly iterative only if, for each iteration, you go through the whole cycle of analysis, design, implementation, integration, test and deployment to really produce a high-quality increment of the final system. Many projects I've seen claim to be iterative but tend to focus only on analysis and design in the first iterations, implicitly converting the iterations into phases of a good old-fashioned waterfall.

The only way to reap the true value of iterative development is to run through the whole cycle and collect feedback from everyone involved. Then you know how to do even better in your next iteration. As Software Engineers we should really push for true iterations in our projects, since that is the only way we are sure to be involved and able to give and receive early feedback.

Thursday, January 13, 2011

Software Engineering Essential Reading 1 - Growing Object-Oriented Software, Guided by Tests

Under the title "Software Engineering Essential Reading" I plan to share short reviews of books and articles I find most useful and an essential read for every software engineer.

First up is an (in my opinion) outstanding book on Test Driven Development (TDD) written by Steve Freeman and Nat Pryce. The title is "Growing Object-Oriented Software, Guided by Tests" and, as the title suggests, this book is more about how to successfully build changeable and maintainable software using TDD than a technical book on a specific testing framework. Sure, throughout the book they use JUnit 4.x and jMock2, but the book is not a walk-through of framework features; rather it uses lots of example code to show how they "grow" (just like a gardener does with flowers) software. These frameworks just happen to be their favorite tools.

The book roughly consists of three parts:

  1. First the authors go over their motivation for TDD and the principles and techniques they apply to both testing and OO design and implementation. Part of this is really heuristics to follow for a clean, readable and maintainable design of production code, which is also discussed at length in other books, but I like the fact that it is included here because it presents the whole picture of responsible software development rather than just explaining TDD ripped out of its context.
  2. In the second part they go through a long (about 150 pages) example of "growing" an application using the principles and techniques from part 1. The example is very well done and easy to follow. In the beginning I felt it was very valuable, but towards the end I thought it was running a bit long. But just as I was about to skip some pages the book went on into part three.
  3. In the last part they wrap up and reinforce their key ideas by going over their principles in more depth, backed up with the examples from part two. In many passages this part is so dense and right-to-the-point that you have to pause a minute or so after each paragraph just to think over the truths that were just brought to you.
All in all, this is a truly great book and I would say it goes straight to the top of all SE-related books I've ever read. I strongly recommend anyone engaged in software development to get a copy.

Friday, January 7, 2011

Podcasts I'm following

One very nice way to keep up with development in the SE-space is listening to podcasts while doing something boring like commuting, doing the dishes, etc. Here I list my personal favorites.

  • The JavaPosse - Four very experienced Java developers talking about news in the JVM-space (Java, Scala, Groovy, Android, etc). Episodes with interviews and recorded sessions from their yearly open-space conference (The Java Posse Round-up) are also sprinkled into the feed. They've been running for over five years now and continue to keep it up.
  • The JavaSpotlight - The official (I think) Java podcast from Oracle. Half an hour weekly show with all Oracle-related Java news.
  • The NetBeans podcast - The official podcast from the NetBeans team at Oracle. News on my favorite IDE.
  • The Basement Coders - A group of experienced developers sharing their thoughts on lots of Java-related subjects and conducting interviews with interesting people.
There is no excuse for not keeping up with developments; just install your favorite podcast receiver and start using your dull moments!

What are you listening to? Make a comment with your personal favorite podcasts.