Saturday, July 13, 2013

What are the layers for?

As I’ve described in earlier posts I like to use a Hexagonal (or Onion) architecture when building enterprise back-end systems. The reason is that it places the domain model in the center and surrounds it with whatever integration code is needed to make it work in the technical environment where the system is deployed. It also keeps the domain model totally independent of anything outside itself, which makes it super easy to thoroughly test-drive without any execution environment.
In a recent discussion (as in so many others) we pretty soon got down to the question of why we need all the layers. The simple answer is that every layer has its own distinct reason to exist, so I figured the best way to motivate them is to describe those reasons on a per-layer basis. So here goes.


Domain
This is the center, the heart, of the system. The sub-title of Eric Evans’ wonderful book is “Tackling complexity in the heart of software” and that is what Domain Driven Design is all about. The book is full of good advice on how to do that, but from a layering perspective I think the number one technique is to separate the inherent complexity of the domain (i.e. the complexity of the business) from the accidental complexity that we introduce by using computers to solve the problem of the business (i.e. the complexity of using a database, messaging service or deploying as a web application). In short: do not mix code implementing business logic with code handling technical matters! Hence, the domain layer is where all the business logic, and nothing but the business logic, goes. Any service needed by the domain, e.g. a repository for storing entities, is defined as an interface, in business terms.
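As a minimal sketch, such a domain-defined interface might look like this (Order, OrderNumber and the method names are hypothetical examples, not taken from any particular system):

// Domain layer: a service needed by the domain, defined as an
// interface in business terms. No technical concerns appear here.
public interface OrderRepository {

    Order orderWithNumber(OrderNumber orderNumber);

    void store(Order order);
}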


Infrastructure (part of the integration layer)
Surrounding the domain layer we have the code responsible for integrating with the “outside world”, the “anti-corruption layer” in Evans’ terms. I commonly call this layer “integration”, but it divides into different parts. One is “infrastructure” or “persistence” and another is “application”.
“Infrastructure” or “persistence” is commonly about integrating with a database or messaging infrastructure, but it could also be about integration with other systems. In short, this is the layer where any services defined as interfaces in the “domain” get their technical implementation. In this layer there should be nothing but technical concerns, and we must be careful not to let any business logic leak into this layer.
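To continue the sketch from above, a technical implementation of the hypothetical OrderRepository could look something like this (plain JDBC is just one assumption; JPA, messaging or a remote service call would fill the same slot, and all the names are still invented):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

// Integration layer: nothing but technical concerns, no business rules.
public class JdbcOrderRepository implements OrderRepository {

    private final DataSource dataSource;

    public JdbcOrderRepository(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public Order orderWithNumber(OrderNumber orderNumber) {
        try {
            Connection connection = dataSource.getConnection();
            try {
                PreparedStatement statement = connection.prepareStatement(
                        "select status from purchase_order where order_number = ?");
                statement.setString(1, orderNumber.asText());
                ResultSet result = statement.executeQuery();
                if (!result.next()) {
                    throw new OrderNotFoundException(orderNumber);
                }
                // Purely technical mapping from a row to the domain object.
                return new Order(orderNumber, result.getString("status"));
            } finally {
                connection.close();
            }
        } catch (SQLException e) {
            throw new RuntimeException("Could not read order " + orderNumber, e);
        }
    }

    public void store(Order order) {
        // The corresponding insert/update statements go here.
    }
}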


Application (part of the integration layer)
This is the part of the integration layer where requests to the domain model come in from the “outside world”. Just like integrating with a database or another system is a matter of converting between different models (the purpose of the “anti-corruption layer” described by Evans), this part of the integration layer is also solely about conversion between the model of the incoming request and the domain model. This conversion can be split into three parts:
- Data conversion: Incoming data are converted to their counterparts in the domain model and later back to the data format of the return value.
- Functional conversion: The incoming service call is “converted” into one or several calls to the domain model. This often includes finding an entity from a repository, calling a business method on it and storing it back in the repository. But it could also translate into a single call to a domain service.
- Error handling: Internally the domain model most likely uses unchecked exceptions with specific meaning to the business. In the “outside world” other error handling mechanisms might be in use that require conversion.
Other than that, the application services also tend to be responsible for starting and committing transactions.
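Continuing with the hypothetical order example, an application service doing all three conversions might be sketched like this (OrderDto, ServiceException and OrderNotApprovableException are assumptions for illustration; transaction demarcation would typically be declarative, depending on the platform):

// Application layer: conversion only, no business logic of its own.
public class OrderApplicationService {

    private final OrderRepository orderRepository; // the domain interface

    public OrderApplicationService(OrderRepository orderRepository) {
        this.orderRepository = orderRepository;
    }

    // A transaction is started and committed around this method, e.g.
    // container-managed or through declarative configuration.
    public OrderDto approve(String orderNumber) {
        try {
            // Data conversion: incoming raw data to a domain value object.
            Order order = orderRepository.orderWithNumber(new OrderNumber(orderNumber));

            // Functional conversion: find, call a business method, store.
            order.approve();
            orderRepository.store(order);

            // Data conversion: the domain object back to the outside format.
            return OrderDto.from(order);
        } catch (OrderNotApprovableException e) {
            // Error handling: a business-specific unchecked exception is
            // translated into the mechanism used by the outside world.
            throw new ServiceException(e.getMessage());
        }
    }
}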


Wrap-up
In this post I’ve tried to show the very different responsibilities of each layer (application, domain and infrastructure) in my typical back-end systems architecture. This is an approach I’ve been using successfully for several years in a non-trivial system, and although it has many properties similar to Hexagonal architecture, it doesn’t adhere to it in every detail. The reason for that is not in any way scientific; it is purely that I didn’t know about Hexagonal architecture at the time when I set out to design the architecture of the system I was working on, aiming to support the use of DDD (first and foremost the separation of business and technical concerns) in the best possible way. Therefore the basis of the architecture described here is the concept of a Domain Model in a Bounded Context surrounded by an Anti-Corruption Layer, as described by Eric Evans in his book.

Wednesday, September 26, 2012

DDD on top of Services, DDD inside Services - A Domain Driven approach to SOA

This post is about Domain Driven Design (DDD) and Service Oriented Architecture (SOA) and how the former can be used to build services for the latter. But it starts off with a few observations about the current state of affairs.

The "DDD needs a database" assumption
I've found it to be a pretty common understanding that software systems using DDD building blocks have to be backed by a database and, even more specifically, have repository implementations using an ORM framework. This leads to the assumption that domain entities, once added to the repository, are attached to an EntityManager that is responsible for "auto-magically" saving changed data as transactions are completed. I find that these assumptions put too many constraints on a domain driven system architecture without any need. In fact, I can't find anything in the DDD approach, as put forward by Eric Evans, that supports this understanding. Instead I see the domain model as a piece of code completely free of any explicit or implicit dependencies on its surroundings. For a couple of years I've been working with this approach, and in this post I will describe the architecture of such a system, where repository implementations rely on services published by other systems as well as storing part of the data in a dedicated database.

SOA as spaghetti on top of CRUD
I've also found Service Oriented Architectures (SOA) to be implemented mostly as CRUD operations for fine-grained data objects, or as services with business logic, even really complex logic, firmly placed in Transaction Scripts (TS). TS might be good enough for simple things, but the code quite quickly turns into spaghetti as complexity rises. Complex business logic is where DDD really shines, but given the assumptions discussed above, many people don't see it as an option when there is no underlying database, just a set of CRUD-like services.
And there we have another problem with many service implementations. When just using simple DTOs as parameters and/or return values and presenting the client with a CRUD-like service API, what support are we giving the client-side developer? A bunch of getters and setters that could be set in any combination! How is one to know what (from a business point-of-view) makes up a valid object? And how to know which modifications are valid? Such APIs force the client-side developer to have intimate knowledge of the server-side implementation, and changes to the API won't show up as compiler errors. When APIs that I depend on change, I like the compiler to be able to notify me.
I think we can do much better and offer the client-side developer far more support by extending the service API with some DDD building blocks like Entities and Value Objects. In this post I will explain how we did that, in effect implementing a published domain.

Separation of concerns - The independent domain model
One of the principles of DDD that I think is most important to apply is "separation of concerns". It is also part of Robert C Martin's SOLID principles under the name "Single responsibility principle". In short, every piece of code should deal with one problem only and have only one reason to change. It is also an important part of writing clean code, since it increases clarity if a piece of code only does one thing. In the DDD perspective I apply this principle by separating the code that models the business domain in order to solve business problems from the technical code that glues the application together or handles communication with the outside world, such as databases or services exposed by other systems. This leads to a system with a traditional layered architecture with a slight twist: the domain model is kept in the center with no outgoing dependencies whatsoever. The domain model is concerned only with solving business problems, while the surrounding integration layer (or "anti-corruption layer" in Evans' terms) is concerned only with technical issues. The domain model is built using the DDD building blocks (entities, value objects, repositories, services and factories), with repositories (and sometimes also services and factories) represented as interfaces used by the domain but implemented in the integration layer. Thereby the dependency points from the integration layer to the domain, and that's the twist! This is perhaps an unusual architectural style, but it is not new. Similar models have previously been described as “Hexagonal Architecture” and “Onion Architecture”, among others. Robert C Martin has a nice summary of the history of this architectural style on his blog.
This architectural style brings the following benefits:
- Easy-to-read business logic, since it is not mixed with code for interaction with databases, messaging services or other technical concerns. This makes for fewer mistakes as the software evolves.
- Easy to deploy. The domain code can be deployed in different runtime environments without changes, since no dependencies exist. At one point we made great use of this as we were moving to a new deployment platform while at the same time developing new functionality for the business.
- Easy-to-test business logic. All business logic can be tested automatically without any need for a runtime container. I don't really have to argue why that is good...

Repository on top of services
Since the domain is only concerned with business logic, and the business doesn't care how entities are stored, only that they can be stored and at which step in the business process that happens, all repositories in the domain are represented as interfaces. It is then up to the integration layer to provide implementations for those repositories.
Regardless of whether entities are to be stored in a database, in files, by calling a remote service (in my case it was services exposed via a customer-specific API on top of stateless EJBs, i.e. basically RMI, but it could be web services or any other protocol as well), or in some combination, a repository needs access to the internal state of the entity in order to get (for storing) and set (for re-constructing) values. Typically most of that state isn't public in the domain model, so the repository implementation needs to gain access in some other way. Depending on security settings, reflection might be an alternative. In most cases it is not. Instead we made the members of the entity protected and created a specific sub-class of the entity in the integration layer, which we used both for reading state that wasn't public and for re-constructing entities on read requests.
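A minimal sketch of that sub-class trick, with invented names (Customer, CustomerId and the rest are illustrations, not the actual system):

// Domain layer: an entity whose state is not public.
public class Customer {

    protected CustomerId id;
    protected CustomerName name;
    protected CreditLimit creditLimit;

    protected Customer() {
        // For re-construction by the integration layer only.
    }

    public Customer(CustomerId id, CustomerName name, CreditLimit creditLimit) {
        this.id = id;
        this.name = name;
        this.creditLimit = creditLimit;
    }
}

// Integration layer: a sub-class used for reading internal state when
// storing, and for setting it when re-constructing entities.
public class StorableCustomer extends Customer {

    public StorableCustomer(CustomerId id, CustomerName name, CreditLimit creditLimit) {
        this.id = id;
        this.name = name;
        this.creditLimit = creditLimit;
    }

    public CustomerId storedId() { return id; }
    public CustomerName storedName() { return name; }
    public CreditLimit storedCreditLimit() { return creditLimit; }
}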
With the problem of gaining access to internal state solved, it is pretty easy to write a repository implementation that reads and writes most of the data over one or several remote service calls and keeps the rest in a few tables in a dedicated database. This split storage model often results from the fact that existing external services might not fully support the needs of the domain model. Another reason might be performance. In some cases we had to cache carefully selected pieces of data in our own database to ensure timely retrieval. The beauty of this approach is that we can do all those tricks in the repository implementation without having any part of them leak into the domain model code; the separation of concerns is total.
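Sketched with the hypothetical names from above (CustomerRepository, the remote client and the DAO are likewise assumptions), such a split-storage repository might look like:

// Integration layer: most data travels over the wire, the rest lives
// in a few dedicated database tables. No business rules in here.
public class ServiceBackedCustomerRepository implements CustomerRepository {

    private final RemoteCustomerService remoteService; // the external system
    private final CreditLimitDao creditLimitDao;       // our own tables

    public ServiceBackedCustomerRepository(RemoteCustomerService remoteService,
                                           CreditLimitDao creditLimitDao) {
        this.remoteService = remoteService;
        this.creditLimitDao = creditLimitDao;
    }

    public Customer customerWithId(CustomerId id) {
        // Most of the data comes from the external service...
        CustomerName name = remoteService.fetchCustomerName(id);
        // ...while some carefully selected pieces are kept locally.
        CreditLimit creditLimit = creditLimitDao.creditLimitFor(id);
        // Re-construction through the integration-layer sub-class.
        return new StorableCustomer(id, name, creditLimit);
    }

    public void store(Customer customer) {
        StorableCustomer storable = (StorableCustomer) customer;
        remoteService.updateCustomerName(storable.storedId(), storable.storedName());
        creditLimitDao.save(storable.storedId(), storable.storedCreditLimit());
    }
}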

Published domain and services with business meaning
As mentioned above, CRUD-oriented services that let a client store and retrieve DTOs with getters and setters for every attribute push a great burden onto the developer of the client code. If we use DDD to implement the service internally, we can do better by offering that domain knowledge to the client, packaged as a published domain and services that carry business meaning. A CRUD service will never have a place outside the integration layer of the client, while the signature of a service well crafted in business terms might be a good candidate for a service API used directly in the domain model of the client. Of course, it takes the shape of an interface, with a small implementation in the integration layer to do the technical heavy lifting of making a remote call. But the point is, the service signature can be used verbatim, and the integration layer can be kept thin since the client domain doesn’t have to re-define what the service means in business terms; hard-earned domain knowledge is reused.
The same goes for the business objects. If we publish those parts of the domain that can be used outside our service implementation, we allow the client-side developer to benefit directly from the domain knowledge we have gathered in building our service. But what parts can be published? Well, if you think about it, the most useful part of a good API is the limitations it imposes, i.e. the help you get to avoid doing stupid things. By publishing a model of domain objects as plain Java objects, with constructors and accessors ("getters", but in business terms and not necessarily following the Java Beans convention), we help the client-side developer avoid constructing invalid objects and accidentally changing state that, from a business perspective, should be immutable. With only this much, the client-side developer gets much better support than with only the raw data format offered by our stateless services. In addition it might also be appropriate to offer a thin layer on top of the service calls that exposes the services in terms of these domain objects and takes care of transforming them to/from the raw format used to go over the wire.
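As a small sketch of what such a published domain object could look like (AccountNumber and its ten-digit rule are invented for illustration):

// Published domain: an immutable value object in business terms.
public final class AccountNumber {

    private final String value;

    public AccountNumber(String value) {
        // The constructor refuses invalid instances, so the client-side
        // developer simply cannot construct an account number that the
        // business would reject.
        if (value == null || !value.matches("\\d{10}")) {
            throw new IllegalArgumentException("Not a valid account number: " + value);
        }
        this.value = value;
    }

    // Accessor in business terms; there are no setters, the object is
    // immutable once constructed.
    public String asText() {
        return value;
    }
}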
Having a published domain gives the client-side developer the choice to either just transform it into a model more suitable for the client context or, if the contexts are closely related, take a conformist approach and extend the domain objects with additional functionality. I've done both in different contexts, and either is so much better than having to experiment with DTOs to find out which combinations of attribute values are valid.

Conclusion
DDD is suitable for implementing domains on top of external services. It is also suitable for the implementation of such services, and if we carefully select parts of the domain model to publish we offer great help to the client-side developer.

Saturday, September 1, 2012

When is it appropriate to design a service?


In a system architecture using Domain Driven Design (DDD) you typically find a few building blocks (stereotypes) – Entities, Value Objects, Repositories and Domain Services – where the first two are stateful objects and the rest are stateless services, implemented either as infrastructure services outside the domain (Repositories) or inside the domain containing only business logic (Domain Services). A common question is “When is it appropriate to design a service instead of placing the business logic in an entity or value object?”

For developers more used to building procedural designs, rather than object-oriented ones, it seems more natural to place logic in stateless services and have them operate on objects that are no more than data containers. This is what is commonly referred to as an “anemic domain model”. It is considered an anti-pattern in the DDD community since it decouples data from behavior and thereby produces a much less expressive and knowledge-dense domain model.

I'm not a fan of stateless services in the domain model. Instead I try to favor bringing business logic into the Entity or Value Object that holds the information needed – in OO terms that object is called the "information expert". In most cases it isn’t that hard, especially if functionality is decomposed into short methods, each allocated to the object representing the concept at the heart of the functionality.
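A small sketch of the idea, with invented names (Timesheet, TimeEntry and the exceptions are illustrations):

import java.util.ArrayList;
import java.util.List;

// The business rules live in the entity that holds the information
// they need: the "information expert".
public class Timesheet {

    private final List<TimeEntry> entries = new ArrayList<TimeEntry>();
    private boolean submitted;

    public void add(TimeEntry entry) {
        if (submitted) {
            throw new TimesheetAlreadySubmittedException();
        }
        entries.add(entry);
    }

    public void submit() {
        if (entries.isEmpty()) {
            throw new EmptyTimesheetException();
        }
        submitted = true;
    }

    public boolean isSubmitted() {
        return submitted;
    }
}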

A specific type of functionality that might be trickier to handle is entity creation. Who is to be responsible? For sure, it can’t be the entity itself. In general my experience is that there will be some sort of hierarchy between concepts. E.g. a SalaryPeriod might be connected to an existing RegistrationPeriod, which also contains submitted Timesheets. Then it is a good fit to have the RegistrationPeriod handle creation of the SalaryPeriod. So in general it is almost always possible to find a good place for rules regarding creation of an entity in a parent concept. The same goes for functionality that has to operate over several entities of a given type.
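Continuing the sketch from above, the RegistrationPeriod could handle creation of the SalaryPeriod like this (the rule that all timesheets must be submitted first is my invented illustration):

import java.util.ArrayList;
import java.util.List;

public class RegistrationPeriod {

    private final List<Timesheet> timesheets = new ArrayList<Timesheet>();

    // The creation rules for SalaryPeriod live in its parent concept.
    public SalaryPeriod openSalaryPeriod() {
        for (Timesheet timesheet : timesheets) {
            if (!timesheet.isSubmitted()) {
                throw new RegistrationPeriodNotCompletedException();
            }
        }
        return new SalaryPeriod(this);
    }
}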

However, there might be cases where no suitable parent concept exists in the domain. That is one of the cases where I find designing a domain service appropriate. And there are others. Here is a small extract from a previous blog post of mine where I briefly describe another situation where I think domain services are a good choice:
 "In general I think you could talk about two types of systems, or parts of systems; those mostly concerned with changes in object state and those mostly concerned with processing data streaming through the system. In the first case entities, aggregates and repositories are a natural fit, in the second I think transaction scripts (in DDD context called domain services, since they do only concern domain logic, no infrastructure code [..]) are a nice fit. When the most important feature is to crunch some data, perhaps modify it and then route it further to some recipient (like another system or some persistent store) I think it is the "processing pipeline", i.e. the stateless service code, which should be emphasized. So in those cases the internals of the data isn't very interesting and might be better left in some simple DTO format."

To make the list of appropriate service designs complete, I’ll end by adding a few lines on external services. These services get injected into the domain. In the domain I would have only an interface describing the service in terms of the domain. This is the way to integrate with surrounding infrastructure or other systems. It is the same pattern as with Repositories; in fact, a Repository is just a specialized service.

Another example of an external service is when some part of the business logic, e.g. a calculation, is broken out into a separate service, implemented using another programming language or paradigm. From a domain point-of-view it is as if we get that calculation service from another system instead of doing it ourselves. The reason might be performance, or it might be that the logic already exists and we want to continue using it instead of re-implementing it. However, each time this happens the maintenance burden increases a bit, so it should be done with careful consideration.

To sum up, favor implementing business logic in the domain objects, not in services. It leads to a more elaborate and therefore more useful domain model, which is easier to keep consistent than logic spread over disparate services. I’m convinced that in the long run this approach makes maintenance of the system easier.

Wednesday, August 8, 2012

How to handle reporting with Domain Driven Design?


A pretty common question regarding Domain Driven Design (DDD) is how to handle reporting functionality in a system using a DDD-approach. As in most cases, the answer is "it depends".

If what we are aiming for is easy-to-change reporting, I would transfer data into a BI system and run reports from there. There is absolutely nothing to gain in designing our own BI tool. There are plenty of them on the market, and in combination with some common data warehouse design patterns they do a good job of extracting and storing data and providing both prepared and ad-hoc reporting. If we do not need a fancy reporting interface, a separate relational database schema would do for running some SQL queries. The key is to keep reporting separate from the business system. This is a good rule whether you are using DDD or not.

If it is more of "a small summary" or some accumulated totals that should be shown inside the business application, I'd try to keep it inside the domain model. Most such values, just like any attribute, fit nicely inside an entity. E.g. if we need to calculate an OrderSummaryByStock, it might be placed as Stock.runningOrderSummary(). This makes it readable right off the entity it belongs to. If running into performance problems, I'd look into keeping those numbers up-to-date as part of the command or update transaction, storing the accumulated numbers in the database as part of the entity.
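A sketch of what that could look like (OrderSummary and the methods on Order are assumed names):

import java.util.ArrayList;
import java.util.List;

public class Stock {

    private final List<Order> runningOrders = new ArrayList<Order>();

    // The summary reads right off the entity it belongs to.
    public OrderSummary runningOrderSummary() {
        OrderSummary summary = OrderSummary.empty();
        for (Order order : runningOrders) {
            summary = summary.add(order.quantity(), order.amount());
        }
        return summary;
    }
}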

If you see overall problems with read performance due to multiple reads per write, I would consider a CQRS approach (with separate read views kept up-to-date by events exported from the domain model as it gets updated), at least for the problematic views.

Saturday, January 28, 2012

Domain Driven Design and batch processing

In DDD the design of a system is very object-centric and therefore focuses on individual objects (or aggregates of objects) that interact by sending messages (commands, queries and events). This is very unlike traditional batch processing, where one or several functions are applied iteratively to a batch of input data. Between the two there is a significant mismatch, but nevertheless, once in a while we need to offer a batch-oriented interface to our domain logic or need to call a service that offers a batch interface.

Implementing a batch interface
This is the easiest part. We just need to create a thin script that manages the iteration over the batch, makes a call to the proper application service (tasked with coordinating calls on the domain model) for each entry, and collects any response data returned. All business logic needed to perform the batch operation is held inside the domain package, as usual. From a domain model point-of-view, the batch processing script is just another client, calling the same application services as any online client would do to perform the same task. It is just that this one makes many requests over a short period of time.
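A sketch of such a script, reusing the hypothetical OrderApplicationService from the sketch earlier in this blog (BatchEntryResult is likewise an invented name):

import java.util.ArrayList;
import java.util.List;

// A thin batch script: iterate, delegate, collect. All business logic
// stays behind the application service.
public class ApproveOrdersBatchJob {

    private final OrderApplicationService orderService;

    public ApproveOrdersBatchJob(OrderApplicationService orderService) {
        this.orderService = orderService;
    }

    public List<BatchEntryResult> run(List<String> orderNumbers) {
        List<BatchEntryResult> results = new ArrayList<BatchEntryResult>();
        for (String orderNumber : orderNumbers) {
            try {
                results.add(BatchEntryResult.success(orderService.approve(orderNumber)));
            } catch (ServiceException e) {
                results.add(BatchEntryResult.failure(orderNumber, e));
            }
        }
        return results;
    }
}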

Calling a batch interface
In general, a batch call is just an asynchronous call. Yes, you combine several requests into one, but a call with just one request in the batch would still be a valid one. That is, as long as it doesn't matter which requests are made together, and there is no relevance in the responses coming back together or aggregated in some way. Otherwise it isn't truly a batch; it is just a single request with many input parameters. In the following I will discuss a possible solution for when the service is a true batch, i.e. serving many unrelated requests in one call, asynchronously.
Let's consider a case where we have a type of domain object with a method that is to be implemented with a call to an external service, a service that happens to have a batch-processing interface. From a domain perspective the nature of this method is asynchronous. We just don't know how long we will have to wait for the result. But the fact that the call is made in batches, and not one by one, is an implementation detail to be handled by the infrastructure layer.
The domain should be fully decoupled from the batch handling, which should be handled by infrastructure code. In the domain we define a service interface that takes the request for one domain object and returns nothing. This service could be called by an application service, by the domain object, or by any other object or service in the domain. Then we define an event handler that gets notified when the result of the request arrives. The event handler is responsible for taking appropriate action on the domain object depending on the result.
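The domain side might be sketched like this (a credit check is my invented example; CreditCheckService, CustomerId and the business methods are all assumptions):

// Domain layer: the request side takes one domain object's request
// and returns nothing; the asynchrony is explicit in the signature.
public interface CreditCheckService {

    void requestCreditCheckFor(CustomerId customerId);
}

// Domain layer: the event handler notified when a result arrives,
// responsible for taking the appropriate action on the domain object.
public class CreditCheckResultHandler {

    private final CustomerRepository customerRepository;

    public CreditCheckResultHandler(CustomerRepository customerRepository) {
        this.customerRepository = customerRepository;
    }

    public void creditCheckCompleted(CustomerId customerId, boolean approved) {
        Customer customer = customerRepository.customerWithId(customerId);
        if (approved) {
            customer.grantCredit();
        } else {
            customer.denyCredit();
        }
        customerRepository.store(customer);
    }
}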
In the infrastructure layer we implement the service with a message queue and, on the other side of that queue, some code that combines individual requests into suitably sized batch calls. The frequency and size of those batches might be tuned for performance and response times, e.g. one batch for every X requests, but at least one batch every Y minutes provided at least one request has been made.
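The request-combining side could be sketched like this (RemoteCreditCheckClient is an assumed name, and in a real setup a timer would call flush() every Y minutes):

import java.util.ArrayList;
import java.util.List;

// Infrastructure layer: combines individual requests into batches.
public class BatchingCreditCheckService implements CreditCheckService {

    private final List<CustomerId> pending = new ArrayList<CustomerId>();
    private final RemoteCreditCheckClient client; // the true batch service
    private final int maxBatchSize;

    public BatchingCreditCheckService(RemoteCreditCheckClient client, int maxBatchSize) {
        this.client = client;
        this.maxBatchSize = maxBatchSize;
    }

    public synchronized void requestCreditCheckFor(CustomerId customerId) {
        pending.add(customerId);
        // One batch for every X requests...
        if (pending.size() >= maxBatchSize) {
            flush();
        }
    }

    // ...and a timer calls this at least every Y minutes, provided at
    // least one request has been made.
    public synchronized void flush() {
        if (pending.isEmpty()) {
            return;
        }
        client.sendBatch(new ArrayList<CustomerId>(pending));
        pending.clear();
    }
}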
Then we need a batch-driven routine that handles responses, splits them into individual messages and places them on a response queue for the event handler to process in the domain.

Sunday, December 11, 2011

What about PMs in Scrum-based development?

In Scrum terminology there is no such thing as a project manager (PM). However, in addition to the product owner (PO) and the scrum master (SM), Henrik Kniberg in his excellent "from the trenches" book talks about a "sponsor": someone who provides for the team's long-term staffing and work environment, as well as committing to remove impediments outside the team. He describes himself, in his position as development line manager, as having this role.

In my experience many large organisations are totally project-organised, bringing together people from different parts of the organisation as well as external consultants to form a team. In such organisations the traditional PM, if willing to accept an empowered team, is often ideal for this sponsor role.
That brings me to propose the following division of responsibility:

Product Owner:
Responsible for what the team is to build. Merges requirements from different stakeholders and manages priorities.

Scrum Master:
Guides the team and PO in following the agreed process. Makes sure the surroundings understand how the team works and do not interfere. The SM role is usually not full time and is therefore filled by a member of the team.

Project Manager:
Responsible for long-term staffing and external contacts in order to provide the team with the best possible environment to meet the needs of the organisation.

Friday, October 21, 2011

Ideas and books forming me as a Software Engineer


This September it was 15 years since I enrolled at university to study Software Engineering. The official name was “Informatics”, but I think software engineering (or authoring and gardening, as argued in another post) better describes what I was interested in and, luckily, what I'm currently doing. Since then I've gained some 10+ years of experience working in the industry and a whole 15 years of continuous learning. As an exercise for myself I've set out to go through the major ideas I've picked up and used, along with the people and books that collaboratively have formed me into the software engineer I am today. Hopefully you find this post at least a bit interesting and inspiring as well.


Object Orientation (OO)
In 1996, and the following years, OO was the new hot thing in software design, at least at my university. I learned to model domains (even though I didn't know that word by then) in objects using the OMT (Object Modeling Technique) notation by James Rumbaugh (later on one of the main contributors to UML). This way of abstracting the real world into objects with attributes, methods and relations very much appealed to me, as it offered the tools to unambiguously document the analysis result by drawing a diagram. I did also believe my teachers when they talked about OO as the enabler of universal libraries of small reusable objects. Now I, and I'm sure they too, know this is never going to happen, but the OO paradigm still brings structure for reuse within a system and hence is a great tool for honoring the DRY principle.
Ever since those first courses I started to think of OO as the way to design and build software systems. Later on I've come to see OO as the way of designing and building transaction based information systems where manipulation of state is the primary focus. Other types of systems might be better suited to use some other paradigm, but this type of systems for administering information is what I've been working with almost the entire time since I graduated. In university I learned the theory of OO, but we didn't actually turn any of our OO-designs into working software. We sure learned programming, but then it was to solve more traditional algorithmic problems. When leaving university I was all eager to learn how to do OO in real world projects. And boy, was I disappointed!
For several years I was employed in projects where I learned lots of things about programming and projects in general, but nowhere was OO used for anything but data diagrams (now drawn in UML notation, which I learned through Martin Fowler's “UML Distilled”), i.e. objects with attributes and relations but no behavior, that were later turned into database tables and data structures. All behavior was programmed into transaction scripts, either directly in the GUI components or as free-standing functions or stored procedures. I was desperately looking for a real-world example of a true OO implementation, because on my own I couldn't really figure out what it would look like in order to work properly. The first piece of the solution came when I got to read Craig Larman's “Applying UML and Patterns”. In this book he describes all aspects of OOA/OOD, how the software can be structured with boundary classes, controllers and entities, and shows by example code how it all comes together. Still, this isn't the approach intuitively encouraged by the structure in popular frameworks such as Java EE and Spring Framework. Out of the box they rather suggest a static service structure operating on DTOs, and that is, I think, the main reason why most systems are built with data and behavior separated, not using the true OO paradigm.


Domain-Driven Design (DDD)
In a Jfokus 2008 tutorial I was first introduced to Domain-Driven Design. The three-hour tutorial only scratched the surface of DDD, but it was enough for me to understand that this was the description of how the full OO paradigm is used to build real-world systems. Soon after, I got to read Eric Evans' book “Domain-Driven Design – Tackling complexity in the heart of software”, which I have posted about previously, and since then I firmly believe that DDD is the approach to use for designing and implementing complex software systems.
DDD is not a method, nor is it simply a technique or an architectural pattern. I would rather call it an approach to software development, including analysis, design and implementation.
Perhaps what most of my colleagues first think of when DDD is mentioned is the design sketches I tend to draw, with the business domain at the center and technical integration packages all around, the domain being built from the conceptual building blocks – entities, value objects, repositories and services – introduced by Eric Evans. But even though that part of the approach might be the easiest to pick up, and is also important in building working software with DDD, I think what makes the biggest difference to me is the ubiquitous language. This practice of building a model with a shared language based on, but with more precise definitions than, the language spoken by the domain experts is the basis of my philosophy to always build the software so that structure and logic follow the business domain. I think that is the only way to guarantee a design flexible enough that a small change to the business is always a small change to the software, never a big one. My experience is that the second you deviate from that principle, most often due to time pressure or pure laziness, you are asking for trouble further down the road. It might take a year or so, but it will come back and bite you!


Test Driven Development (TDD)
If DDD provides the approach to analysis, design and implementation, TDD is clearly what integrates that approach with quality awareness and assurance. TDD is my way to ensure a testable design and a correct implementation that is easy and safe to improve further. In a previous post, "Are you testinfected?", I told the story of how I got introduced to TDD, how I was skeptical at first and how I later on proved its usefulness to myself. Now I rarely, and definitely reluctantly, write any code without first writing a failing test.


A Clean Coder producing Clean Code
Ever since the start of my career I’ve wanted to produce code that I’m proud of. However, the benchmarks for that assessment have changed over the years. Now I consider well-tested and easy-to-read code the standard to achieve. Since any piece of code is read much more often than it is written, I think readability is the prime quality measurement for code. Lately I’ve been reading both Clean Code and The Clean Coder by Robert C Martin. Both are excellent books. The first one gives good advice on how to make code readable; the second really demands reflection on what it means to be a professional software developer.


Agile with Scrum and Kanban
Another interest of mine is software development processes and techniques. At first I learnt how to deliver in a world of waterfalls; later I got to experience RUP. Despite all the bad things said about RUP, I’ve always found its core messages on iterative, risk-driven development to be right and frankly quite agile. However, agile approaches such as Scrum and Kanban, and the development techniques gathered in XP, take this a long way further. The major inspiration here has been Henrik Kniberg, first in his Jfokus tutorial in 2008 and then through his two books on Scrum and Kanban.
Currently I’m working in a world of mixing and matching. I’m picking the best parts out of all those above to create a process and way of working that supports being as agile and lean as possible in an otherwise RUP-ified world. I think, at its current state, it is somewhat of a coming success.



“All problems are not worth solving”
That was one piece of advice I got from an experienced colleague of mine many years ago, when I was aspiring to go from being a sheer hacker to becoming a Capgemini Certified Software Engineer. He probably gave me a bunch of other advice too, but this is the one I still remember. And I also think, for me personally, it might be the most valuable advice I’ve ever got. Since problem solving is my trade, I happily go off designing solutions, so it is good to remember to think for a minute or two about whether the problem really deserves to be solved or if it is something we can just as happily live with. At the bottom of this is that all resources are limited and need to be applied where they give the best return on investment. On the other hand, we need to be careful not to spend too much time on deciding if a problem is worth solving, i.e. it is not always worth solving the prioritization problem if the solution offered is cheap enough.


The future
This pretty much sums up where I am today, but of course I will continue to evolve, and in the near future I see the following ideas as the most interesting.

- Strategic design – How DDD concepts such as Bounded Contexts are applied at a larger scale to make strategic decisions on how to factor architectures and apply different design and development strategies. In most discussions “architecture” seems to mean frameworks, databases, application servers and technical layering. I think architectures would be much more interesting and usable if they concerned business domains.
- CQRS – I have only touched the surface of this approach to structuring systems with separated command and query sides, and event stores and messaging instead of relational database schemas and object/relational mappers. However, I think it is an interesting approach and I definitely plan to dig in deeper.
- DCI – Even though DDD has come to be my preferred approach, I must admit the structure of the code base, with behavioral code localized in every domain object, has its drawbacks. With requirements based on use cases or user stories, it is sometimes hard to answer the not so infrequent question “how is this UC implemented?”. DCI (Data, Context, Interaction) solves this problem by keeping the behavioral code in role objects implemented within a context representing a single UC. The code in the role objects is then injected into the data objects representing the domain. To me, it sounds like an interesting add-on to the ideas of DDD, and in the future I’d like to continue exploring the benefits of this approach and the demands it poses on the implementation language.
- Functional programming – In order to build simpler programs that are safe to parallelize on multiple cores, functional programming has gained new interest in recent years. Up until now I’ve lived only in the world of imperative programming, and grasping the concepts and underlying mechanics of functional programming will pose a challenge. A challenge I’m happy to take on.
- Scala – Of all the new languages having emerged on the JVM I think Scala is the most interesting. It combines functional programming with additional powers in OO-programming without losing the benefits of static typing. All of which I think are interesting qualities. I think Scala might be the enabling programming language for many of the ideas I listed here for future exploration.