A path to escape the “No time for tests” death spiral.

We all know the “no time for tests” death spiral.

First we take exceptional shortcuts to ship something because “we have to ship it now, but we’ll do better next time”. Of course, next time we have even less time, we have to add features, and we can’t really fix the shortcuts. Adding features takes longer and longer because of the shortcuts we took and can’t remove. We start investing a lot of time just fixing bugs.

That’s the beginning of a painful road for the team in charge, and we all know it ends badly. Some will work overtime to manage the mess. Some will burn out, tired of being so unproductive. In these hard times we’ll hear things like: “It has always been done this way. I know it’s wrong, but what can I do? I’m just a developer…”

Needless to say, I don’t agree, and here is the path I take as a developer every time I meet this situation.


Step 1: Fix known bugs with unit tests

To introduce a quality culture in any company, start by adding unit tests every time a bug is identified outside of the development team (by the quality analysis team or in production).

When we talk about unit testing, people usually ask whether it is worth it. What’s the return on investment? How do we know what to test?

If a bug escapes our team, we can’t afford to let it happen again. It is worth it because a regression shows we are not mastering what we are building, and it is really annoying for users. We know what to test because we have an exact failing use case.

Use this opportunity to introduce a build server: we want to know before shipping whether we have a regression.
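
In practice this can be as small as one test that reproduces the reported case. A minimal sketch (the `apply_discount` function and its loyalty rule are hypothetical, just to illustrate pinning down the exact failing case):

```python
def apply_discount(price: float, loyalty_years: int) -> float:
    """Apply a 10% discount once for customers loyal for 2+ years."""
    if loyalty_years >= 2:
        return round(price * 0.90, 2)
    return price

# Regression test derived from the exact failing case reported by QA:
# the discount was applied twice for long-time customers.
def test_discount_is_applied_only_once():
    assert apply_discount(100.0, loyalty_years=5) == 90.0

def test_no_discount_for_new_customers():
    assert apply_discount(100.0, loyalty_years=0) == 100.0

test_discount_is_applied_only_once()
test_no_discount_for_new_customers()
```

Once this test runs on every build, that particular bug can never silently come back.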

Step 2: the tester strikes back with test first

Most people arguing against unit tests have had this experience:
1. They write some unit tests.
2. It is incredibly painful.
3. They stop testing, concluding that unit tests are incredibly painful.
(The official explanation is most likely: “It takes too much time to write tests.”)

Truth is: writing good code is incredibly hard; writing bad code is freaking easy.

Testing our software does nothing but reveal the nature of our code. It reveals how the code behaves when we try to use it. When code is hard to test, the code is bad; that is what we need to accept to overcome this step. No need to blame the tools or the methods. What we need is to learn how to improve the code in terms of decoupling and maintainability.

When it comes to an important piece of code, write the test first. It will lead to more testable code, because nobody likes painful things.
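
A sketch of what “test first” looks like, with a hypothetical `Cart` class: the test is written before the class exists, so it specifies the API we wish we had.

```python
# Test written first: it specifies the API we wish we had.
def test_cart_total_includes_shipping():
    cart = Cart(shipping_fee=5.0)
    cart.add_item(price=20.0)
    cart.add_item(price=15.0)
    assert cart.total() == 40.0

# Only then do we write the simplest code that satisfies the test.
class Cart:
    def __init__(self, shipping_fee: float):
        self.shipping_fee = shipping_fee
        self.items: list[float] = []

    def add_item(self, price: float) -> None:
        self.items.append(price)

    def total(self) -> float:
        return sum(self.items) + self.shipping_fee

test_cart_total_includes_shipping()
```

Because the test comes first, the class has no awkward setup, no hidden dependencies: the pain of testing it was designed away before it could appear.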


Step 3: Test Driven Development as a rule of thumb

After a few months, if we do not observe any improvement, it is worth understanding why. In my experience, it may be due to a lack of experience with unit tests (tools and/or methods), or a lack of discipline.

But if we observe an improvement thanks to steps 1 and 2, it’s easy to argue that, to improve even more, we should not ship any use case not covered by unit tests.

The good news is that our industry already uses an efficient method for that. It’s called Test Driven Development, and we can find tons of documentation online about it.

Step 4: Less stress and better work

Be disciplined, don’t give up on TDD, and within a few months the results will be clear. The team will feel more confident and less stressed. With less stress, fewer bugs will be shipped, and we’ll have more time to improve, and to drink beers.

No overtime required. Only good habits, discipline and continuous improvement around good practices.

It is important to communicate at this stage. It may take several years to reach such a result. We should remember where we came from and what we did to improve, and share our story with other teams.


It’s even better when we’re not alone

Help from someone who has already taken a similar path is gold, because she will point out what’s wrong in the code and how to improve it.
She will also be familiar with the tools and can help us avoid the silly mistakes we all make when discovering these practices.
She might also know how to manage tricky managerial issues.

It is with this kind of expertise that such a path can be achieved in a few months instead of a few years.


That’s what a developer can do

This path is not a myth, this path is not a theory.
I have followed this path 3 times already, and every single time it helped me gain confidence and credibility, and to be proud of my work.

That’s the kind of thing we can do, as developers, to change the way it has always been done.



Usable Software Diagram

“Who is the user of software design”?
Some Usable Software Design practitioners think that the user is the developer.

I think the answer is not the same in every Dimension.


The user of software design is the developer

Specifically in the Artefact dimension.
Maybe it was the first answer to come to mind because the question was initially asked by developers.
The developer uses the design to build a mind map (with abstraction and static representation) of the software. With this map, she is able to navigate through the codebase in order to implement a feature or fix a bug.
She will technically explain to the machine how it should behave, but other humans will need to understand these explanations. A good design provides a frame that helps the developer put responsibilities in the right place.

The user of software design is the final user

Specifically in the Runtime dimension.
If a user has to read our documentation, it is a failure and an opportunity for improvement.
The software design must be crystal clear for the user at Runtime, because she will also build a mind map (with dynamic representation) of the software.
Not the same map as the developer’s, because it has a different purpose. But the user will still navigate the UI thanks to this map, in order to achieve what is useful to perform her job.


The user of software design is the business

Specifically in the Decision dimension.
And the whole purpose of DDD is to bring back this reality as soon as it is forgotten in the Artefact or Runtime dimension.
Again, the business builds a different mind map (with business use cases), and uses it to take decisions. The goal is to improve the business using the power of software.


Navigating with our own map

The map metaphor is really powerful, and Eric Evans himself uses it a lot.

I also like this talk by Simon Brown about the Art of Visualizing Software Architecture. He compares drawing an architecture with drawing a map. His point is that we need to know who our drawing is for, and why. Because any representation is an abstraction of the full system, we need to understand what will be done based on our drawing in order to choose the right abstraction, at the right level.

Software design is used to navigate in the understanding of a software system. For this navigation to be efficient we want:
– a consistent level of abstraction
– abstractions aligned with our dimension


Usable Software Architecture Design

A Usable Design needs to be navigable.
A good visualisation of our Software Architecture will let us know whether our design is usable in terms of navigability.
Taking the dimension into account will help us build useful and navigable abstractions, because it implies a specific user in a specific context.

Don’t fall into the trap of mixing scales, or dimensions, in the same drawing. It will only result in the kind of messy diagram we are used to seeing in our enterprises.

Instead, choose one scale and one dimension to build an understandable diagram for one purpose.


Micro-service and bounded context clarification

“One micro-service should cover one bounded context”, asserts Vaughn Vernon.

It led to an interesting Twitter discussion with Greg Young, Romeu Moura and Mathias Verraes. Greg and Romeu disagree; I say it depends on the context.

To know if one can cover the other, we must know what a micro-service is and what a bounded context is. I find their definitions fuzzy, and I think this discussion is proof of that. I’ll try to clarify my point in this article.

Disclaimer: the following definitions are based on my current understanding of DDD strategic patterns, I don’t claim they are the only true definitions.


Micro-service definition

We believe we know what a micro-service is until we take a look at the Wikipedia definition.

“In contrast to SOA, micro-services gives an answer to the question of how big a service should be and how they should communicate with each other. In micro-services architecture, services should be small and the protocols should be lightweight.”

This definition is symptomatic of our industry: we explain something by something else we don’t understand either. Do we share a common understanding of what “small” and “lightweight” mean? I don’t think so.

A better explanation is the second property in the details of the Wikipedia page: “Services are organized around capabilities, e.g., user interface front-end, recommendation, logistics, billing, etc.

To be fair, this property has a lot of symmetry with how we define a bounded context.

Domain definition and problem space

To understand what a bounded context is, we first need to define what a domain is. It is a theoretical representation of a business, in the problem space. And it can be divided into sub-domains.

For example, in the well-known business of Amazon, the core domain is about selling stuff online, and there are different sub-domains, more or less generic, like shipping, billing, advertising and so on.

We are in the problem space because we have no idea (yet) how we should implement Amazon’s business; it is just a theoretical description of what they do.

DDD patterns that are applicable to the problem space (figure 1-4 page 10 of PPP of DDD)

Bounded context definition and solution space

A bounded context is a projection into the solution space that defines boundaries in the system implementing the domain. Bounded contexts are important because they allow us to define a ubiquitous language, valid within their boundaries. A product in a billing bounded context does not have the same meaning as in a shipping bounded context.

When this is badly done, we obtain a big ball of mud, i.e. a huge system without boundaries, where an update in the billing part may cause side effects in the shipping part.

We are in the solution space because it is an actual description of how we implement the domain. A bounded context does not necessarily match exactly one sub-domain. But having too many bounded contexts overlapping different sub-domains is definitely a design smell.
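
A minimal sketch of this idea (the classes and fields are illustrative): each context owns its own model of “product”, and only the identifier crosses the boundary.

```python
from dataclasses import dataclass

# In the billing context, a product is something with a price and a tax rate.
@dataclass
class BillingProduct:
    sku: str
    price: float
    tax_rate: float

    def price_with_tax(self) -> float:
        return round(self.price * (1 + self.tax_rate), 2)

# In the shipping context, the "same" product is a weight to be moved.
@dataclass
class ShippingProduct:
    sku: str
    weight_kg: float

    def shipping_cost(self, rate_per_kg: float) -> float:
        return round(self.weight_kg * rate_per_kg, 2)
```

Sharing one giant `Product` class between both contexts would couple billing changes to shipping changes; keeping two models, linked only by the `sku`, keeps each ubiquitous language consistent within its boundary.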

DDD patterns that are applicable to the solution space (figure 1-5 page 10 of PPP of DDD)

One micro-service should cover one bounded context?

Now that we have defined micro-service and bounded context, we can try to decide whether one micro-service should cover one bounded context.

And of course, we still cannot decide, because we still lack the (business) context. In some business contexts, a micro-service might fit one bounded context. In others, several micro-services will live within one bounded context. The only thing we can assume is that a micro-service overlapping different bounded contexts has something wrong with it.

As usual in any DDD discussion, context is king.


For more thoughts on bounded contexts and micro-services, there is this excellent podcast by Eric Evans.





Time-Aware design

I already advocated how an Event Driven Architecture allows our code to be more aligned with our business. Mike has the same conclusion, focusing on Practical Event Sourcing.
But why is such a system, by design, more aligned with the business?

Because it tells a story

Good code must tell a story.
It must tell a story to the business experts to help them take their next decision. It must tell a story to the developers to help them grab the flow of the system. It must tell a story to the users, to help them use the software.
A story is nothing but a succession of important Events.


Because it stores root causes instead of consequences

Storing the root causes of a decision is more interesting than storing its consequences. It is always possible to derive the current state of our system by re-applying the root cause and subsequent Events.
But if only the current state is known, it is harder to find out what happened in the past.
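
A minimal event-sourcing sketch, using a hypothetical bank account: the current state is never stored, it is derived by replaying the recorded Events.

```python
from dataclasses import dataclass

# The Events record what happened (the root causes), not the balance.
@dataclass(frozen=True)
class Deposited:
    amount: float

@dataclass(frozen=True)
class Withdrawn:
    amount: float

def replay(events) -> float:
    """Derive the current balance by re-applying every recorded Event."""
    balance = 0.0
    for event in events:
        if isinstance(event, Deposited):
            balance += event.amount
        elif isinstance(event, Withdrawn):
            balance -= event.amount
    return balance

history = [Deposited(100.0), Withdrawn(30.0), Deposited(10.0)]
assert replay(history) == 80.0  # current state, derived on demand
```

From the history we can always recompute the balance; from a stored balance of 80.0 alone, we could never recover the story of how we got there.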


Because it is a « free » logging system

Why do we use logging at Runtime? Because we are trying to guess the story.
If by design we track every important Event, the logging system is « free », in the sense that no more infrastructure is required to support logging.
Events can be used retroactively by the business to look for interesting correlation in the data.


Event Driven limits

There is no free lunch though.
Event Driven systems have very low coupling, which improves maintainability but hurts navigability. When interactions between classes are done through Events, it is harder to follow the flow of a use case.
It will be hard to build a team of developers used to this design, as it is not mainstream. The team will need training to perform the required mindset shift.
Also, it is often implemented with NoSQL storage, a less mature technology than SQL storage.
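
A minimal in-memory publish/subscribe sketch (class names are illustrative) shows both sides of the tradeoff: the publisher never references its subscribers, which is great for coupling but means the flow of a use case is no longer visible in one place.

```python
from collections import defaultdict

class EventBus:
    """A tiny in-memory publish/subscribe mechanism (illustrative)."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event):
        for handler in self._handlers[type(event)]:
            handler(event)

class OrderPlaced:
    def __init__(self, order_id):
        self.order_id = order_id

bus = EventBus()
notified = []
# The publisher of OrderPlaced never knows who listens: low coupling,
# but you cannot read the use case top to bottom in one class.
bus.subscribe(OrderPlaced, lambda event: notified.append(event.order_id))
bus.publish(OrderPlaced("42"))
assert notified == ["42"]
```

To follow the flow, a developer has to search for every subscription to `OrderPlaced`, which is exactly the navigability cost described above.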


Event Driven Architecture introduces Time in design

We know we lack a representation of time in software, despite its fundamental impact. Even if it has limits, Event Driven Architecture is a way to fill this gap by introducing time at the heart of the software.
It facilitates the alignment of our code with our business, and with the runtime.
It allows less friction between the different dimensions.
It is a Time-Aware design.



Thanks Gautier for your suggestion to replace TimeFull by Time-Aware.




The force of Time in software

Disclaimer: This post comes in an unusual form. Carlo had the kindness to review it, and his feedback was so valuable that I actually decided to add it in the post.

To be clear, I know I’m still an alchemist. But I try to show what kind of insight the “physics of software” has brought me. Very far from perfect, probably even far from pertinent, but still a little better to me than my previous understanding of software design.

My hope is to make the physics of software (like DDD) more accessible to less informed people like me.

Carlo’s comments will appear in blue.

I recently tried to explain why Time can be considered a dimension, like Decision, Artifact and Runtime. But this dimension has a particularity: it impacts the 3 other dimensions. It is hard to find principles or practices belonging to this dimension only.

Could it be better, then, to consider Time as a force instead of a property?


I tend to see things quite differently here:

  • Decisions / artifacts / runtime are spaces, not dimensions. Each of those is a multi-dimensional space.
  • Just like in newtonian physics you have a 3D space and then you have time, and time is the axis along which any change happens (movement happens in time, waves require time, etc), so in the (newtonian) physics of software you need time to complement each space. As you already know, I tend to see two different kinds of time in the artifact – runtime space.
  • While I understand the wish to go Einstein and model software after a single “spacetime” notion, at this stage I don’t see the value of doing so. It doesn’t mean it has no value – just that at the scale I’m working right now I don’t see the value and I lose useful concepts. Just like most engineering is done at the newtonian level, and only when needed we move to quantum mechanics or to relativity, so I suppose it will be appropriate for software. At this stage we don’t even have newtonian physics so it’s hard to say 🙂
  • I really have a hard time considering time a force, because I still need time to understand the effect of any force. You apply a force and something happens -> in time.

We are software Alchemists

Carlo explains how close we are to alchemy when talking about software design.

Instead of a scientific approach, we tend to have philosophical debates on what could be the best options to design software, using terms we barely understand.

We talk about “code smells”, describing how we feel about code. We are looking for the “philosopher’s stone”. How do we define “too much coupling” in a non-ambiguous way? We can’t; still, we argue for hours on this topic.


Force in software

In science, to define a property (say, thermal conductivity) we need to understand the forces altering the object (say, a temperature gradient). Then we can precisely define, in a given context (a piece of wood under standard pressure), what its thermal conductivity is.

We do not have this precision in software yet, but we could look for forces and observe how our design reacts under given conditions.

Time as a force

Like gravity in physics, Time in software impacts all the dimensions.

Time applied to the Decision dimension is the acceptance that we can’t foresee the future. The best known strategy is: adapt to survive. Practices around lean management emerged to cope with this fact. Flat hierarchies are pushed because they allow a company to adapt faster.

Time reveals the adaptability of our business model.

In the Artifact dimension, applying Time shows whether your code is supple enough to cope with new requirements. It is the difference between making it work and making it clean. It could work quickly, but it takes more thinking to make it clean, ready to evolve, and protected against regression. This is where we need low coupling and automated testing.

Time reveals the adaptability of our code.

To have a stable Runtime over Time, we need to be scalable, secure and resilient to hardware failure. This is hard to achieve, because it requires the software to react depending on how it is used. It is a complex dimension, due to the interaction between software and hardware.

Time reveals the adaptability of our runtime.


This entire section is interesting, but it’s still coming from the “alchemic” perspective. It is all very reasonable, but it is written in the style of all the (alchemic :-)) stuff to which we are constantly exposed. “Time reveals the adaptability of our code”. That’s true, and if you write it that way, it seems like time is taking an active role, like it’s actually going in and doing stuff. But it’s not. From a scientific perspective, you won’t be able to formalize, prove, or replicate the “agency” of time. If you try to be more precise, to decompose things, you’ll see that changes happen in the decision space (over conventional time), and that those changes are the forces impacting your artifacts. Conventional time is “just” the dimension that allows changes to happen in the decision space. That process can be semi-ordered or largely chaotic. But the agent is not time. The agent is change.


The adaptability property

Starting from a force and trying to find properties in software already seems a bit clearer to me. Not perfect, but at least we can better describe what we are talking about. Adaptability in reaction to Time makes sense in the different dimensions of software. Maybe we could find some way to measure it?


Ok, this is close to the perspective of the physics of software; it’s just that I would say “adaptability in reaction to change” (not “in reaction to time”) and then investigate the nature of change (which is something I’m actually working on). See, once you begin with change, you have something you can further decompose. Is that change a creation of something new? Is it something similar to something else we had? Is it ordered / planned change? Is it a modification of something we had? Which kind of modification? In physics we have many properties that in software are collapsed under “adaptability”: plasticity, extensibility, malleability, flexibility, compressibility, elasticity, etc. A real physics requires this level of decomposition. But (and this is the real point here) if you only have “time” as a force, you can’t decompose further. If your force is change, then yes, you can. A first step was in my ncrafts talk, which is more about “planned growth”.


The predictability property

Thinking about it, I realize this was my problem when speaking about Technical Debt. What is technical debt? How can we define « too much » technical debt? It explains why I need to speak about predictability instead. Again, it is not perfect, but it is less ambiguous. And Time is definitely a force helping to reveal the predictability property in the different dimensions as well.

Ok, I would say something like the above. Predictability is interesting: it’s about having a reasonable model for changes, and it’s what I mean when I talk about planned vs organic change (terms that I’ve taken from urbanism). I still see time as the dimension that allows changes to happen one at a time 🙂 and not as the force.


The philosopher’s stone

We have been looking for the philosopher’s stone for too long in our industry. It would be healthier to have a more scientific approach.
We are still very far from it, but looking for forces and properties may be a good start to improve the state of the art of software development.



Thanks a lot to Brian Gibson and Carlo Pescio for the review.

Please read Carlo’s post for more explanation about Time in the physics of software.


Coding in N dimensions

I discovered Carlo Pescio‘s work on the physics of software this year. He has a scientific approach to software design. This video from DDD Europe is a good way to discover his work.

Combined with the amazing talk by James Coplien, it has given me a lot to process since January to improve my understanding of software design.


Designing in 3 dimensions

Carlo speaks of the 3 dimensions in software: Runtime, Artifact (Codebase) and Decision (Business).

He explains that there are interesting symmetries between these 3 dimensions. For example, a change in decision requires a change in the codebase to produce a change in behaviour at runtime.
We understand the consequences of developing hard-to-change code: we make the business hard to evolve. Immutable business decisions are utopian, so code must be able to adapt.

It resonates with James’s talk about symmetries in design.


A fourth dimension?

We often forget to represent time when speaking about design, as James explains.

It is a shame, because use cases (what matters to the business) involve Time. We should add it to the three dimensions already exposed.

There are symmetries between Codebase and Time, for example. Designing software for one-shot use is really different from designing software expected to run for the next decade.


Developer maturity

These dimensions could be useful to estimate our maturity as developers: it is relative to the number of dimensions we take into account when designing software.

Here are a few typical focuses related to each dimension:

Runtime:
We focus on technologies when thinking in the runtime dimension. We want to make it work, as quickly as possible. We are concerned with performance and optimization to have fast execution.

Artifact:
When focusing on code, we are interested in its organization. We may apply GoF patterns, SOLID principles and other technical practices to keep our code easy to maintain.

Decision:
From a business perspective, we focus on features. What will they bring to our users? Why will they be impactful for the company? We want to satisfy the customers buying our product.

Time:
Time is interesting because it brings practices into the 3 other dimensions.
Time in the Runtime dimension includes failure recovery, availability and logging.
Time in the Artifact dimension includes version control and practices to avoid regression.
Time in the Decision dimension is about long term vs short term, and foreseeing what the market will need.


Maturity will improve with time

The difference may be huge between a mono-dimensional and a multi-dimensional developer.

The good news is we can (and must) improve with time, by learning in these different dimensions. It is normal to focus on only a few of them at first, but opening our minds to other dimensions will improve our skills as software developers.

We already have different names for these multi dimensional developers, depending on the dimensions they value. We call them Craftsmen, fullstack programmers, 10x engineers, DevOps or DDD practitioners and so on.

Do we give them these names because calling them “developer” would suggest a mono-dimensional person?



Thanks Brian Gibson for the feedback.


Make the implicit explicit

It is really hard to manage, or explain what we can’t see.

Based on this statement, we should continuously look for ways to reveal the important implicit concepts in software. It is our chance to explain what software is to people who don’t look at lines of code all day long.


Questions like « How do I introduce -whatever practice or concept- into my daily work? » all have the same answer. We have to make explicit whatever we want to introduce, and reveal the consequences of what is lacking.

We have to explain which problem is solved, and how.

As long as an implicit problem is not made explicit, nobody will care; it is human nature. Thinking, Fast and Slow explains it very well with WYSIATI: What You See Is All There Is. We are naturally biased to neglect what we can’t see.


Make the design explicit

There are several ways to reveal a design, from high-level design (modules and contexts) to low-level design (classes and functions). It starts with manual diagrams, but the design is mostly revealed by unit testing and living documentation.

Manual diagrams are an explicit representation of what we think our software should look like.
Living documentation generates a representation from our code, giving information about the actual code, so we know whether it is aligned with what we think it should be.
Unit testing is both a design and a testing tool. Code that cannot be tested easily is poorly designed.

We could also use static code analysis with products like Sonar or NDepend, to track cyclomatic complexity or coupling.

These tools among others help to reveal our design, not only what we think it is, but also what it actually is. Based on this feedback, we can sense and act to improve the software. Without this information, it’s almost impossible to know what could be improved, because we have potentially no idea of what is wrong in the first place.


Make the predictability (debt) explicit

Missing code coverage, voluntary design shortcuts, build breaks and defects are interesting metrics to track to reveal our software predictability. We can also track less technical indicators like overall team and customer mood.

We could hardly keep our software predictable without revealing the consequences of low predictability.


Make the domain explicit

Our code should be aligned as much as possible with our domain.
Good alignment between code and domain means that a domain problem becomes a code problem.
The alternative is to deal with domain problems AND code problems, which could be really different.

If we’re not aligned with our domain, we’ll bring technical problems instead of business solutions.

We should reveal our domain in our code, using names meaningful to the business (Ubiquitous Language), and protecting domain-centric code from infrastructure with hexagonal architecture (for example).
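
A minimal hexagonal-architecture sketch (the port, adapter and names are all illustrative): the domain defines the port it needs, and the infrastructure side provides an adapter for it.

```python
from typing import Protocol

class InvoiceRepository(Protocol):
    """Port: an interface owned by the domain, not by the database."""
    def save(self, invoice_id: str, amount: float) -> None: ...

def bill_customer(repo: InvoiceRepository, invoice_id: str, amount: float) -> None:
    """Domain logic: speaks the ubiquitous language, knows no database."""
    if amount <= 0:
        raise ValueError("an invoice amount must be positive")
    repo.save(invoice_id, amount)

class InMemoryInvoiceRepository:
    """Adapter: an infrastructure detail, swappable without touching the domain."""
    def __init__(self) -> None:
        self.saved: dict[str, float] = {}

    def save(self, invoice_id: str, amount: float) -> None:
        self.saved[invoice_id] = amount

repo = InMemoryInvoiceRepository()
bill_customer(repo, "INV-1", 250.0)
assert repo.saved == {"INV-1": 250.0}
```

The domain function can be tested with the in-memory adapter, while production plugs in a real persistence adapter behind the same port; the business rule never changes either way.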

DDD core concept

Making the implicit explicit is a core concept of DDD. And it resonates a lot with the idea of WYSIATI.

It is a key to improving our collaboration with non-IT people, because they can’t understand something invisible to them.

It is a key to improving our software design, because we are not able to manage what we can’t see.

Making the implicit explicit is hard because it is endless work, but it is required to have stable foundations in a DDD project.




As usual, Brian Gibson helped me improve the language in this post. Thanks man!




Patterns, Principles and Practices of DDD

Patterns, Principles, and Practices of Domain-Driven Design was written by Scott Millett and Nick Tune. It gives a holistic view of DDD, from strategic to tactical patterns, and provides many examples to explain the concepts.

This book was presented to me by Bruno Boucard in this way: « It’s a very complete and modern view on DDD, and very few people have read it. » I agree, and it’s a shame.


Why have so few people read it?

The more I read about DDD, the more I wonder if it is wise to explain all the DDD concepts, including examples, at once. It produces huge books (this one is 792 pages), contributing to the idea that DDD is complicated. Its sheer size alone could discourage newcomers.

Also, concepts and examples do not deteriorate at the same rate.

Explanations of strategic and tactical patterns are much the same as in the Blue Book. A few building blocks were added by the authors, like Domain Events, but nothing really new emerges. It is still worthwhile to write about: this content can stay up to date for many years.

DDD implementations, on the other hand, evolve a lot. The emergence of CQRS/ES and NoSQL storage, hexagonal architecture, and other practices have changed the way we implement DDD solutions over the last decade. These solutions did not exist in the Blue Book, and we can hope that in 10 years they will be outdated by better alternatives.


But is it always good to have examples?

It is really hard to show examples, because DDD is contextual. It is courageous to show examples anyway, but I would not push them into production. I don’t blame the authors though: they wrote disclaimers explaining these are short examples for learning purposes.

My point is, it was interesting to write such a complete book, and I find it a shame that it will be half outdated once we find better ways to implement DDD. Maybe it would have served the authors better to write the examples in a dedicated book, and keep another book focused on explaining the concepts.


Why should you read it anyway?

It is still a must read to have a modern understanding of DDD. It covers everything from integrating bounded contexts to implementing an event sourced aggregate and much more.

If you’re already familiar with DDD, it will deepen your knowledge. If not, it is a very good start, even if you don’t grasp all the subtleties at first.



Another article kindly reviewed by Brian Gibson.


The Technical Debt Myth

We assume that less design effort allows us to produce features faster, in the short term. Less design effort generally means less abstraction and tighter coupling, in order to produce working code faster.

But we tend to overlook the fact that it slows down future modifications, even of unrelated features, because of tight coupling. We usually call this « technical debt », a term first coined by Ward Cunningham. A better name was suggested by Steve Freeman: the unhedged call option.


Financial debt vs. Unhedged call option

Financial debt is predictable. You know how much you get, and how much you will pay back. A debt could be useful from a business perspective. A cash boost at the right time can create a major competitive advantage.

An unhedged call option also comes from the financial world, and is a really risky operation because it has unlimited downside. The buyer pays a premium to decide later if he wants to buy. The seller collects the premium, and will have to sell if the buyer decides to buy. It is not predictable for the seller.

Transposed to software, the difference from the concept of debt is the predictability of the amount of work required to fulfill our engagement.

When we write crappy code (tightly coupled and without tests), we collect the premium: we immediately benefit from the new feature.

But as soon as we have to maintain or evolve this codebase, the option is called, and we have to pay an unpredictable amount of time (thus money) to achieve our goal.

Why? Who?

Every time this kind of tradeoff is required, we should ask who is asking for it, why, and who is going to pay for it. Fun fact: those who ask for it (and directly benefit from it) are often not those who will pay for it.

The sales team, for example, may ask for a quick hack because the customers « really, really need it yesterday ». But it may be paid for by the production team, because next time the customer wants something, we will still have to produce it quickly, without any regression. That’s when we will be under more pressure, and may do some overtime.

In the end everyone will be impacted, because more pressure means more bugs, more regressions, and finally unhappy users. No company can survive unhappy users forever.

That’s why good design matters. We want to produce current and future features in a sustainable and predictable way.

But what’s good design?

We all try to do our best, but some of us lack knowledge and/or feedback. Plus, good design is contextual.

But regardless of our context, good designs have common traits. They allow us to think clearly about our software, and to evolve it easily. They let us know where to add/modify/fix a feature. This evolvability gives us options on how to add/modify/fix that feature.

Software must be able to evolve because we know we don’t know what the software will become. We must design to be able to discover (as explained by Dan North in the three ages of innovation).

Decoupling is then required, because we need to think about small parts in isolation, without side effects. This kind of good design allows us to produce predictable software: robust, resilient, and free of regressions as it evolves.
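As a minimal sketch of that kind of decoupling (the names `RateProvider` and `price_with_tax` are hypothetical, chosen for illustration): the business logic depends on an abstraction rather than on concrete infrastructure, so it can be reasoned about, and tested, in complete isolation.

```python
from typing import Protocol

class RateProvider(Protocol):
    """Abstraction: the domain logic only knows this contract."""
    def rate_for(self, country: str) -> float: ...

def price_with_tax(amount: float, country: str, rates: RateProvider) -> float:
    """Pure business logic, with no idea where the rates come from."""
    return round(amount * (1 + rates.rate_for(country)), 2)

# In a test, a trivial stub replaces the real infrastructure (database, web service...).
class FixedRates:
    def rate_for(self, country: str) -> float:
        return 0.20

print(price_with_tax(100.0, "FR", FixedRates()))  # 120.0
```

Swapping `FixedRates` for a real adapter changes nothing in `price_with_tax`: that is the side-effect-free isolation the paragraph above is after.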

If we use shortcuts, and couple our code in order to rush it into production, we must be aware there will be an unpredictable amount of time to pay to get back into a predictable state.
It does not mean we should never do it, but we all have to understand this trade-off before asking for a “debt”.


What about speaking of predictability?

The more shortcuts you take, the less predictable your software will be. If you invest more in well-designed software, you get the opposite effect.

Unfortunately it is hard to judge, as we usually think we design software well. A good way to know if we are on the right track is to look at how predictable the software we produce is. Are the customers happy? The production team? The sales team? Do we have very few regressions? Do we produce features at a regular speed?

If not, we should consider investing more in design and refactoring, before entering a vicious circle of unmanageable software.


The Technical Debt Myth

That’s why the comparison with a financial debt is sub-optimal. Money debt is predictable, whereas software debt is not. Unhedged call option is a better name, but still comes from the financial world.

Maybe we should stop the financial comparisons and speak of predictability instead. I find it a better description of the consequences of shortcuts in design.



Thanks Brian Gibson, always here to help me to improve!



Daily DDD

From the mouths of many DDD practitioners, and from Eric Evans himself, we often hear that DDD is hard. In my opinion it would be more correct to say DDD is huge. Trying to understand it from scratch, as a whole, is like trying to understand a huge codebase as a whole. It’s impressive and discouraging.

But if we split it into small, organized logical components, it becomes easier. Eventually we’ll see how these components work well together, but until then we’ll grab only what we can, and that’s fine.

After 5 years practicing DDD, I still discover new things every few weeks. That’s ok, I’m not a DDD expert, and still it brings me a lot in my daily work. I’d like to share it in this post, to show you don’t need to know everything about DDD to get a lot from it.


Business first, code after

The first great lesson I learned from DDD was: understand the business. Then we can look for the best technical implementation (if any) to solve its problems.

For me it was a real revelation; it changed the posture I had as a developer. I no longer work to do what someone thinks I have to do. I work with the business, challenge their solutions, and try to find practical alternatives when I believe they are technically wrong.

In my day-to-day work it means talking to domain experts to analyze problems, and proposing technical implementations aligned with the domain and easy to modify. I welcome changes as opportunities.

Problem analysis  

We can solve lots of problems with software, but it’s hard to figure out which problems are worth fixing. Unless it’s an obvious problem (but was it really a problem in this case?), we have to dig to fix the root problem, not its consequences.

Domain experts usually come to us with solutions. The trick is to respectfully challenge them and their solutions. Be a learner: do your best to understand their work and what problem they are trying to solve.

The coffee machine is definitely the place to be to learn things in an informal way.

I often use the 5 whys, and graphical representations, to crunch knowledge. Drawing is a fantastic tool for understanding complex problems. If you can’t draw it, you can’t understand it. I also organize more formal exchanges like Event Storming or Impact Mapping, depending on the context.

Whenever it’s possible, I spend some time with the actual users of a system. Usually it’s where we learn the most, seeing how wrong our assumptions were.


Align the technical implementation with the domain

Have you ever tried to show your code to a domain expert? You should. Our code tells a story, and it must be understandable by domain experts. The next developer should learn about our domain just by looking at our code.

It’s where we need an Ubiquitous Language, and a clear separation between domain and infrastructure concerns. The Onion Architecture allows that, but it’s not the only way.
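To make the separation concrete, here is a minimal Onion Architecture sketch. The domain (`Order`, `OrderRepository`, `place_order`) is a hypothetical example of mine, not from the article; the point is that the domain layer speaks the Ubiquitous Language and owns the interfaces it needs, while infrastructure lives outside and implements them.

```python
from dataclasses import dataclass
from typing import Protocol

# --- Domain layer: no infrastructure dependency anywhere ---
@dataclass
class Order:
    reference: str
    total: float

class OrderRepository(Protocol):
    """Port defined by the domain, in the domain's language."""
    def save(self, order: Order) -> None: ...

def place_order(reference: str, total: float, repo: OrderRepository) -> Order:
    order = Order(reference, total)
    repo.save(order)
    return order

# --- Infrastructure layer: adapters implement the domain's ports ---
class InMemoryOrderRepository:
    """Could just as well be a SQL or document-store adapter."""
    def __init__(self) -> None:
        self.orders: list = []
    def save(self, order: Order) -> None:
        self.orders.append(order)

repo = InMemoryOrderRepository()
place_order("ORD-1", 42.0, repo)
print(len(repo.orders))  # 1
```

The dependency arrow points inward only: infrastructure knows the domain, never the reverse, which is what lets a domain expert read the inner layer without tripping over technical noise.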

Another great way to align our implementation with our domain is Living Documentation. This topic would deserve a whole blog post (or you can read Cyrille’s book). I would just say that it is not only BDD. Living Documentation is more like: how do you generate documentation from your code, not how do you write code from documentation.

Concrete examples are the generation of:
– Change logs from commits (using Git)
– API documentation understandable by domain experts (using Sandcastle)
– Workflow descriptions (using static code analysis and tags)
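For the first item, a hedged sketch of what “change log from commits” can look like. In practice the messages would come from `git log --oneline`; here a hard-coded list stands in so the grouping logic stays visible, and the `feat:`/`fix:` prefixes are an assumed commit convention, not something Git enforces.

```python
from collections import defaultdict

def changelog(messages):
    """Group one-line commit messages into change-log sections by prefix."""
    sections = defaultdict(list)
    for message in messages:
        kind, _, summary = message.partition(": ")
        sections[kind].append(summary)
    return dict(sections)

# Stand-in for `git log --oneline` output:
log = ["feat: export invoices as PDF",
       "fix: reject negative quantities",
       "feat: add VAT to order totals"]
print(changelog(log))
```

If commit messages are sloppy, the generated log is unreadable, which is exactly the feedback effect described just below: generation only works when the underlying material is clean.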

It’s amazing to see the direct impact of these techniques on your codebase. To generate useful stuff from your code, it needs to be clean and aligned with your domain, in terms of naming and meaning. If the quality is poor, you won’t be able to generate anything understandable. If code and domain are not aligned, you will receive complaints from the domain experts about how badly classes/namespaces/modules are named. How awesome is that?


Keep our implementation easy to modify

The business may (will) change; how do we hope to stay aligned if we resist change? It gets on my nerves when I hear a developer complain because business people “do not know what they want”. Of course they don’t, they can’t forecast the future. Our job is precisely to provide great software, even when requirements and constraints change a lot. I would even argue it’s one of the main differences between high-quality software and cheap software. Good software is resilient to change; bad software is a pain for its developers and users whenever something new happens. What is the point of being “Soft” if we are unable to change?

Our industry knows how to develop software that is easy to change. We use unit testing to provide quick feedback when anything changes in the system. A SOLID design keeps coupling low, which helps us change parts of the system without breaking everything.
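That quick feedback loop can be as small as this. The discount rule below is a hypothetical example of mine: the tests pin down today’s behaviour, so tomorrow’s change to the rule fails immediately on the developer’s machine instead of reaching production.

```python
def discounted(total: float, is_loyal: bool) -> float:
    """Assumed business rule for the example: loyal customers get 10% off."""
    return round(total * 0.9, 2) if is_loyal else total

# Two tiny unit tests: run on every change for immediate feedback.
assert discounted(100.0, True) == 90.0    # loyal customers get 10% off
assert discounted(100.0, False) == 100.0  # others pay full price
print("all tests passed")
```

Wired into the build server mentioned in step 1, these assertions turn “did I break something?” from a production surprise into a seconds-long answer.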

To have a more complete feedback loop, we use DevOps to achieve continuous integration and continuous deployment.


DDD is a lot of good habits

We can’t implement DDD without problem analysis, technical alignment with the domain, and a codebase that is easy to modify. I tried to show that DDD is not something hard. I’m sure you already use, or at least have heard about, some of the practices discussed in this post. You don’t need to learn them all at once. Be a constant learner, one practice after the other.

Learn about these practices, try to implement them, and you also will get a lot from DDD, in your daily work.



Many thanks to my usual reviewer Brian Gibson.
