I discovered DHH during the "TDD is dead" trend, with mixed feelings. I highly value (smart) people with different opinions about testing, craftsmanship and software, but at first glance, DHH looked pretty bumptious to me.

Who is this guy I had never heard of before, who decides that one of my favourite development tools is dead?

TDD is dead?

First of all, I listened to him and his arguments. I always knew TDD had limits, but I rarely met them. In a nutshell, he argues that TDD may hurt design. After some thought, my feeling is that TDD does not fit well with invasive frameworks like Ruby on Rails. Does that make TDD or Rails dead? I don't think so; they are great tools, useful in given contexts.

DHH and I are not working in the same context. He is more in the fast web app context, whereas I am more in the heavy line-of-business context (usually not even web). The good news is that his thoughts about my style of work are less biased than those of people from my own context.

Why I read it

I have a very simple way to select the books I read: the more I hear about a book, the higher it climbs in my reading list. I selected Rework this way, without knowing that DHH was one of the authors (the other being Jason Fried).
It was a good surprise, and I definitely recognize his style, close to his blog's. I also recognize William Zinsser's influence, something we share. It is a really pleasant book to read.


The content is full of short take-aways. I think I would have benefited more if I had read it a few years ago. Some advice sounds clearly obvious now. Still, I admit it wasn't that obvious when I discovered the world of software and entrepreneurship in 2009.


Favourite take-aways

– Scaling is not mandatory
– You need less than you think
– Beware the workaholics
– Go to sleep
– Don't confuse enthusiasm with priority
– Pick a fight


Should you read it?

I think the book remains a must-read, especially if you like the direct style of William Zinsser, and/or if you are both an entrepreneur and a software developer.

It helped me learn more about DHH's universe.



The over-engineering theory

I never was comfortable with the idea of “complex domain”.

"Use DDD only in complex domains" is a mantra used by DDD experts to avoid the Silver Bullet Syndrome. It is really convenient to avoid uncomfortable discussions.

Why do I mind?

We'd like to know what a complex domain is, in order to know when to use DDD.

From a strategic perspective, I already argued that we could always use it. The remaining question is: when should I use DDD from a tactical perspective? When should I implement hexagonal architecture, CQRS/ES and other fancy stuff like that?

Not in CRUD software?

Whatever the implementation we choose, software is a business abstraction. When we choose a CRUD implementation, we bet that the future software will have few features. We bet we won't need many abstractions. The cognitive load due to the domain will stay low, implying we can mix business logic with technical terms without building unmanageable software.

In my experience, lots of applications began as simple CRUD, but they quickly evolved into something more complex and useful for the business. Unfortunately, we keep coding them as simple CRUD, which results in a massive mess. Certainly not on purpose: adding just a bit of complexity day after day makes it hard to grasp the whole complexity of the solution. We are the boiling frog.

The over-engineering theory

The challenge is to find at which point our simple CRUD application becomes a more complex line-of-business application.

My theory is that we consider tactical DDD patterns over-engineering all the time, because we are unable to feel the complexity of the software we are building day after day. Worse, we think that the knowledge we accumulate about this complexity makes us valuable, as the dungeon master. We are deeply biased when assessing our own work.

But what happens when we go to work on another project? The will to rewrite the full software from scratch. Drop the frog into hot water, and it won't stay for long.


Looking for the inflection point

We'd like to find when it becomes interesting to implement tactical DDD patterns, despite their cost.

Part of the solution may be to check how long a newcomer needs to become productive. When it is too long, there is room for improvement, and tactical patterns can help decrease the code's complexity.
Another solution may be to ask for a few audits by different DDD experts, but that could be more expensive.
A simpler solution may be to assume that the software will become more complex. It implies we should look, day after day, at which tactical pattern can fit our needs.

Remember there is a world of practices in tactical DDD patterns. Using all of them all the time is not desirable. Picking the right one at the right time is the actual difficulty.

When should I use tactical DDD patterns?

I do not have a definitive answer for that, just an empirical observation: I’ve seen countless software implemented in a CRUD way that would be greatly improved by a DDD tactical approach. I’ve seen no software implemented with a tactical DDD approach that would be greatly improved by a CRUD implementation.

Just saying.


"If you think good design is expensive, you should look at the cost of bad design." – Dr. Ralf Speth, CEO of Jaguar




It is not your domain

From a technical perspective, I would argue that a DDD project is nothing but a clear and protected domain.
Working on lots of legacy code, though, I observe common mistakes in identifying what is inside the domain and what is outside.


Your data model is not your domain model

It is a really common mistake to use a data model as a domain model. I think ORMs and CRUD have their share of responsibility in this misconception. A data model is a representation of the model optimized for data storage.

Building objects in code from this model using an ORM doesn't make it a domain model. It is still a data model, still optimized for data storage. It is no longer SQL tables, but nothing more.

The data model has the responsibility to store data, whereas the domain model has the responsibility to handle business logic. An application can be considered CRUD only if there is no business logic around the data model. Even in this (rare) case, your data model is not your domain model. It just means that, as no business logic is involved, we don't need any abstraction to manage it, and thus we have no domain model.
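To make the contrast concrete, here is a minimal sketch (in TypeScript, with hypothetical names) of the same "account" seen twice: once as a data model, shaped for storage, and once as a domain model, shaped for business logic.

```typescript
// Data model: optimized for data storage, no behaviour.
interface AccountRow {
  id: number;
  balance_cents: number;
  currency: string;
}

// Domain model: optimized for business logic, no storage concern.
class Account {
  constructor(private balanceCents: number) {}

  withdraw(amountCents: number): void {
    // business rule lives here, not in the storage layer
    if (amountCents > this.balanceCents) {
      throw new Error("Insufficient funds");
    }
    this.balanceCents -= amountCents;
  }

  get balance(): number {
    return this.balanceCents;
  }
}

const account = new Account(10_00);
account.withdraw(3_00);
console.log(account.balance); // 700
```

Mapping `AccountRow` to objects through an ORM would not turn it into `Account`: the rule about insufficient funds still has to live somewhere.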


Your view model is not your domain model

Another common mistake is to add just one layer between the data model and the view. This layer has different names (Presenter, Controller, ViewModel… whatever) in different patterns and languages, but the idea is the same: abstracting the view using code.

The mistake is to believe this is the place to add business logic. This layer already has the responsibility of abstracting the view; we don't want to add any other responsibility to it.


Your data model should not notify your view

I go crazy when I find a model object with the responsibility to notify the view when some data changes (often using data binding). It literally means that this data or domain object also has the responsibility to manage the way data is displayed.

As usual, it is all about separation of concerns.

Your domain model should depend on (almost) nothing

The domain model is a set of classes or functions, grouped in a domain layer, containing all the business logic. This layer is absolutely free from the infrastructure. It should be a pure abstraction of the business.

A domain model is still valid if we change the way we render the software. A domain model is still valid if we change the data storage. A domain model is still valid if we decide to stop using our favourite Framework.

A domain model depends on one thing and one thing only: the business.
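As a sketch of what this independence can look like (TypeScript, all names hypothetical): the domain layer defines the abstraction it needs, and the infrastructure implements it, never the other way around. Swapping the storage does not touch the domain.

```typescript
// --- domain layer: business logic only, no technical dependency ---
interface OrderRepository {
  save(orderId: string): void;
}

class PlaceOrder {
  constructor(private repository: OrderRepository) {}

  execute(orderId: string): string {
    // storage details stay outside the domain
    this.repository.save(orderId);
    return `order ${orderId} placed`;
  }
}

// --- infrastructure layer: could be SQL, a file, or a test fake ---
class InMemoryOrderRepository implements OrderRepository {
  saved: string[] = [];
  save(orderId: string): void {
    this.saved.push(orderId);
  }
}

const repo = new InMemoryOrderRepository();
console.log(new PlaceOrder(repo).execute("42")); // "order 42 placed"
```

Replacing `InMemoryOrderRepository` with a database-backed one leaves `PlaceOrder` untouched: the domain model stays valid.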







Just Blog It

For a year now, I have been trying to write more blog posts, performing at least one pomodoro per week to write drafts. I don't feel pressure about publishing these drafts, though: sometimes I do, sometimes I throw them away, and sometimes I merge several into a single post.

I think it might be worth sharing what I have learned.

Writing for yourself

The first rule of blogging is: don't write for others, write for yourself. Some people might find what you write interesting, but writing with the goal of being interesting is not possible. It's by writing consistently about your thoughts that some people might become regular readers. I stopped wondering whether my posts were good or even interesting. I just publish my current state of mind.

Writing for yourself is the art of expressing what you think about, in a clear way. It is a useful journey in itself, a personal experience. Having readers is a pleasant side effect for our ego, not the goal.

Writing in English

I don't think it's mandatory; still, I don't regret this choice.

Mainly because it improves my English skills.

But also, according to WordPress, I published 28 blog posts for 6,500 unique readers. Half of them are not from France: I have readers in the UK, Belgium, Spain, Germany, Poland and the US, for example.
It's a wonderful gift to have feedback from all over the world, and only an English blog allows that. As you can see, my English is far from perfect, but it's enough to share and learn with the whole world.

What I learned (in no specific order)

Just blog it

I enjoyed writing on my blog this year. I feel much more comfortable since I started writing for myself rather than for others. I'm not bilingual, and I don't need to be, even to write in English.

I hope this post will help some of you enjoy writing on your own blogs as well.


Thanks to all my kind reviewers over the year.
Special thanks to Brian Gibson for his time and patience to help me improve my English skills.

And of course, thanks to all my readers, all the best for this New Year!


Abstracting the reality

Let’s define the reality as the consequence of all previous Facts.

Nobody knows every fact from the past. And when we share some facts, we don't give them the same importance. We build our reality based on the facts we believe we know, and give them importance based on what we think matters.

In other words, there are several realities. A reality can be based on some false facts; still, it is a reality for someone. Not to be confused with the concept of truth: there is only one truth, and the problem is that nobody knows it (hint: most people assume their reality is the truth).

Building a shared abstract reality (i.e. software) is exceptionally hard.

Spaghetti Realities

Avoiding the mixing of those realities is the whole point of DDD. It takes lots of time and analysis to understand the realities of a business; it is an iterative task. Domain experts share a reality because they share some business processes. For example, someone from the marketing team will understand another colleague from marketing. But she won't always understand a colleague from the shipping team. They face different challenges, they use different languages: they work in different realities.

Strategic patterns from DDD help us build context-specific solutions, in order to build software matching only one reality, avoiding an unmanageable level of complexity in the shared abstraction.

How to choose a reality

Essential complexity is the solution to a problem in its purest form. Accidental complexity is when the solution we develop is more complicated than required by the problem.

In the excellent paper Out of the Tar Pit, Ben Moseley and Peter Marks explain how we bring technical accidental complexity into our software, especially with state, control (the order in which things happen), and volume (the size in terms of lines of code).

I think we often miss the opportunity to look for accidental complexity in the domain as well. We assume the problem is already well understood, whereas most of the time there is room for improvement.

How to implement the reality

If we agree that the reality is the consequence of all previous Facts, it makes sense to look for an implementation where these facts are represented. Such an implementation would be easier to translate into the reality, and vice versa. It helps describe the essence of software: an automated decision maker based on past facts.

This implementation is already a reality

The user sends a wish to the software ("I'd like to do that…"), the software takes a decision based on its own reality (i.e. the facts it knows), and it sends feedback data to the user to help her find her next wish.

Just replace the word Fact with Event, Wish with Command, and User Feedback with Query to find an existing implementation of these concepts.
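A minimal sketch of this fact-based loop (TypeScript, all names hypothetical): the wish is a command, the decision is taken against the accumulated events, and a query derives feedback from them.

```typescript
// The facts this reality is made of.
type AccountEvent = { type: "Deposited" | "Withdrawn"; amount: number };

class AccountDecider {
  private events: AccountEvent[] = [];

  deposit(amount: number): void {
    this.events.push({ type: "Deposited", amount });
  }

  // Command: the user's wish ("I'd like to withdraw...").
  // The decision is based on past facts only.
  withdraw(amount: number): boolean {
    if (this.balance() < amount) return false;
    this.events.push({ type: "Withdrawn", amount });
    return true;
  }

  // Query: feedback helping the user find her next wish.
  balance(): number {
    return this.events.reduce(
      (sum, e) => sum + (e.type === "Deposited" ? e.amount : -e.amount),
      0
    );
  }
}
```

The state is nothing but a fold over the facts, which is why such an implementation translates so naturally back into the reality it models.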


About Inheritance and Composition

Like dependency injection, inheritance and composition are easily misunderstood.

Remember that programs written in object-oriented programming (OOP) are designed by making them out of objects that interact with one another. Technically, we have two possibilities to share code in an OOP style: either composition or inheritance.



The purpose of composition is to make wholes out of parts. These parts (or components) will collaborate to achieve a common goal.
Composition has a natural translation in the real world. A car has four wheels. Without the wheels, the car is still a car, but it loses its ability to move.
Back to code: a bank account might need a print service. Without the print service, the bank account is still a bank account, but it loses its ability to be printed. It is a "has a" relationship.
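A possible sketch of this "has a" relationship (TypeScript, hypothetical names): the print service is an optional part, and the account remains an account without it.

```typescript
class PrintService {
  print(text: string): string {
    return `PRINTED: ${text}`;
  }
}

class BankAccount {
  // the account "has a" printer; it is a part, not an ancestor
  constructor(private printer?: PrintService) {}

  printStatement(): string {
    if (!this.printer) {
      // without the print service, it is still a bank account
      return "printing unavailable";
    }
    return this.printer.print("statement");
  }
}
```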



Inheritance allows expressing and ordering concepts from generalized to specialized in a classification hierarchy.
The meaning of a class is visible in the inputs and outputs (the public contract) of the class. As a child of a class, we implicitly accept responsibility for the public contract of the superclass. We are tightly coupled to this superclass: it is an "is a" relationship.
A human being is a mammal. No exceptions: all mammals have a neocortex. Without the neocortex, we are no longer mammals. Back to code, I may design a Human child class inheriting a Mammal base class, both sharing a NeoCortex property.
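A possible sketch of this "is a" relationship (TypeScript, hypothetical names): the shared property lives in the base class precisely because it is true for every child, no exceptions.

```typescript
class Mammal {
  // true for every mammal, so it belongs in the base class
  readonly hasNeocortex = true;
}

class Human extends Mammal {
  speak(): string {
    return "hello";
  }
}

// A Human "is a" Mammal: it inherits the full contract.
const h = new Human();
console.log(h.hasNeocortex, h.speak());
```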


Common mistake

The root of the problem is that sharing code can be done either by composition or by inheritance, but the impacts are not the same.
Any class can be composed of other classes because it requires their capabilities.
But with inheritance, the key is to decide whether we accept the responsibilities of the superclass's public contract. If our parent has some capabilities or properties we don't need, we deny a part of the responsibility. The "is a" relationship is broken as soon as there is something in the parent that isn't true for the child. Adding responsibilities to the parent because "some children might need it" is not OK: either all children need it, or we need another abstraction, or we need composition.
It is a problem because inheritance implies tight coupling. Refactoring a capability can impact all the children of a base class. If one of the children does not use the refactored capability, it should not be impacted.



Lots of modern languages use the concept of ViewModel, even if it may be called otherwise. It is appealing to build a ViewModelBase with everything any child might need, like the ability to be printed, or to show a dialog box.
What happens when a child inherits this ViewModelBase but doesn't need to be printed or to show dialogs? It accepts responsibilities that do not make sense for it. The signature and the meaning of the class are blurred. Without the printing or dialog abilities, a ViewModel is still a ViewModel.

On the other hand, implementing a RaisePropertyChanged function in ViewModelBase makes sense, because any ViewModel is by definition glue between the business logic and the view. It needs the ability to inform the view when a property is updated. Without this ability, it's no longer a ViewModel.
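A minimal sketch of such a focused ViewModelBase (TypeScript, hypothetical names; the real .NET mechanism is the INotifyPropertyChanged interface, simplified here to a plain callback list). It carries only what every ViewModel needs: change notification, nothing else.

```typescript
type Listener = (propertyName: string) => void;

abstract class ViewModelBase {
  private listeners: Listener[] = [];

  // the view subscribes here
  onPropertyChanged(listener: Listener): void {
    this.listeners.push(listener);
  }

  // the one capability every ViewModel shares
  protected raisePropertyChanged(propertyName: string): void {
    this.listeners.forEach((l) => l(propertyName));
  }
}

class CustomerViewModel extends ViewModelBase {
  private _name = "";

  set name(value: string) {
    this._name = value;
    this.raisePropertyChanged("name"); // glue to the view, nothing more
  }
}
```

No printing, no dialogs: every child genuinely needs the whole parent contract, so the "is a" relationship holds.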

How to choose?

The mantra "favor composition over inheritance" is helpful, but like every mantra it is limited. It is true because sharing behaviours is always possible by composition, and composition is more decoupled, but it is not always the wisest choice.

I think the "has a" vs "is a" distinction, and the reminder that every child must take full responsibility for the parent's contract in case of inheritance, are enough to help choose the best option.




Developers don’t like people

Any idea who William M. Cannon and Dallas K. Perry are? Neither had I until a few months ago, and that's a shame, because they bear a share of responsibility for the image of software developers. They "discovered" that we don't like people.


How do you hire for a new job?

Recruiters are still pretty bad at hiring developers, mainly because they don't understand the job we do, despite the fact that software is everywhere in our lives. But imagine when software was not mainstream: they had to hire thousands of developers with absolutely no clue what a developer was.

Still, big IT companies in the '50s and '60s had to find them, so they commissioned two psychologists (William M. Cannon and Dallas K. Perry) to look for the profile of programmers.

I bought the paper they wrote in 1966. It is interesting to note they were looking for happy programmers more than for good programmers. Indeed, they used the Strong Vocational Interest Blank (basically an MCQ to find your interest in different areas) to profile 1,378 computer programmers. Then they built a scale to predict how happy someone could be as a developer.


Paper quotes

“Long ago, E. K. Strong, Jr. advanced the notion that individuals interested in the same activities and things, whether these were directly job-related or not, tend to find satisfaction in the same type of employment.”

"[…] it was anticipated that a new scale for measuring the interests of computer programmers might be developed. Such a scale would be especially valuable in counselling, guidance, and recruiting as a way of identifying appropriately qualified persons so that someone could say to them something like, 'Hey, you there! Your future is in computer programming!'"

"The interests of computer programmers were found to be most like those of optometrists, chemists, engineers, production managers, math-science teachers, public administrators, and senior certified public accountants."

“Responses of programmers to individual SVIB items, when compared with responses of other business and professional men, showed three things. First, they indicate that computer programmers are crazy about finding the answers to problems and solving all sorts of puzzles, including all forms of mathematical and mechanical activities. Second, they don’t like people — they dislike activities involving close personal interaction; they generally are more interested in things than in people. And third, they show a liking for research activities and a tendency to prefer varied and even risky activities, while at the same time avoiding routine and regimentation.”

“Although these scales can predict satisfaction in the field, they cannot tell us how good a programmer an individual is likely to be.”


A fundamental mistake

I'm not a psychologist, but I still believe this analysis has some serious errors.

I can't understand where the conclusion that we don't like people comes from. Worse, I imagine some candidates might have been rejected because they did like people!

This assumption hurts our industry a lot. The most challenging part of big software projects is people management and communication. This is exactly what agile methods and practices like TDD, BDD, DDD and pair/mob programming have been trying to fix for decades.

Assuming that we developers should not like people is a huge mistake, and denies the fact that good software is mainly good teamwork. Maybe lots of happy programmers don't like people, but there is no need to "like activities involving close personal interaction" to be a good co-worker. Trust, respect and honesty are usually enough.

Good programmers are good co-workers above all.


A shared responsibility: the media

Responsibilities are most often shared; no need to look for a single culprit. The media also play a part in the image of software developers. In every movie with a geek I remember from the '90s, the geek was a stereotype: the puny introvert with glasses.
Interestingly enough, our image has somewhat evolved. There are more and more movies and series where we are the heroes. But there is still a lot of effort needed to improve diversity in our representation.


A shared responsibility: us

This image has become natural for us too, and we might look for it when trying to hire a good engineer. Aren't we skeptical when speaking with a good-looking person in a suit, for example? I guess we have been hiring in our own image for a while, still assuming that the little guy with glasses is probably a better programmer than the attractive girl.

So let’s take our responsibilities and fight our own biases, especially when hiring software developers.



Thanks to Sylvain Chabert for sending me this article; it inspired this post.

PS: I don't really know where to put it, but I really like Reinstedt's discussion of the paper. So here it is (or at least a part of it):

"I'm reminded a little bit, since I did research on the Strong Vocational Interest Blank a couple of years ago as part of the Computer Personnel Research Group, of the supposedly true story that I heard in some work that I was taking in psychology.
The story is that in New York many years ago, during prohibition days, several psychologists from different disciplines of psychology–social psychology, religious psychology, social welfare workers, etc.–did work to find out why the people were on skid row, how they got there. The social welfare workers found they got there because they came from underprivileged, socio-economic homes; the religious counsellors found they got there because they had lost God along the way; and the other psychologists found different reasons why they got there. Prohibitionists said they got there because they turned to John Barleycorn. So it strikes me as a little more than interesting that certain results that I thought I would get, I got; and results which are contrary to these which Dallas and Bill thought they’d get, they got.”



IOC Containers, Dependency Injection and Service locator

Having worked mainly on .NET legacy projects for a few years, I often meet the same kinds of problems. One of them is bad usage of Inversion of Control (IOC) containers, or of a Service Locator, to magically access everything everywhere in the code, just like a giant singleton.

I think these tools are not well understood. They are used as part of frameworks, or as a "good practice", without understanding the reason for their existence.

So let's go back to basics.

What is Inversion Of Control? (IOC)

It is when something controls your code, instead of your code controlling that something.

For example, when we need a library, we call it to delegate some work (like reading an MP3 file). Hence we control it.
When we use a framework, it calls us to handle some events (like a user clicking on a button). Hence there is an inversion of control.

What is Dependency Injection?

It is when dependencies are injected into a class, instead of this class managing its own dependencies.

For example, we often need to save something using a repository: the class has a dependency on a repository. We could directly instantiate the repository in our class. As a result, we won't be able to write unit tests for this class, because it will require a database connection, no matter the context.
Instead, we can inject the repository, and decide, depending on our context, whether to inject a test repository or a runtime repository.
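A minimal sketch of this idea (TypeScript, hypothetical names): the same class runs against a database-backed repository in production and a fake one in tests, without changing a line of its code.

```typescript
interface Repository {
  save(data: string): void;
}

class OrderService {
  // the dependency is injected; no database knowledge in here
  constructor(private repository: Repository) {}

  placeOrder(order: string): void {
    this.repository.save(order);
  }
}

// Test context: inject a fake instead of a database-backed repository.
class FakeRepository implements Repository {
  saved: string[] = [];
  save(data: string): void {
    this.saved.push(data);
  }
}

const fake = new FakeRepository();
new OrderService(fake).placeOrder("order-1");
console.log(fake.saved); // fake.saved is now ["order-1"]
```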

Why inject dependencies into a class?

In OOP, computer programs are designed by making them out of objects that interact with one another.
Dependency Injection allows changing the injected class implementation, hence the behaviour, depending on the execution context (at least test or production). It creates relationships from high-level modules to low-level modules. It is an efficient way to apply the D from SOLID: the Dependency Inversion Principle.

It allows us to think about a given object in isolation (by isolating its behaviour), and is really convenient for unit testing. Using our own classes through tests is just dogfooding, and encourages us to write simpler classes with fewer responsibilities.


How can we inject behaviour into a class?

There are three well-known possibilities:

Setter injection
A setter with the required dependency in the (public) contract of the class.

public class MyClass
{
    private MyRepository _myRepository;

    public void SetMyRepository(MyRepository myRepository)
    {
        _myRepository = myRepository;
    }
}

Constructor injection
The constructor is used to inject all the dependencies at once.

public class MyClass
{
    private readonly MyRepository _myRepository;

    public MyClass(MyRepository myRepository)
    {
        _myRepository = myRepository;
    }
}

Service locator
A class (the locator) is called to retrieve a service for us.

public class MyClass
{
    private readonly MyRepository _myRepository;

    public MyClass()
    {
        _myRepository = ServiceLocator.GetInstance<MyRepository>();
    }
}

What is an IOC container?

Inversion of Control container is an unfortunate name for a component managing Dependency Injection in frameworks or libraries.

When we do dependency injection using a Service Locator, we explicitly call the locator to get the dependency. IOC containers do dependency injection using constructor or setter injection; there is no explicit request. Here lies the inversion of control that is supposed to justify the name.

It's a bad name because we do not care about the concept of control here; the benefit is dependency injection. In other words, to understand IOC containers, forget that they do IOC, and just remember they do dependency injection.
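To demystify the idea, here is a toy container (TypeScript, all names hypothetical). Real containers do much more (lifetimes, auto-wiring by reflection), but the core job is the same: hold factories, and feed objects their dependencies through their constructors.

```typescript
class Container {
  private factories = new Map<string, () => unknown>();

  register<T>(key: string, factory: () => T): void {
    this.factories.set(key, factory);
  }

  resolve<T>(key: string): T {
    const factory = this.factories.get(key);
    if (!factory) throw new Error(`Nothing registered for ${key}`);
    return factory() as T;
  }
}

class Logger {
  log(msg: string): string {
    return `[log] ${msg}`;
  }
}

class Service {
  // plain constructor injection; Service never asks for its dependency
  constructor(private logger: Logger) {}
  run(): string {
    return this.logger.log("running");
  }
}

const container = new Container();
container.register("logger", () => new Logger());
// the container wires the constructor, not Service itself
container.register("service", () => new Service(container.resolve<Logger>("logger")));

console.log(container.resolve<Service>("service").run()); // "[log] running"
```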


How not to use it

Here is a list of usages, seen in different projects, that hurt maintenance and readability.

– Using both Dependency Injection by constructor and a Service Locator.
Choosing one of the three Dependency Injection methods (constructor, setter or service locator) is up to your style of code, but mixing several of them makes it hard to think about the code. Be consistent across the project; the three methods achieve the same goal, and there is no benefit in mixing them.
The lack of consistency hurts the usability of the design.
As a direct, painful example, the configuration for unit testing becomes harder, because some dependencies have to be mocked in the IOC container and some others in the service locator.

– Using a static singleton as a service locator.
Singleton is probably the most used pattern; unfortunately, it is not always wise. It is especially hard to write independent unit tests in this case, because all the dependencies are registered in the same instance of the locator.


Why avoid the Service Locator

The argument for using a service locator is usually that it reduces the pain of refactoring: the only dependency we have is the service locator, because the service locator can reach anything!
I believe it hides dependencies in order to write code faster. It literally means that each class could depend on anything, and the only way to know what the dependencies are is to look at the implementation.


Why avoid Setter Injection

The main problem with setter injection is that the class is generally not in a valid state just after its creation. I hate when I can create an object, but it raises an exception when I call a function because I didn't initialize the right values. I prefer explicit constructors, creating objects in a valid state, or not creating them at all. I believe the role of the constructor is to ensure that the new object is in a valid state.


Why Constructor Injection makes sense

I prefer to use an IOC container only to instantiate the first classes needed for the system to start. From this point, I manually create the other classes, and make all the dependencies explicit in every constructor. Some would argue it is too verbose, but I see it as a guide.
When the constructor is too big, it's a code smell encouraging me to think again about my design. Lots of dependencies imply lots of responsibilities.

And we know how to deal with a class that has too many responsibilities, don't we?


To learn more on this topic, read the excellent work of Mark Seemann and Martin Fowler.


Thanks Thibaud Desodt and Thomas Pierrain for suggesting improvements.


Learning from the past

"As a principal objective, we must attempt to minimize the burden of documentation, the burden neither we nor our predecessors have been able to bear successfully"

“My first proposal is that each software organization must determine and proclaim that great designers* are as important to its success as great managers are, and that they can be expected to be similarly nurtured and rewarded”
* Understand designers as software engineers

“There is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement within a decade in productivity, in reliability, in simplicity”

"Adding manpower to a late software project makes it later."

“For the human makers of things, the incompletenesses and inconsistencies of our ideas become clear only during implementation.”

“The brain alone is intricate beyond mapping, powerful beyond imitation, rich in diversity, self-protecting, and self-renewing. The secret is that it is grown, not built. So it must be with our software systems.”

“testing is usually the most mis-scheduled part of programming.”


An echo from the past

All these quotes were written 50 years ago, in the excellent book The Mythical Man-Month.

Sometimes I think: "Wow, we have known this for more than 50 years, and we still have to fight for these practices in the companies I work for? Are we developers stupid people who don't learn from the past? Don't we try to improve our industry? Or maybe it's just a French problem?"


An unexpected answer

Unexpectedly, the answer came from a discussion with my grandmother.

A few weeks ago, I talked with her for the first time about World War Two: how it was, where she lived, what she saw.
Even though I know the History, I was shocked when she explained to me how Jewish people were held in hotels, and how the Nazis came to take them away with violence, in places I know very well.
She was 13 at that time. She told me how a soldier put a machine gun to her chest, yelling at her in German, even though she didn't speak German at all. The soldier believed my grandmother was a Jewish kid trying to run away from the Jewish hotel. Fortunately, my great-grandfather arrived just in time and was able to save her. She told me many other frightening things.

I realized that if my grandmother had been more nervous, or if this soldier had been less calm, or if my great-grandfather had taken longer to find his daughter, I might not be here to write this post. I am lucky, but how many people did not have this chance? How many lives were broken because of racism and hate?


Back from the Godwin point

My point is: developers are not stupid, but developers are people. And most people don't read, or even wish to learn. Most people (including myself) unconsciously look for confirmation of their biases about how things should work.

The less knowledge we have, the more we think we know. We can solve it by staying open and learning more, until we accept that we know nothing.


Repeat the past

Most people believe that the past is the past, and that today is totally different. Most people don't talk about World War Two with their grandparents.

This is why nationalist parties have more and more influence in Europe.
This is why the U.K. left the E.U., despite the fact that the E.U. was the best invention against hate, racism and war in many centuries.
This is why the most powerful country in the world can vote for a racist billionaire to make America "great again".

This is why I have to explain things we have known for 50 years in our industry in every company I work for (though in perspective, it does not really matter after all).


Happy end

The good news is that it is easy to be "a good software engineer": read, learn, go out of your comfort zone and always challenge yourself. Avoid people and methods that pretend to solve everything.

And to start learning about the history of software development, I highly recommend The Mythical Man-Month.




Much of the focus in our industry is on how we can write code faster. We want super-productive IDEs, high-level languages and huge frameworks to protect us from the hard reality of building software.

In my opinion, it's more important to find how we can write maintainable code by focusing on readability.



Lots of frameworks promise faster development by “getting rid of” the plumbing, this dirty stuff we don’t want to deal with.

Frameworks might not be too bad if they didn't hurt readability. But as almost all of them do, long-term maintenance is a nightmare, because we have a new dependency. It is the classic case of spending 90% of the time fighting the framework for the 10% of things that don't match our use case.

By trying to write an application faster, we may lose the freedom to update our software when we want. Worse, we are constrained by naming or architecture conventions, making our code less readable.


Property Dependency Injection

The only argument I hear to justify dependency injection by properties is that it is easier to write. And it's a really bad argument.

It is much more valuable to be able to easily understand all the dependencies of an object. One of the best solutions in an OO language is dependency injection by constructor. It allows us to quickly see the responsibilities of a given class.

It's really irritating to find out at runtime that I'm missing a dependency because it was injected via a property, or worse, directly by calling a static IOC container in an initialize function.


Choose freedom

Instead of losing readability to a framework or other antipatterns, we can choose freedom. We must find lightweight libraries to match our needs, and not be afraid to write code. It will be more maintainable than a framework.

This is where a DDD approach makes sense: by identifying the core domain, we know which part of the system must be as pure as possible. In a more generic domain, using a framework may be beneficial. As usual, context is king.

The core domain is the main value of our software: do we prefer quick writing, or easy reading?


The wrong fight

It's appealing to believe that we should write software faster in order to improve the business we work for. I believe it's a fundamental mistake. To be faster, we need to make the software easy to evolve. And it evolves faster if we can quickly understand its purpose.

How many hours per day do we write code?
How many hours per day do we read code?
So why are we still writing new frameworks instead of focusing on clean code and readability?



Thanks Samuel Pecoul for suggesting improvements
