The PHP Way of Life Manifesto

Amaury Bouchard
License CC BY-SA 4.0

Introduction: PHP, the Web's Language #

A brief history #

PHP's history is well known and thoroughly documented [PHP documentation 1, Lerdorf 2]. It is a language designed specifically for the Web, born in the 1990s as the Web was spreading rapidly.

It gained popularity among developers because it was simpler than the main solutions available at the time for creating dynamic websites, such as C and Perl. At the same time, its syntax was similar to theirs, allowing developers to adapt to it quickly.
It was also popular with hosting companies, as its configuration file − featuring directives like max_execution_time and memory_limit − enabled hosting numerous sites on the same server without the risk of a bug on one site disrupting the others. [Lerdorf 2]

The philosophy of PHP #

One of PHP's core principles is that it serves both hobbyist developers and large corporations equally well.
As its creator puts it:

I always wanted to make sure that we scale so that the weekend warriors could bring up some interesting stuff without having to read 30 different books. [Lerdorf 2]
One of the strengths of PHP is that it scales. It scales up to the largest sites in the world while at the same time it scales down to weekend warriors. Doing both in the same codebase is a challenge. [Lerdorf 3]

This idea is fundamental, and it should never be overlooked.

PHP is a humble language, and this humility has been part of its DNA since the very beginning:

PHP is about as exciting as your toothbrush. You use it every day, it does the job, it is a simple tool, so what? Who would want to read about toothbrushes? [Lerdorf 4]
I've never thought of PHP as more than a simple tool to solve problems. [Lerdorf 5]

While much time has passed since those words were spoken, the philosophy behind them has shaped the language's evolution. PHP remains a language for everyone, not an elitist one.

PHP and modern development #

Since the creation of PHP, the practice of web development has been enriched considerably. Software engineering has enabled the creation of large-scale applications.
PHP itself has significantly evolved, offering a highly robust object model and continually improving performance.

Today, not everyone embraces the new complexity of web development. The article The New Internet by Avery Pennarun (CEO and co-founder of Tailscale) perfectly captures the spirit of the Manifesto:

What we saw was, a lot of things have gotten better since the 1990s. Computers are literally millions of times faster. 100x as many people can be programmers now because they aren’t stuck with just C++ and assembly language.
But, also things have gotten worse. A lot of day-to-day things that used to be easy for developers, are now hard. That was unexpected.

Instead, the tech industry has evolved into an absolute mess. And it’s getting worse instead of better! Our tower of complexity is now so tall that we seriously consider slathering LLMs on top to write the incomprehensible code in the incomprehensible frameworks so we don’t have to.

I read a post recently where someone bragged about using kubernetes to scale all the way up to 500,000 page views per month. But that’s 0.2 requests per second. I could serve that from my phone, on battery power, and it would spend most of its time asleep.

In modern computing, we tolerate long builds, and then docker builds, and uploading to container stores, and multi-minute deploy times before the program runs, and even longer times before the log output gets uploaded to somewhere you can see it, all because we’ve been tricked into this idea that everything has to scale. People get excited about deploying to the latest upstart container hosting service because it only takes tens of seconds to roll out, instead of minutes. But on my slow computer in the 1990s, I could run a perl or python program that started in milliseconds and served way more than 0.2 requests per second, and printed logs to stderr right away so I could edit-run-debug over and over again, multiple times per minute.

As an industry, we’ve spent all our time making the hard things possible, and none of our time making the easy things easy.

Modern software development is mostly junky overhead. [Pennarun 6]

There are three reasons for this growing complexity:

Never forget that the simplest code is the one that performs best, both in the short term (faster to develop, easier to test, quicker to execute) and in the long term (easier to maintain and more resilient to regressions).

The “most advanced” people often use simple solutions indistinguishable from people who don’t know what they are doing. Average people are often in the “knows enough to be dangerous” category by over-thinking and over-working and over-processing everything out of lack of more complete experience to discover simpler and cleaner solutions. [Stancliff 12]

The best code, produced by the most experienced people, tends to look like novice code that happens to work.

Could it be that best practices are designed to make sure mediocre programmers working together produce decent code?
After all, actual novice programmers write code similar to the best programmers except that it doesn't work. [Hacker News 13]

Under the guise of professionalization, the PHP ecosystem has taken much inspiration from the world of Java and lost sight of what made the language strong — and by no means amateurish.
But it doesn't have to be that way.

To oversimplify, human nature follows cycles of increasing complexity and simplification. In computing, each technology tends to grow more complex (e.g. C ➡ C++, Lisp ➡ Haskell, Java ➡ JEE, CGI ➡ WebSphere) until a simpler alternative emerges (e.g. Fortran ➡ BASIC; C ➡ Perl; C++ ➡ Java; Perl ➡ PHP/Python; Objective-C ➡ Swift).

Today, after having embraced complexity, the PHP ecosystem is ready to return to its roots and find greater balance.

Arrays: The preferred way to transfer data, simple and flexible #

PHP arrays have long been a distinctive data type among programming languages. They are extremely versatile and can be used as numerically indexed lists, associative arrays, or a combination of both, while preserving insertion order.

Very early versions of PHP v1 actually had distinct list, map and set implementations but I replaced those early on with a unified hybrid ordered map implementation and just called it the "Array" type. The thinking was that in almost all situations in a Web app, an ordered map can solve the problem. It looks and acts enough like an array that it can be used in situations that call for an array and it eliminates the problem of presenting the user with 3 or 4 types and related keywords and syntax that forces them to try to figure out which one to use when. This decision was made in 1994 and apart from a few pedantic naming complaints over the years, I think this particular decision has stood the test of time. [Lerdorf 3]

It’s worth noting that other programming languages have since updated their equivalents (called dictionary or hash) to behave similarly by default.

Arrays are easy for beginners to understand and use. Adding elements, iterating over them, and serializing/deserializing them to and from JSON are all straightforward. A large number of native functions are available for manipulating them.

PHP arrays can represent data structures such as stacks, queues, or records.
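
As a quick illustration of this versatility (the values are invented for the example), a single array can act as a record, a list, and a stack, and round-trip through JSON:

$user = [
    'id'    => 42,
    'name'  => 'Alice',
    'roles' => ['admin', 'editor'],   // nested numeric list
];
$user['email'] = 'alice@example.com'; // add an entry at any time

foreach ($user['roles'] as $role) {   // straightforward iteration
    echo $role, "\n";
}

$user['roles'][] = 'reviewer';        // push, stack-style
$last = array_pop($user['roles']);    // pop

$json  = json_encode($user);          // serialize to JSON
$again = json_decode($json, true);    // get a plain array back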

Code that uses arrays is easy to read and understand; it's standalone, with no additional objects to inspect.

When you need to transmit data, start by using arrays. Their flexibility allows you to adapt your development easily as needs evolve.
Most of the time, this will be sufficient, even in the long term. Your code will remain explicit and easy to evolve.

PHP provides more specialized data structures (SplDoublyLinkedList, SplStack, SplQueue, SplHeap, SplFixedArray...). However, only use them when truly necessary. Avoid premature optimization — performance bottlenecks are usually found elsewhere.

Don't rush into creating DTOs (Data Transfer Objects — objects without business logic, used solely for transporting data) from the outset, as this will only add unnecessary complexity to your code.

We've concluded the same thing in a few projects at work. We started with naive Object implementations, and then scaled back – purely for reasons of simplicity – to passing around raw DataSets. [Atwood 14]

Of course, there are cases where an object is more appropriate, particularly when building or maintaining a library (to avoid writing defensive code that validates everything it receives as input). However, within your application code, this should not be your default approach.
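
Within application code, a minimal sketch of this array-first approach (names and values are invented for the example):

$order = [
    'reference' => 'CMD-2024-001',
    'lines'     => [
        ['label' => 'Book',  'price' => 12.50],
        ['label' => 'Mouse', 'price' => 24.90],
    ],
];

// No DTO class, no getters or setters: a plain array carries the data.
function buildOrderSummary(array $order): array
{
    return [
        'reference' => $order['reference'],
        'total'     => array_sum(array_column($order['lines'], 'price')),
    ];
}

$summary = buildOrderSummary($order);
echo $summary['total']; // 37.4

// Adding a new key later (a 'currency', a 'discount') requires no class change.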

Objects: Perfect for organizing code, best used with procedural logic #

A bit of context #

Object-oriented programming is a concept that has existed for decades, with the first implementations in the Simula language in 1967 [Wikipedia 15]. Most object-oriented languages are based on the concepts of inheritance, encapsulation, and polymorphism. PHP introduced a robust object-oriented model with the release of PHP 5 in 2004.

Today, the use of objects feels natural. Software engineering is now widespread, and design patterns save time by offering developers a shared language.

However, a growing number of voices argue that object-oriented programming is not the ultimate silver bullet for all development challenges.

In this sense, some believe that OOP (object-oriented programming) is suitable for certain types of development, but not for all.

I believe objects, classes, polymorphism, and even inheritance can be valid tools in some cases. However, contra OOP, these are niche cases rather than the pervasive default. [Will 16]
The OO design concept initially proved valuable in the design of graphics systems, graphical user interfaces, and certain kinds of simulation. To the surprise and gradual disillusionment of many, it has proven difficult to demonstrate significant benefits of OO outside those areas. [Raymond 17]

Object-oriented programming is verbose #

Another argument is that the object-oriented approach is significantly more complex and verbose than a traditional procedural approach, resulting in more code to write and maintain. It becomes increasingly difficult to form a mental picture of all the objects and their interactions. The flow of execution is much harder to follow compared to procedural code.

Adding objects to your code is like adding salt to a dish: use a little, and it's a savory seasoning; add too much and it utterly ruins the meal. Sometimes it's better to err on the side of simplicity, and I tend to favor the approach that results in less code, not more. [Atwood 18]
I've worked with developers who insisted that everything had to be generated through an object model, even if the object-oriented way required many times the amount of code. [Atwood 19]
I think that large object-oriented programs struggle with increasing complexity as you build this large object graph of mutable objects. You know, trying to understand and keep in your mind what will happen when you call a method and what will the side effects be. [Hickey 20]
The problems that came with Object Oriented programming is that these languages are really designed to help the developer manage the code… Now it is almost impossible to follow the execution flow. It is no longer possible to detect execution flow bugs with a simple code review. [Shelly 21]
Commenters [said] that, without OOP, code inevitably becomes spaghetti. Fear of the spaghetti monster is a healthy programmer phobia, but OOP doesn’t protect us from spaghetti — instead it merely obscures spaghetti through indirection. [Will 16]

Separating data and logic #

There's also the fact that data and operations are distinct and should remain separate. Programming is about verbs, not nouns; it's about doing things, not just manipulating abstract concepts.

Let data just be data. Let actions just be actions.
We shouldn't have to conceptualize all the things we want to do in code in terms of some kind of data. We shouldn't have to nounify all our verbs. [Will 22]
There’s no objective and open evidence that OOP is better than plain procedural programming. OOP is not natural for the human brain, our thought process is centered around “doing” things — go for a walk, talk to a friend, eat pizza. Our brains have evolved to do things, not to organize the world into complex hierarchies of abstract objects. [Suzdalnitski 23]
Objects bind functions and data structures together in indivisible units. I think this is a fundamental error since functions and data structures belong in totally different worlds. [Armstrong 24]
OO zealots are afraid of data. They prefer statements or constructors to initialized tables. They won't write table-driven tests. Why is this? What mindset makes a multilevel type hierarchy with layered abstractions better than searching a three-line table? I once heard someone say he felt his job was to remove all while loops from everyone's code, replacing them with object stuff. Wat? [Pike 25]

Object-oriented vs procedural approach #

But should we really contrast OOP with procedural code? Yes and no.
Yes, if you embrace an "all-object" approach; no, if you use objects intelligently alongside procedural logic. While objects are undeniably very practical in many situations, they can be used in a way that still lets you trace code execution step by step.

Consider the examples given by Yegor Bugayenko (Lab Director at Huawei) in his article “OOP Alternative to Utility Classes.” These examples are written in Java, but the principles remain the same.

He explains that, to determine the greater of two numbers, the procedural approach is to use a utility class containing a max() method, whereas the OOP approach would involve a Max object.

This would result in the following code:

// Procedural version
int max = NumberUtils.max(10, 5);
// Object version
int max = new Max(10, 5).intValue();

Is the second one more readable? Of course not. Moreover, the source code of the Max object is significantly more verbose than that of NumberUtils. And do we really want to create an object for every possible operation? Clearly not.

His second example demonstrates a function that reads a file, trims spaces from the beginning and end of each line, and outputs the result to another file.

Here is the procedural version:

void transform(File in, File out) {
    Collection<String> src = FileUtils.readLines(in, "UTF-8");
    Collection<String> dest = new ArrayList<>(src.size());
    for (String line : src) {
        dest.add(line.trim());
    }
    FileUtils.writeLines(out, dest, "UTF-8");
}

And here is the strictly object-oriented version:

void transform(File in, File out) {
    Collection<String> src = new Trimmed(
        new FileLines(new UnicodeFile(in))
    );
    Collection<String> dest = new FileLines(
        new UnicodeFile(out)
    );
    dest.addAll(src);
}

Take the time to read each function.
The first one can be read line by line, allowing you to easily understand the operations being performed. Even if you're unfamiliar with the called functions, you can infer what they're doing.
The second function, however, instantiates multiple objects, passing them as parameters to one another. It's challenging to follow the probable flow of execution just by reading this code.

It's clear that with strict object-oriented programming, the cognitive effort required to understand the code is significantly greater.
While this might align with the “Java Way of Life,” it’s definitely not compatible with the PHP Way of Life.

To conclude #

If you want to embrace an “all-object” approach, do Java, not PHP.

Typing: Strong in principle, adaptable in practice, but never strict #

PHP offers flexible type management. Early versions of PHP had weak typing, but it has long been possible to specify types for function and method parameters and return values. The strict_types option was later introduced, enforcing a form of strong typing.

Classic PHP typing #

Typing function parameters and return values ensures that the types received within a function match the expected types.

Consider the simple (and admittedly useless) example of a string concatenation function. Without parameter typing, we tend to write defensive code that checks input data types:

function concat($a, $b)
{
    if (!is_scalar($a) || !is_scalar($b)) {
        throw new \TypeError("Bad parameter.");
    }
    return $a . $b;
}

With parameter typing, however, you ensure that you receive data that's straightforward to handle:

function concat(string $a, string $b) : string
{
    return $a . $b;
}

As long as the concat() function is called with scalar parameters (boolean, integer, float, string), PHP automatically converts them, and the function receives strings. If you know that the calling code can only provide scalars, neither error handling nor explicit conversion is needed:

$result = concat($str1, $str2);
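
For instance (a minimal illustration of the default coercive mode), the function can receive an integer or a float and PHP converts it on the fly:

$total = 42;
echo concat('Total: ', $total); // prints "Total: 42" (the integer is coerced to a string)
echo concat('Rate: ', 3.5);     // prints "Rate: 3.5"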

If, on the other hand, the calling code lacks complete control over the data being sent to the function, you can simply handle the TypeError exception, which will be thrown if a conversion fails:

try {
    $result = concat($str1, $str2);
} catch (\TypeError $te) {
    // error management
}

Most of the time, exceptions are managed at a higher level to keep the code as simple as possible.
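
For example, a single try/catch at the top of a front controller is usually enough (a minimal sketch; the dispatch() function is hypothetical):

try {
    // dispatch() is a hypothetical routing function returning the page output.
    echo dispatch($_SERVER['REQUEST_URI']);
} catch (\TypeError $te) {
    http_response_code(400);
    echo 'Invalid input.';
} catch (\Throwable $t) {
    http_response_code(500);
    error_log($t->getMessage());
    echo 'Internal error.';
}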

Strict PHP typing #

If the strict_types directive is enabled, the call requires parameters of the correct type. If the calling code is known to contain only scalars, explicit conversions are required:

$result = concat((string)$str1, (string)$str2);

It's clear that, even for such a simple example, the code is more verbose. Reading it requires extra cognitive effort.

If the calling code may hold arbitrary data in the source variables, you must check their types and handle errors, in addition to making explicit conversions:

if (!is_scalar($str1) || !is_scalar($str2)) {
    // error management
}
$result = concat((string)$str1, (string)$str2);

Although activating the strict_types directive has become trendy, it leads to less maintainable code. Static code analysis tools can address nearly all cases where strict typing might be useful.

Interestingly, Gina Banyard (a member of the PHP core team) has proposed an RFC advocating for the removal of the strict_types directive:

The blind use of the strict typing mode has led to some unintended consequences:

  • Use of explicit casts to conform to type requirements even though they are less type safe
  • The perceived need for "strict" type casts
  • Manual parsing or type juggling which the engine can already perform

Due to the systematic use of the strict typing mode imposed by modern coding styles many users do not understand what the scope of the declare is nor what it does.
Many assume it makes PHP stricter in regards to type juggling when in reality it only affects the passing of scalar inputs to functions called in userland, the scalar return value of custom userland functions, and setting a value to a typed scalar property.
It does not prevent types being juggled with the use of operators, nor functions which are called by the engine, even if the function is defined in userland with strict typing enabled. Prime examples of this are engine handlers such as the error, exception, and shutdown handlers.

Too strict may lead to too lax
The perceived need for so called "strict" type casts is a clear symptom that explicit casts are used in places where they shouldn't be just to comply with the type declaration of a function's parameter. [Banyard 26]

Wrapping up #

Typing function parameters and return values has proven its value, making code more robust.

Typed languages are essential on teams with mixed experience levels. [Kiehl 73]

However, don't enable strict_types by default. It provides no real benefit.

Web Interfaces: Server-generated HTML is key to a fast, accessible Internet #

A brief overview #

PHP was originally designed as a template engine, allowing you to embed calls to C code in an HTML file. And to some extent, it can still be used like that.
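
Here is what that template style still looks like today (a minimal sketch; getArticles() is a hypothetical function returning an array of rows):

<ul>
<?php foreach (getArticles() as $article): ?>
    <li>
        <a href="<?= htmlspecialchars($article['url']) ?>">
            <?= htmlspecialchars($article['title']) ?>
        </a>
    </li>
<?php endforeach; ?>
</ul>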

Today, there are two ways to create a website: either by generating HTML on the server, or by providing an API consumed by a Javascript application.

Javascript frameworks have become very common, driven by the popularity of SPAs (Single Page Applications). For applications with complex functionality, they are undeniably useful.

The problem with JavaScript frameworks #

Unfortunately, with the tendency to indiscriminately adopt the “best practices” of large companies, a majority of websites now use JavaScript frameworks, even though these sites operate purely transactionally: pages with links and forms; clicking a link loads a new page, and submitting a form results in a redirect.
In such cases, browser-side page generation only increases loading times, complicates development, and hinders debugging.

For users, the impact of these frameworks is immediate: web pages become heavier, load more slowly, and are less accessible.

In the last few years, it seems web performance has fallen by the wayside. Indeed, with many sites now using frameworks like React and Vue, SPAs becoming commonplace and requests going into the hundreds, the average webpage is now bigger than ever, with 2–3MB pages being more common than ever. [Cheatmaster30 27]
Byte for byte, no resource affects page speed more than JavaScript. JavaScript affects network performance, CPU processing time, memory usage, and overall user experience. Inefficient scripts can slow down your website, making it less responsive and more frustrating for your users. [Zeman 28]
The relentless pursuit of “cutting-edge” JavaScript frameworks inadvertently contributed to a less accessible web, disproportionately impacting users.
A more reasonable approach would be prioritizing information access and accessibility over flashy interfaces. [Easy Laptop Finder 29]

Not to mention the dependencies on numerous Javascript libraries, which come with their own vulnerabilities or frantic update cycles.

I feel that not that many people are speaking about dependency management fatigue.
I was spending too much time dealing with dependency updates of mostly React packages. I would update my packages to their latest release, only to realize that their APIs had breaking changes that forced me to invest time refactoring my code.

If you are trying to build a product that requires as little upkeep as necessary after being shipped, I’ll stay as far as possible from the JS ecosystem. [Rodriguez 30]

[about left-pad package] What concerns me here is that so many packages and projects took on a dependency for a simple left padding string function, rather than their developers taking 2 minutes to write such a basic function themselves.

A fresh install of the Babel package includes 41,000 files.
A blank jspm/npm-based app template now starts with 28,000+ files

Have We Forgotten How To Program? [Haney 31]

So why continue to use these frameworks even when they're not justified? Some believe that many developers prioritize their desire to use "cutting-edge" technologies over the well-being of their users, or that big tech companies exert significant influence.

The issue is the developer and designer mindset (…) that web development and design should be ‘fun’. I fully believe a lot of developers and software engineers put their job satisfaction above their users or customers.

And that’s what led to all these questionable practices, as well as a lack of interest in what matters. Heavy build systems like Webpack and dozens upon dozens of pre-made components from NPM are brought in to ‘save developer time/effort’, without much thought to the extra kilobytes (or even megabytes) of JavaScript this adds to the finished product. [Cheatmaster30 27]

Why is this so complicated? It is inherently a complicated problem of doing the front-end code to a web app; there’s a lot of moving parts, many many things can go wrong and why not let the “experts” at Facebook or Google tell us what to do, right?

Then there’s more cynical reasons. I like the suggestion that front-end programmers have been criticized as being “front-end programmers, they’re just a bunch of noobs”, for so long that now front-end programmers are sort of overcompensating by making some super overengineering stuff, just to say “Hey I’m a computer scientist too”.

And there’s another cynical reason which is that companies like Facebook and Google, who are promoting these frameworks do have an incentive to get mindshares so that, you know, it’s better for their companies if people use the technologies that they made − sort of increases the reputation. [Holovaty 32]

Server-side generation and progressive enhancement #

The solution is to stick to what you do best: generating HTML pages server-side. Pair them with CSS, and add JavaScript only for progressive enhancements if needed.
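
Concretely, a purely transactional page needs nothing more than a form and a redirect (a minimal sketch; saveMessage() is a hypothetical storage function):

<?php
// contact.php: classic POST / redirect / GET cycle, no client-side framework needed.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    saveMessage($_POST['email'] ?? '', $_POST['message'] ?? ''); // hypothetical
    header('Location: /contact.php?sent=1');
    exit;
}
?>
<?php if (isset($_GET['sent'])): ?>
    <p>Thank you, your message has been sent.</p>
<?php endif; ?>
<form method="post" action="/contact.php">
    <input type="email" name="email" required>
    <textarea name="message" required></textarea>
    <button type="submit">Send</button>
</form>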

Best Practices for Optimizing JavaScript: Wherever possible, don’t use JavaScript
Generating markup on the server is a much better approach for performance than relying on client-side JavaScript to generate everything.

Reducing your dependence on JavaScript for the load of your page not only reduces the amount of JavaScript the browser must download, parse, compile, and execute, but it also lets the browser take advantage of its own internal optimizations to get maximum performance. [Zeman 28]

Code that runs on the server can be fully costed.
Code that runs on the client, by contrast, is running on The Devil's Computer.

As a direct consequence, an unreasonably effective strategy is to send less code. In practice, this means favouring HTML and CSS over JavaScript, as they degrade gracefully and feature higher compression ratios.
The only thing that makes web experiences good is caring about the user experience. [Russell 33]

Progressive enhancement is a way of building websites and applications based on the idea that you should make your page work with HTML first. Only after this can you add anything else like Cascading Style Sheets (CSS) and JavaScript.

If you believe your service can only be built using JavaScript, you should think about using simpler solutions that are built using HTML and CSS and will meet user needs.

If you use JavaScript, it should only be used to enhance the HTML and CSS so users can still use the service if the JavaScript fails.

Do not build your service as a single-page application (SPA). [UK Government 34]

Progressive enhancement is a design and development principle where we build in layers which automatically turn themselves on based on the browser’s capabilities. Enhancement layers are treated as off by default, resulting in a solid baseline experience that is designed to work for everyone.

We do this with a declarative approach which is already baked into how the browser deals with HTML and CSS. For JavaScript — which is imperative — we only use it as an experience enhancer, rather than a requirement, which means only loading when the core elements of the page — HTML and CSS — are already providing a great user experience. [Bell 35]

If you really need to add Javascript to your pages, avoid using a full-fledged framework. Your site will remain lighter, more responsive, and more accessible by using plain Javascript or a specialized library like Vik, Turbo, htmx, or even trusty old jQuery. But you should never need to create a Javascript 'controller'.

React served us well, but things that were once easy became harder. It hit us when adding a single input to a form resulted in a 700-line pull request. As a result of switching from React to StimulusJS, we deleted about 60% of our JavaScript. Interestingly, the amount of JavaScript in our application has remained relatively flat since then.

StimulusJS lets you consolidate more application logic and state to the backend. Sure, you might not get the reactivity of client-side state but client-side state is a lie. StimulusJS wants you to write as little Javascript as possible.
Lines of code are a liability not an asset.

We’re now on a tech stack that we’re not fighting against. [Sutton 36]

I think the “fat client” era JS-heavy frontends is on its way out. The hype around edge applications is misplaced and unnecessary for building many different flavors of successful businesses. Many interactions are not possible without JavaScript, but that doesn’t mean we should look to write more than we have to.

And best of all, we don’t need to carry the mental overhead of state management on the frontend to enable this experience. Everything is just a page of HTML with some JS sprinkles, so there’s no state to maintain between route changes. There’s no complicated state management on the client. [Sutton 37]

A simple front-end for a simple build #

Taking it further, simplifying front-end development also simplifies your build process. Instead of going through complex stages, you can ideally skip pre-processing entirely before deploying your site. A faster dev-deploy-test-debug cycle means greater efficiency.

Transpiling with Babel ushered in the era of horrendously complicated transpiling pipelines and tooling. Writing the JavaScript of the future wasn't free. The price was an ever expanding web of complexity.
I no longer believe that this bargain is worth it for most new applications. [DHH 38]

You can't get faster than No Build

I'm working without using any form of real build steps on the front-end. It's all just so... simple. It's also fast. Really fast. Infinitely fast.
For the first time in probably 15 years, the state of the art is no longer finding more sophisticated ways to build JavaScript or CSS. It's not to build at all. [DHH 39]

Databases: ORMs and NoSQL look like your friends, but SQL truly is #

In a web application, data can be accessed in several ways: via APIs, relational databases (RDBMS), non-relational databases (NoSQL), files, and more.

NoSQL Databases #

There are many different types of NoSQL databases, each built on distinct mechanisms. Some excel at highly specific use cases.

Document-oriented databases have gained popularity, but it's important not to misinterpret their capabilities. While they promise to index all stored documents, attempting complex queries involving multiple parameters—similar to relational queries with joins—often leads to disappointing performance.

MongoDB joins are very brittle (when things change, application programs must be extensively recoded), and often MongoDB offers very poor performance.

Joins in MongoDB generally execute with poor performance. MongoDB does not have a query optimizer and the execution strategy is hard-coded into the application. Whenever merge-sort or hash-join is the best choice, Mongo performance will suffer. [Stonebraker 40]

MongoDB excels in write performance, making it an excellent choice for applications that require high-speed data insertion. On the other hand, MySQL leads in read performance, particularly for queries that can take advantage of efficient indexing.
If write performance is critical, MongoDB is the better option. However, if your application demands fast reads on indexed fields, MySQL would likely serve you better. [Verma 41]

DynamoDB is the worst possible choice for general application development. [Kiehl 73]

In general, NoSQL databases are accessed through specific APIs, which are neither more efficient nor more readable than SQL queries.

Notice that the [MongoDB] code is a great deal more complex than the Postgres code, because MongoDB doesn’t have relational join notions and is in a lower level language than SQL. Also, it requires the programmer to algorithmically construct a query plan for the join. [Stonebraker 40]
Awkward it may be, but SQL is a lot more succinct and readable than multiple lines of API calls. In fact, when trying to explain how to use its API, the MongoDB documentation lists the equivalent SQL queries. That's a pretty clear vote for the usability of SQL. [Voss 42]

When starting a project, chances are a relational database is the right choice.
If your needs are diverse, use each system where it excels: MySQL/PostgreSQL for relational data, Redis for key-value pairs, ElasticSearch for full-text indexing, MongoDB/CouchDB/CouchBase for schema-less documents, a network file system or cloud storage for binary files, and so on.

The relational model is pretty magical. Set up a model of your entities, pour data into it, and get answers.
If you don't know all the questions you might need to ask about your data, the safest thing to do is put them in an RDBMS. And when you first start a project, you almost never know all the questions you're going to need to ask. So my advice: always use an RDBMS. Just don't only use an RDBMS. [Voss 42]

ORMs #

When it comes to accessing relational databases, SQL queries are usually contrasted with the use of ORMs (Object-Relational Mapping).

ORMs offer:

While ORMs are generally seen as a modern and efficient design pattern, some consider them an anti-pattern.

I want to be very, very clear about this: ORM is a stupid idea.
The birth of ORM lies in the fact that SQL is ugly and intimidating. Let's just throw an abstraction layer on top of this baby and forget there's even an RDBMS down there. This is obviously silly. [Voss 42]
A defender of ORM will say that not everyone needs to do complicated joins, that ORM is an "80/20" solution, where 80% of users need only 20% of the features of SQL, and that ORM can handle those. All I can say is that in my fifteen years of developing database-backed web applications that has not been true for me. Only at the very beginning of a project can you get away with no joins or naive joins. [Voss 43]
ORM is a terrible anti-pattern that violates all principles of object-oriented programming. There is no excuse for ORM existence in any application, be it a small web app or an enterprise-size system with thousands of tables and CRUD manipulations on them.
I’m claiming that the entire idea behind ORM is wrong. [Bugayenko 44]
Any situation where complex functionality is wrapped in another layer runs the risk of increasing the overall complexity when the wrapping layer is itself complicated. This often comes along with leaky abstractions - wherein the wrapping layer can't do a perfect job wrapping the underlying functionality, and forces programmers to fight with both layers simultaneously. [Bendersky 45]
Personally, I think the only workable solution to the ORM problem is to pick one or the other: either abandon relational databases, or abandon objects. If you take the O or the R out of the equation, you no longer have a mapping problem.
Both approaches are certainly valid. I tend to err on the side of the database-as-model camp, because I think objects are overrated. [Atwood 46]

[ORMs] should be avoided, as they often introduce whole new languages, paradigms and systems, but provide no real benefit in either simple or complex mappings. In a simple one to one Object Relational mapping, the task is simple and no tool is required. In complex mappings, the ORM tool adds more complexity and manual intervention is often required, negating the usefulness of the automated ORM tool.

SQL code should be visible in your model objects. Programmers need to understand what is going on when objects are being retrieved or data representing objects is manipulated in the database. [Maffey 47]

ORMs are the devil in all languages and all implementations. Just write the damn SQL. [Kiehl 73]

Even the co-creator of Propel, one of the most prominent PHP ORMs of the 2010s, says:

I personally don't use ORMs anymore. [Zaninotto 48]

Writing queries exclusively in PHP might give the impression that SQL is an unnecessary language that should be hidden as much as possible. And yet, we constantly emphasize that every developer must know multiple languages — it's essential.

Learning more than one language is an excellent idea -- not only does that give you that much more flexibility in job hunting &c, but it simply broadens your mind, your vision of what programming is all about. [Martelli 49]

Why should SQL be the only language deemed unworthy of consideration? It's a powerful language that has proven its stability over the years and its effectiveness in handling relational data.

No other computer language has remained as popular and consistently extended its reach for 50 years. As programmers we constantly learn new languages, variations and concepts to keep up. Nothing stays still. SQL is a beautiful, refreshing exception to this mayhem. Learning SQL is likely to be the only technical skill you can rely on to remain useful, current and portable for a long time to come.
SQL is powerful. As a mostly declarative syntax, what can be achieved in 3 simple lines of SQL may take 20 - 30 lines of procedural language. [Maffey 47]
SQL is almost always the best way to not re-learn something new from the beginning that will inevitably end up slowing you down or simply not working at all in the long run. [Righetti 50]
It's very hard to beat decades of RDBMS research and improvements. [Kiehl 73]

The way ORMs map data to objects is appealing when aiming for a 'full object' approach. However, as we've seen earlier, this approach is not inherently desirable.

Above all, by working exclusively with objects, developers lose sight of the underlying database reality, leading to three negative effects:

ORMs have significant performance limitations. While they work well for simple use cases, they struggle to optimize query generation when dealing with complexity, often leading to multiple queries where a single one would suffice.
The typical workaround for this issue is to write queries in a SQL-like language, which defeats the supposed advantage of not needing to know SQL syntax or the underlying database schema.

This leads naturally to another problem of ORM: inefficiency. When you fetch an object, which of its properties (columns in the table) do you need? ORM can't know, so it gets all of them (or it requires you to say, breaking the abstraction). Initially this is not a problem, but when you are fetching a thousand records at a time, fetching 30 columns when you only need 3 becomes a pernicious source of inefficiency. Many ORM layers are also notably bad at deducing joins, and will fall back to dozens of individual queries for related objects. As I mentioned earlier, many ORM layers explicitly state that efficiency is being sacrificed, and some provide a mechanism to tune troublesome queries. ORM's lack of context-sensitivity means that it cannot consolidate queries, and must fall back on caching and other mechanisms to attempt to compensate. [Voss 43]

ORMs encourage poor practices because of how easy it is to rely on host language logic to combine data.
ORMs are not as efficient as raw SQL queries. They are often a bit more inefficient, and in some choice cases, very inefficient.

The first issue is that ORMs sometimes incur massive computational overhead when converting queries into objects.
The second issue is that ORMs sometimes make multiple roundtrips to a database by looping through a one-to-many or many-to-many relationship. This is known as the N+1 problem (1 original query + N subqueries).

The biggest issue with ORMs is visibility. Because ORMs are effectively query writers, they aren’t the ultimate error dispatcher outside of obvious scenarios (such as incorrect primitive types). Rather, ORMs need to digest the returned SQL error and translate it to the user. [Chuong 51]

Moreover, ORMs do not take advantage of fundamental SQL database features such as views, stored procedures, or triggers, which can help improve database performance and data consistency.

Additionally, using an ORM introduces heterogeneity in data access. Entities are manipulated in a way that is specific to each ORM, with significant differences depending on whether it follows the Active Record or Data Mapper pattern. If the code also needs to access other data sources (such as a NoSQL database or an API), it will do so with entirely different logic and syntax.

As for the argument that DBMS abstraction makes switching databases easier, this is misleading. ORMs rely on the lowest common denominator. Changing databases only makes sense if you need to leverage specific features provided by a DBMS that others do not support. However, in such cases, the ORM is unlikely to support these features natively — or at least not fully.

In practice, it is extremely rare for a company to switch databases while still operating within a functional scope where an ORM would be sufficient.

In all my programming jobs, I have never come across a company porting from one SQL database to another. [Maffey 47]

If a platform ever needs to switch from a relational database to a NoSQL database or an API-based data source, all code relying on the ORM must be rewritten.

A simpler approach to data interfacing can provide more unified access to different types of data sources.
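
In PHP, that can be as simple as PDO with prepared statements; the SQL stays visible and the results come back as plain arrays (a minimal sketch; the connection settings, tables, and columns are invented):

$db = new \PDO('mysql:host=localhost;dbname=app', 'user', 'password', [
    \PDO::ATTR_ERRMODE => \PDO::ERRMODE_EXCEPTION,
]);

$date = '2024-01-01';
$stmt = $db->prepare(
    'SELECT u.id, u.name, COUNT(o.id) AS nbOrders
     FROM Users u
          LEFT JOIN Orders o ON (o.userId = u.id)
     WHERE u.created_at > :date
     GROUP BY u.id, u.name'
);
$stmt->execute(['date' => $date]);
$users = $stmt->fetchAll(\PDO::FETCH_ASSOC); // plain associative arrays, ready to use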

Query Builders #

In response to the challenges posed by ORMs, some argue that query builders offer a more elegant solution. They allow queries to be built entirely in code.

Here’s an example of using a query builder:

DB::table('Users')
    ->select('Groups.id')
    ->join('Groups', 'Groups.userId', 'Users.id')
    ->where('Users.created_at', '>', $date1)
    ->where('Groups.created_at', '<', $date2)
    ->get();

And the equivalent call without a query builder:

DB::select(
    'SELECT Groups.id
     FROM Users
          INNER JOIN Groups ON (Groups.userId = Users.id)
     WHERE Users.created_at > :date1
       AND Groups.created_at < :date2',
    ['date1' => $date1, 'date2' => $date2]
);

As you can see, the query builder is no more readable or comprehensible than the SQL query and, at best, merely tries to mimic SQL semantics in code.

Its abstraction is extremely limited—you still need to know SQL and have a clear idea of the final query to use a query builder effectively. And for more complex queries, it is unlikely to provide as much flexibility as native SQL.

I am now going back writing good old SQL queries and I’m surprised it took me that long. "Why?" you may wonder − well, mostly for the following reasons:

  • You don’t get 100% of the SQL expressiveness
  • It just does not work for complex queries, or loses its purpose
[Righetti 52]

Rethinking the Model with Data-Oriented Design #

The Data-Oriented Design (DOD) paradigm emerged in C++ video game development. Its goal was to move away from the traditional object-oriented model and adopt more efficient data structures, ensuring better processor cache utilization.

Its core principles focus on centering development around data, making it the most critical element of the code, and processing it through a clear and explicit execution flow.

The purpose of all programs, and all parts of those programs, is to transform data from one form to another.
If you don’t understand the data, you don’t understand the problem.
If you don’t understand the cost of solving the problem, you don’t understand the problem.
Everything is a data problem. Including usability, maintenance, debug-ability, etc.

The Three Big Lies:

  1. Software is a platform by itself.
  2. The code should be designed around mental models of the world.
  3. Code is more important than data.
[Acton 53]

Data-oriented design shifts the perspective of programming from objects to the data itself: The type of the data, how it is laid out in memory, and how it will be read and processed.

Programming, by definition, is about transforming data: It’s the act of creating a sequence of machine instructions describing how to process the input data and create some specific output data. [Llopis 54]

DOD programming prioritizes efficient data organization, performance optimization, and scalability. Unlike the traditional OOP paradigm that focuses primarily on objects and their behaviors, DOD programming places a strong emphasis on data and its manipulation. By centering on data, DOD programming offers a unique perspective that aligns perfectly with the data-intensive nature of modern projects.

The main principles are:

  • Data-centric Design: Applications are structured primarily around the data domain model, relationships, and access patterns rather than focusing on abstractions or object-oriented hierarchies.
  • Explicit Data Flow: The flow of data through the system is made obvious in code through shared data structures and stateless functions. Data moves between components rather than deep method calls.
  • Loose Coupling: By favoring data-driven decompositions instead of entity-oriented designs, DOD code naturally results in loosely coupled and independent functions/components that are easier to test, reason about, and parallelize.
[Dang 55]

This is fundamentally different from a strictly object-oriented approach. The code is stateless and serves only to process and transform data when needed.

Because data-oriented design puts data first and foremost, we can architect our whole program around the ideal data format. We won’t always be able to make it exactly ideal, but it’s the primary goal to keep in mind. Once we achieve that, most of the problems tend to melt away.

Data-oriented design is beneficial to both performance and ease of development. When you write code specifically to transform data, you end up with small functions, with very few dependencies on other parts of the code. The codebase ends up being very flat, with lots of leaf functions without many dependencies. This level of modularity and lack of dependences makes understanding, replacing, and updating the code much easier. [Llopis 54]

Traditional OOP approaches tend to model real-world concepts or abstractions as “objects” that encapsulate both data and behavior. In contrast, DOD separates data structures from business logic and places the primary emphasis on organizing and processing data in a cache-friendly manner. This different mindset leads DOD programs to have:

  • Independent data structures rather than bundled object types.
  • Stateless, isolated functions rather than methods.
  • Pass-by-value (immutable) data passing instead of object references.
  • Avoidance of shared mutable state between components.
[Dang 55]
  • Separates data from logic
    • Data is regarded as information that has to be transformed
  • The logic embraces the data
    • Does not try to hide it
    • Leads to functions that work on arrays
  • Reorganizes data according to its usage
    • If we aren’t going to use a piece of information, why pack it together?
  • Avoids “hidden state”
[Nikolov 56]

Ultimately, this model can be applied beyond video games, particularly in web development.
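
Transposed to PHP (a minimal sketch; the data shape is invented), it comes down to stateless functions that take arrays in and return arrays out, with the flow of data visible at the call site:

// Plain data in, plain data out: no hidden state, no object graph.
function keepActive(array $users): array
{
    return array_values(array_filter($users, fn($u) => $u['active']));
}

function toListingRows(array $users): array
{
    return array_map(
        fn($u) => ['name' => $u['name'], 'email' => strtolower($u['email'])],
        $users
    );
}

$users = [
    ['name' => 'Alice', 'email' => 'Alice@Example.com', 'active' => true],
    ['name' => 'Bob',   'email' => 'Bob@Example.com',   'active' => false],
];

// The transformation pipeline reads top to bottom.
$rows = toListingRows(keepActive($users));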

Remember that the job of your model layer is not to represent objects but to answer questions. Provide an API that answers the questions your application has, as simply and efficiently as possible. Sometimes these answers will be painfully specific, in a way that seems "wrong" to even a seasoned OO developer, but with experience you will get better at finding points of commonality that allow you to refactor multiple query methods into one. [Voss 43]

Frameworks: Good servants but terrible masters #

So, what exactly is a framework? #

Wikipedia defines frameworks as follows:

Frameworks [are] widely used for their ability to enhance developer productivity, offer structured patterns for large-scale applications, simplify handling edge cases, and provide tools for performance optimization. [Wikipedia 57]

Mozilla offers a more detailed definition:

Server-side web frameworks (a.k.a. "web application frameworks") are software frameworks that make it easier to write, maintain and scale web applications. They provide tools and libraries that simplify common web development tasks, including routing URLs to appropriate handlers, interacting with databases, supporting sessions and user authorization, formatting output (e.g. HTML, JSON, XML), and improving security against web attacks. [Mozilla 58]

To explain the difference between a library and a framework, it's often said that business code calls libraries, whereas a framework calls the business code.

A Brief History #

Frameworks have a long history, which can be summed up in three main phases.

The first frameworks emerged in the second half of the 90s (ColdFusion in 1995, WebObjects in 1996, WebLogic in 1997, WebSphere in 1998, Java EE in 1999). Primarily targeting large organizations such as banks and early e-commerce sites (a WebObjects license cost $50,000 before 2000), they were defined by complex infrastructures and widespread use of the Java language.
The Java EE ecosystem later expanded — and grew more complex — with components such as Tomcat, Struts, Hibernate, JBoss, and Spring.

In response, simpler frameworks emerged in the 2000s, built on scripting languages with looser typing, allowing for faster development (Ruby on Rails in 2004; CakePHP, Django, then Symfony in 2005; CodeIgniter and Zend Framework in 2006; Temma in 2007).
At the time, the Java ecosystem was seen as costly for businesses due to its complexity, which extended development times. This was driven by companies that had a vested interest in maintaining such an ecosystem—whether to sell solutions, training, or development time.

In the 2010s, the PHP community sought to establish its credibility by “professionalizing” its frameworks, increasingly drawing inspiration from the Java ecosystem.
This shift had several effects:

By competing with Java technologies, the PHP ecosystem has left the door open for Python and Node.js to dominate lightweight, fast development—making them the new “cool” platforms.

In the 2000s, Java was criticized for being backed by corporations, with its complexity and constant evolution allowing them to sell services and training.
Notably, many of the leading projects in the PHP world are also backed by companies with their own interests.

Should frameworks be reconsidered? #

As with all best practices in IT, it's worth asking whether a recommendation from a large company truly applies to you. A practice that works well elsewhere might create more problems than it solves. If a framework is designed for teams of several hundred developers, is it really the right choice for your team of 12?

Different problems require different solutions.
Solving problems you don’t have creates more problems you definitely do. [Acton 53]
It's crucial to remember that framework creators have their own priorities. They’re solving THEIR problems, not yours. Big Tech giants have led the way, building and open-sourcing tools that the rest of us now get to play with. But these tools weren’t built to be universally applicable. They were created to solve specific problems that most companies will never encounter. [Bobrov 60]

While frameworks have become an integral part of the PHP world, some voices are starting to challenge the dogma imposed by large, complex frameworks. Whether it's the rigidity of development, the steep learning curve, or the relentless update cycle forcing constant code changes, reconsidering established habits seems like a healthy approach.

Frameworks provide pre-designed templates and tools that simplify many development tasks. Used by startups and tech giants alike, frameworks are praised in the tech industry for their ability to streamline workflows and standardize solutions.
However, with the increasing prevalence of frameworks, there are growing concerns about their overuse and potential drawbacks for developers and software products. Are frameworks a golden solution or silently limiting creativity and technical growth?

A developer working exclusively with frameworks might need a deeper understanding of programming languages and core principles to solve more complex problems.
Frameworks are designed to follow strict conventions and patterns. While this is excellent for reducing complexity, it can limit creativity. The framework's rules often restrict developers, narrowing their innovation ability.
The wide variety of frameworks available often challenges developers trying to keep up with the latest tools, leaving programmers overwhelmed by the constant need to adapt. [.NET Expert blog 59]

Are frameworks becoming... overhyped? There’s a whole ecosystem pushing developers into these frameworks, only for them to realize later that, once they scale, they’re locked into expensive platforms. It feels a bit like a trap, doesn’t it?

Lately, there’s been a bit of a rebellion brewing—people are getting tired of frameworks. Developers are fed up with the constant churn. You’ve probably seen it: every time a major update drops, you have to rewrite significant chunks of your codebase just to stay relevant. And don’t get me started on the endless cycle of breaking changes.

This frustration has given rise to a revival of simpler, more stable stacks among developers who prioritize getting things done over staying on the bleeding edge of tech. Yeah, it might feel a bit “old-school,” but it’s far from obsolete. With a simpler stack, you can iterate fast and ship even faster. Sometimes you don’t need all the fancy stuff. Sometimes, sticking to what works can save you a whole lot of headaches. [Bobrov 60]

You must avoid forcefully using frameworks as they will later cause several problems instead of providing solutions.

If you heavily rely on frameworks, you might lose the opportunity to learn the underlying language associated with the framework. With a framework, you interact only with the higher levels of the system and have fewer chances to solve complex problems.

Any framework has many pre-built tools and packages to cater to varying user requirements. So, it includes a list of functions, features, and code that will be useless and irrelevant to your project. When you develop simple web applications, the extra files and unnecessary code hurt the overall speed and performance. [TechAffinity blog 61]

Over-reliance on libraries and frameworks can lead to a lack of understanding of the underlying code and principles. This can result in a lack of flexibility and difficulty in customizing the code to specific project requirements.

Using too many libraries and frameworks can increase the complexity of the codebase, making it harder to maintain and debug over time. It is debatable, but in some cases, using libraries and frameworks can result in slower performance compared to custom-built solutions that are optimized for specific use cases.

Libraries and frameworks can become outdated over time, which can lead to compatibility issues and the need to rewrite significant portions of code to keep up with new technology advancements. [Mishra 62]

Each framework enforces its own way of doing things. Maintaining a project requires more than just a PHP developer—you need a Laravel, Symfony, or CodeIgniter specialist. Even an experienced developer may not be familiar with all the features directly or indirectly tied to a framework, making it difficult to understand a codebase.

Frameworks introduce abstractions that obscure the programming language's native features. A first feature saves time on simple development tasks; then a second feature builds on the first, and so on, until the entire system turns into a complex stack of dependencies.
At some point, you realize that the initial feature is no longer useful because you've outgrown its optimal use case. Yet, you're forced to keep the entire technology stack, as it has seeped into every aspect of development, creating dependencies throughout the codebase.

Actually, my old CodeIgniter projects are now so glued to the framework that it became really hard to use something new or to add simple stuff like… Tests! So if I really want or need to move away, I’ll have some hard times ahead. And the problem is not just moving away. If the default template engine of the framework is not suitable for the project anymore, it will not be easy to get rid of it.

When you choose a full-stack framework, you are kind of choosing to couple your project to the framework. Ok, you can build a project decoupled from the framework, but this is not the usual way to go and is surely not easy. [Junior 63]

What about microframeworks? #

Microframeworks may seem like an appealing alternative to large frameworks. While their extreme simplicity makes them easy to grasp, they often struggle to withstand the real-world test of actual projects. Mixing routes with code scales poorly, and adding essential features can quickly become excessively complicated.

Well, for really, really small apps, with no database connection, no Apis consuming and basically no need for more than a few lines of code for each route (or prototyping with mock data), micro-frameworks will be really easier to use and will not require tons of config files and other services running like some full-stack ones require. [Junior 63]
Because of the rather declarative nature of microframeworks, and the typically 1:1 mapping of a route to a controller, microframeworks do not tend to promote code re-use. Additionally, this extends to how microframework applications are organized: usually, there are no clear guidelines on how to organize routes and controllers, much less separate them into multiple files. This can lead to maintenance issues as the application grows, as well as logistical issues whenever you need to add new routes and controllers. [O’Phinney 64]
I really like the syntax and style of Labstack’s Echo framework for Golang. But my experience changed when adding a database to my app. The simplicity fell apart in that putting a global variable makes it hard to test. Without state, I don’t have this problem. There are many microframeworks where this happens. You can almost predict it happening if you look at the table of contents for the documentation and see that they have no database story. [Dillon 65]

There are many microframeworks in every language but it started with Sinatra. The trade-offs of Sinatra are not obvious in the Hello World. To me, the simplicity trade-off is enticing to juniors when they don’t know what the trade-off is. The API surface is small which is easier to learn. But confusing things start to happen after that. So many other questions and concerns would appear after this most basic flow.

So, as the project grew or if you simply kept working with it you would have to solve, research or enhance Sinatra yourself. Many times, myself and others would copy whole files out of Rails default projects. Ideas like where to put configs, test fixtures or the concept of dev/test/prod. [Dillon 66]

The long-term viability of microframeworks is highly questionable. In the PHP world, Silex and Lumen have been discontinued in favor of their full-featured counterparts.

The microframeworks might churn a lot more because there is less to throw away or because they are easy to invent? It’s much easier to create a hobby microframework project that is small in scope. We can almost skip microframework attention because they will be the first to go. [Dillon 66]

The Right Approach #

Today, going back to building websites without frameworks seems nearly impossible. When used properly, they help reduce development time.

However, a framework should be chosen based on actual needs, not trends. The key is to strike the right balance—selecting a tool that provides structure while still allowing for custom code.

A framework should be an aid, not a constraint.

While frameworks are excellent for speeding up development, sometimes custom code is the better solution. Use frameworks for their strengths (like templating, routing, and basic functionality), but be bold in stepping outside their limitations. Combining framework expertise with custom solutions will allow you to meet unique requirements effectively. [.NET Expert blog 59]
Remember, frameworks should serve YOUR architecture, not dictate it. With careful planning and strategic abstraction, you can reap the benefits of frameworks without getting trapped in long-term dependencies. The trick is staying in control. So next time you’re about to dive into a framework, take a step back and remind yourself: you call the shots here. [Bobrov 60]
If you decide that relying too heavily on libraries and frameworks is problematic, consider alternatives such as building custom solutions, using simple tools, focusing on mastering underlying technologies, or using a combination of pre-built components and custom solutions. [Mishra 62]
The problem here is that many beginners get used to one framework and tend to use it for everything. If you are a beginner, I recommend you to build stuff, a lot of stuff, with no framework at all. Build a lot of toy projects, then try to use some frameworks, more than one, and then you will be able to make better choices. [Junior 63]

Automated tests: Unit, integration, functional − Find your balance #

Quick Definitions #

In software development, automated testing refers to code that tests other code. There are three main types of automated tests:

  1. Unit tests, which verify a single function or class in isolation;
  2. Integration tests, which verify that several components work together correctly;
  3. Functional (or end-to-end) tests, which exercise the application as a whole, the way a user would.
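
As a quick illustration, here is what a minimal unit test could look like with PHPUnit; the PriceCalculator class and its applyDiscount() method are hypothetical, used only for the example:

use PHPUnit\Framework\TestCase;

final class PriceCalculatorTest extends TestCase
{
    public function testAppliesPercentageDiscount(): void
    {
        // Hypothetical class under test
        $calculator = new PriceCalculator();

        // 10% off 100.0 should give 90.0
        $this->assertSame(90.0, $calculator->applyDiscount(100.0, 10));
    }
}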

Automated testing is a valuable asset for agile development. It allows you to modify the source code without worrying about breaking dependencies. Simply run the tests to catch any regressions early and fix them quickly.

The purpose of testing is to create information about your program. (Testing does not increase quality; programming and design do. Testing just provides the insights that the team lacked to do a correct design and implementation.) [Coplien 67]

How It Works in Real Life #

Unit tests are generally considered faster to write and execute than functional tests [Modus Create blog 68]. As a result, it is recommended to always write unit tests for every part of your code [Twilio blog 69]. The corollary is that you are supposedly better off writing significantly more unit tests than functional tests [Modus Create blog 68].

This holds true in theory, but practice often tells a different story.

When unit testing an object, it's often possible to write an extensive number of tests to verify its behavior in every conceivable scenario. The challenge is knowing when to stop—where to draw the line between thorough testing and excessive coverage.

Test coverage is a commonly discussed metric, but it's difficult to define accurately. Some argue that testing is pointless if less than 90% of the code is covered, but we all know that some testing is always better than none. Even a 100% coverage rate doesn't necessarily mean much; it could simply indicate that all methods are tested in their nominal cases—but are their edge cases and failure scenarios covered as well?

Few developers admit that they do only random or partial testing and many will tell you that they do complete testing for some assumed vision of complete. Such visions include notions such as: "Every line of code has been reached," which, from the perspective of theory of computation, is pure nonsense in terms of knowing whether the code does what it should.

Programmers have a tacit belief that they can think more clearly (or guess better) when writing tests than when writing code, or that somehow there is more information in a test than in code. That is just formal nonsense. [Coplien 67]

Code coverage has absolutely nothing to do with code quality (in many cases, it's inversely proportional). [Kiehl 73]

Mocks are widely used, as they are often essential for unit testing. However, these fake objects can make tests more time-consuming to write, harder to maintain, and less reliable.

When writing tests for your code, it can seem easy to ignore your code's dependencies by mocking them out. However, not using mocks can sometimes result in tests that are simpler and more useful.

Overusing mocks can cause several problems: Tests can be harder to understand. Tests can be harder to maintain. Tests can provide less assurance that your code is working properly.
If you're trying to read a test that uses mocks and find yourself mentally stepping through the code being tested in order to understand the test, then you're probably overusing mocks. [Trenk 70]

On the other hand, a single integration or functional test can validate multiple layers of code. If such a test fails, you can focus on the objects involved in the execution path; with well-structured logs, identifying and fixing the error becomes much faster.
This may seem less rigorous than unit-testing every object individually. However, it is often more efficient, as the time saved by not writing every possible unit test usually outweighs the time spent pinpointing the exact source of an error.

About Test-Driven Development (TDD) #

Test-Driven Development (TDD) fully integrates unit testing into the development process by writing tests before producing the actual code. This approach makes tests less of a burden compared to writing them at the end of development. Additionally, TDD contributes to the technical specification of a project: by defining acceptance criteria for an object or method, we precisely determine its expected behavior—just as a technical specification would.

One of the hidden benefits of TDD is that having such a structured, well-defined technical specification shortens the development phase, as developers have a clear direction. There is less trial and error, leading to a more efficient workflow.

However, TDD is not a universal solution. It cannot be applied to existing codebases and is limited to unit tests, which—as discussed earlier—are not the only type of tests needed.
Moreover, TDD is not suitable for all situations, particularly when there is no sufficiently precise specification and development requires an exploratory approach.

Test-first fundamentalism is like abstinence-only sex ed: An unrealistic, ineffective morality campaign for self-loathing and shaming. The current fanatical TDD experience leads to a primary focus on the unit tests. I don't think that's healthy. Test-first units leads to an overly complex web of intermediary objects and indirection. [DHH 71]

What Strategy Should You Apply? #

In the end, as always, the key is to adopt a smart, pragmatic strategy that leverages all available tools without rigid dogma:

Automated tests must be continuously maintained as the code evolves. This maintenance comes at a cost.

Internalizing the benefits of testing is only the first step to enlightenment. Knowing what not to test is the harder part of the lesson.

Tests aren’t free. Every line of code you write has a cost. It takes time to write it, it takes time to update it, and it takes time to read and understand it. Thus it follows that the benefit derived must be greater than the cost to make it. In the case of over-testing, that’s by definition not the case. [DHH 72]

Low-Risk Tests Have Low (even potentially negative) Payoff. [Coplien 67]

In a codebase, not all objects evolve at the same pace. Some change very little, making their unit tests a worthwhile investment. These are often the most stable layers of the code—the ones where detecting regressions as quickly as possible is most critical.
For rapidly evolving objects, maintaining 100% test coverage can become prohibitively expensive. This often applies to objects closest to the user interface, for example. In such cases, unit testing should be kept to a minimum, supplemented by integration tests that help catch errors while keeping maintenance costs lower.

And let's not forget that, in an ideal world, automated tests should be complemented by manual testing, conducted by people other than the developers, to catch bugs that might otherwise go unnoticed.

Micro-services: Highly unlikely you’ll need them #

The Purpose of Microservice Architectures #

Microservice architectures emerged as a response to two key challenges: scalability and large-scale collaborative development.

The solution was to break applications down into smaller, highly autonomous components. This allowed teams to develop each part independently and allocate server resources to specific services without impacting others.

It works very well—in fact, in some cases, it's absolutely essential.

Micro/services oriented architecture is a prescription to break down an application into many smaller parts, run each of these parts as their own application, and then let the constellation solve the grand problem you really care about.

This is a great pattern. If you’re Amazon or Google or any other software organization with thousands of developers, it’s a wonderful way to parallelize opportunities for improvement. When you reach a certain scale, there simply is no other reasonable way to make coordination of effort happen. [DHH 74]

The Drawbacks of Microservices #

The problem is that—like always—some people assume that if a practice works well under certain conditions, it must be beneficial in all cases. As a result, monolithic development is often dismissed as outdated.

This is a misconception. Breaking an architecture into multiple independent components makes little sense for a small application:

The problem with prematurely turning your application into a range of services is chiefly that it violates the #1 rule of distributed computing: Don’t distribute your computing! At least if you can in any way avoid it.

Every time you extract a collaboration between objects to a collaboration between systems, you’re accepting a world of hurt with a myriad of liabilities and failure states. What to do when services are down, how to migrate in concert, and all the pain of running many services in the first place. [DHH 74]

The fallacies of distributed computing have been well understood since the 1990s [Wikipedia 75]:

  1. The network is reliable;
  2. Latency is zero;
  3. Bandwidth is infinite;
  4. The network is secure;
  5. Topology doesn't change;
  6. There is one administrator;
  7. Transport cost is zero;
  8. The network is homogeneous.

Unless you have a very specific need for a microservices architecture, stick with a monolithic approach. It's proven, reliable, maintainable, and scalable.

Monoliths remain pretty good. [Kiehl 73]
The vast majority of web applications should start life as a Majestic Monolith: A single codebase that does everything the application needs to do. This is in contrast to a constellation of services, whether micro or macro, that tries to carve up the application into little islands each doing a piece of the overall work. [DHH 76]
You must be this tall to use microservices.
[using a microservices architectural style] Developers enjoy working with smaller units and have expectations of better modularity than with monoliths. But as with any architectural decision there are trade-offs. In particular with microservices there are serious consequences for operations, who now have to handle an ecosystem of small services rather than a single, well-defined monolith. Consequently if you don't have certain baseline competencies, you shouldn't consider using the microservice style. [Fowler 77]
The Prime Video team at Amazon has published a rather remarkable case study on their decision to dump their serverless, microservices architecture and replace it with a monolith instead. This move saved them a staggering 90%(!!) on operating costs, and simplified the system too. What a win! [DHH 78]

Beyond the Monolith #

If a part of your application becomes so complex that it starts weighing down the entire system, that’s not a reason to switch to a microservices architecture. Instead, extract that component and manage it separately—without disrupting the rest of the monolith.

This is what David Heinemeier Hansson calls “the citadel”:

That next step is The Citadel, which keeps the Majestic Monolith at the center, but supports it with a set of Outposts, each extracting a small subset of application responsibilities. The Outposts are there to allow the Majestic Monolith to offload a particular slice of divergent behavior, either for organizational or performance or implementation reasons.

We didn’t try to carve the entire app up into little services, each written in a different language. No, we just extracted a single Outpost. That’s a Citadel setup.

As more and more people come to realize that the chase for microservices ended in a blind alley, the pendulum is going to swing back. The Majestic Monolith is here waiting for microservice refugees. And The Citadel is there to give peace of mind that the pattern will stretch, if they ever do hit that jackpot of becoming a mega-scale app. [DHH 76]

APIs: Challenge the habits #

There are many ways to build APIs. Yet, once again, successive trends have led to unnecessarily complex standardization. It's important to step back and ask the right questions to design APIs that are easier to develop and maintain.

The Simplicity of Webhooks When Possible #

Whether it's a webhook or an API, both ultimately involve making an HTTPS request to a URL, sending data, and receiving a response. The difference between the two is more philosophical than technical.

Webhooks are generally considered a subset of APIs (or even "pseudo-APIs") that are exclusively event-driven.

Originally, the term referred to a URL provided to a remote system, which would call it to notify that an event had occurred. This is more efficient than continuously polling an API to check if the event has happened.

Over time, the definition has expanded to include simple URLs that can receive data via POST (or more rarely, GET) and return data—typically in JSON format.
A webhook is essentially a single URL that contains everything needed for direct use—no complex authentication mechanisms, minimal configuration.

A common example is enterprise messaging platforms, which provide a full API for accessing all features, but also offer webhooks for easily sending messages to chat rooms.
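
Posting a message through such a webhook usually boils down to a single HTTP request. A minimal sketch using PHP's curl extension, with a hypothetical webhook URL:

// Hypothetical webhook URL provided by the messaging platform
$webhookUrl = 'https://chat.example.com/hooks/AbCdEf123456';

// The whole integration is one POST request with a JSON payload
$payload = json_encode(['text' => 'Deployment finished successfully.']);

$ch = curl_init($webhookUrl);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/json']);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);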

Simply put, don't build an API when a webhook is enough. It only adds unnecessary complexity—both for your development and for the clients integrating with your system.

Many services offer webhooks, either as standalone solutions or alongside a REST API: Slack, Discord, Twilio, Stripe, PayPal, GitHub, GitLab, IFTTT, GoCardless, and more.

Handle Simple Data Like a Form Processor #

When processing incoming data—whether from a webhook or an API—we typically serialize it into a standard format, usually JSON. And in most cases, this works well.

However, when the incoming data is simple and doesn’t require complex structures, it’s even easier to handle it as if it were submitted through an HTML form.

Several services accept data via GET or POST parameters for part or all of their APIs, including Twilio, Vonage, OpenWeatherMap, Google Maps Static, and Pingdom.
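
Concretely, the receiving script can then read the parameters exactly as PHP parsed them, instead of decoding a JSON body by hand. A brief sketch, with hypothetical parameter names:

// JSON body: must be read and decoded by hand
$data  = json_decode(file_get_contents('php://input'), true);
$email = $data['email'] ?? null;

// Form-style parameters: PHP has already parsed them
$email  = $_POST['email'] ?? null;
$status = $_POST['status'] ?? 'active';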

Remote Procedure Calls Over REST #

In the history of distributed computing, REST architecture is a more recent development than the RPC approach.

RPC (Remote Procedure Call) involves invoking methods on remote objects, passing parameters, and receiving a result in return.
The REST philosophy, on the other hand, is based on the concept of resources, which are manipulated through a limited set of operations mapped to HTTP methods: GET (read), POST (create), PUT (replace), PATCH (modify), and DELETE (remove). In other words, basic CRUD operations.

REST works exceptionally well in many cases. The issue, once again, is that it has become such an entrenched best practice that some developers believe deviating from REST means you don’t know how to build a “real API.”

Let's take a concrete example: imagine a chat system.
If you want to retrieve a list of chat rooms, REST is a great fit. You would make a request to the URL:
GET /api/channels

To retrieve messages from a channel (ID 123):
GET /api/channels/123/messages

Now, to subscribe a user (ID 789) to the channel, you might expect to make the following call:
POST /api/channels/123/users/789

At first glance, this POST request seems to do nothing more than link the user to the channel. You wouldn’t necessarily expect it to update statuses or trigger email notifications.

However, in a client application, you’re more likely to write something like this:

$channelManager->subscribeUser($channelId, $userId);

This line of code clearly suggests that multiple actions might be triggered. It’s not about manipulating a resource, it’s about performing an action. And that makes all the difference.

So we wouldn’t think twice if the URL looked like this:
/api/channels/subscribeUser/123/789

In general, there’s no reason why API calls should work fundamentally differently from the ones we make inside our code. Whether we’re instructing a local object to perform an action or a remote one, the approach should remain the same.
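
On the server side, exposing such calls does not require heavy machinery either. A minimal sketch of a dispatcher, assuming the web server provides the path after /api in PATH_INFO and reusing the hypothetical ChannelManager class behind the $channelManager object above:

// Expected path: /api/channels/subscribeUser/123/789
$segments = explode('/', trim($_SERVER['PATH_INFO'] ?? '', '/'));
$service  = $segments[0] ?? '';
$method   = $segments[1] ?? '';
$params   = array_slice($segments, 2);

// Whitelist of the objects exposed through the API
$services = ['channels' => new ChannelManager()];

if (!isset($services[$service]) || !method_exists($services[$service], $method)) {
    http_response_code(404);
    exit;
}

header('Content-Type: application/json');
echo json_encode($services[$service]->$method(...$params));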

REST limits the expressiveness of code. Would you design software with CRUD as your only vocabulary?

Today, more and more APIs are moving away from the REST model in favor of remote object calls. A few examples: Telegram Bot API, Slack API, MetaWeblog API, Google Cloud gRPC API, Bitcoin API...

Authentication: Say Yes to HTTP Basic, No to JWT #

To grant access to an API, user authentication is required. Several methods exist, but JWT tokens have become the popular choice.

JWTs are cryptographic tokens, meaning they contain information that cannot be altered (or the token becomes invalid). The idea sounds appealing: the client sends a username and password (or public and private keys) and receives a token containing their access rights. This token is then included in every API request, allowing the server to trust the embedded data without rechecking user permissions.

But in real-world scenarios, it's not that simple. If a user's access rights are revoked, you don’t want them to continue using the API. However, as long as the token remains valid, they will still be granted access.
A common workaround is to shorten the token's lifespan, forcing frequent regeneration. But this introduces unnecessary overhead, leading to three separate requests for even the simplest API interaction:

  1. The client attempts to use the API, providing the cached token.
    → The API responds that the token has expired.
  2. The client requests a new token by sending its credentials.
    → The API responds with a fresh token.
  3. The client retries the API request with the new token.
    → The API processes the request and returns the result.

To avoid this unnecessary complexity, API authentication can be handled using HTTP Basic Auth. This is a simple, universally supported mechanism where credentials are included in every request.

HTTP Basic Auth has a bad reputation, often considered less secure because credentials are sent with every request. This was indeed an issue before widespread SSL/TLS encryption. However, today, with free SSL certificates from Let’s Encrypt, there’s no excuse for not securing traffic. Since credentials are transmitted over an encrypted connection, they remain protected from prying eyes.

More importantly, this approach drastically simplifies the authentication flow:

  1. The client connects to the API, providing its credentials.
    → The API processes the request and returns the result.
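
On the server side, PHP exposes the credentials sent by the client directly; a brief sketch, where checkCredentials() is a hypothetical function verifying them against your user base:

// Credentials are sent with every request in the Authorization header
$user     = $_SERVER['PHP_AUTH_USER'] ?? null;
$password = $_SERVER['PHP_AUTH_PW'] ?? null;

if ($user === null || !checkCredentials($user, $password)) {
    header('WWW-Authenticate: Basic realm="API"');
    http_response_code(401);
    exit;
}

// Authenticated: process the request and return the result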

Many major services rely on HTTP Basic Auth, either as their primary method or alongside token-based authentication: Azure API, Twilio, Stripe, GitHub, IBM MQ, IBM App Connect, and more.

Dual-Purpose Endpoints #

Also known as "content negotiation" or "API mode switch", this technique helps avoid developing APIs from scratch when it's unnecessary. It can be particularly useful for supplying data to a mobile application when a fully functional website already exists.

Some or all of the site’s pages can return a JSON response instead of the usual HTML. Several methods can be used to distinguish between the two, for example an Accept: application/json request header, a query-string parameter such as ?format=json, or a dedicated URL prefix or extension.

Depending on how the site is structured, implementing this can be straightforward—often requiring only a plugin, middleware, or hook that switches from the usual HTML template engine to a JSON view. All data normally passed to the template is instead serialized as JSON.
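
A minimal sketch of such a switch, where getArticles() and render() stand in for the site's existing data-access and template code:

// Hypothetical data-access call used by the regular website
$data = ['title' => 'Latest articles', 'articles' => getArticles()];

// Switch to JSON when the client asks for it
$wantsJson = (strpos($_SERVER['HTTP_ACCEPT'] ?? '', 'application/json') !== false)
    || (($_GET['format'] ?? '') === 'json');

if ($wantsJson) {
    header('Content-Type: application/json');
    echo json_encode($data);
} else {
    // Hypothetical template rendering used for the HTML version
    echo render('articles/list.tpl', $data);
}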

API calls requiring authentication naturally inherit the website's user rights management system. Authentication can be handled in two ways: by reusing the website's session cookie when the client is able to store it, or by sending credentials with each request, for instance via HTTP Basic Auth as described earlier.

This technique is more widely used than it may seem, with well-known examples including: Jekyll/GitHub Pages, Discourse, MediaWiki/Wikipedia, IBM UrbanCode Release, and others.

Security: The Non-Negotiable Basics #

Web application security should never be taken lightly. Security flaws generally fall into two categories: weaknesses in the underlying infrastructure (server software, configuration, protocols) and vulnerabilities introduced in the application code itself.

Numerous resources focus on web security, including the OWASP Foundation (Open Worldwide Application Security Project), which identifies the ten most critical security risks affecting web applications [OWASP 79]. Reviewing these guidelines is highly recommended.

Basic Principles #

The golden rule when developing a system that interacts with other systems and external users is simple: never trust external input. Everything entering the system must be validated and sanitized before integration. Likewise, anything leaving must be processed to prevent malicious use.

Nearly nine in 10 attacks are related to input validation failures. [Vijayan 80]
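
In PHP, the built-in filter functions already cover many common validation cases. A small sketch, with hypothetical parameter names:

// Validate an e-mail address and an integer identifier before any use
$email = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
$id    = filter_input(INPUT_GET, 'id', FILTER_VALIDATE_INT);

if ($email === false || $email === null || $id === false || $id === null) {
    http_response_code(400);
    exit('Invalid input');
}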

HTTP Server Configuration #

Securing communication between the server and browsers is crucial. Encryption certificates prevent external observers from intercepting data flows. While these certificates used to be expensive, the Let's Encrypt project now provides them for free.

Over the years, multiple versions of SSL and TLS protocols have been released. It's recommended to disable outdated versions (SSLv3, TLSv1.0, TLSv1.1) as they are no longer secure. Unless you specifically need to support legacy browsers, it's even advisable to disable TLSv1.2 and rely solely on TLSv1.3. Several resources explain how to configure this properly [Better Stack 81, Nek 82].

Additionally, several security-focused HTTP headers should be properly configured to reinforce your site's protection: Content-Security-Policy, X-Frame-Options, X-XSS-Protection, X-Content-Type-Options, Referrer-Policy, Permissions-Policy, Strict-Transport-Security.
You can find detailed guides on how to set them up [Starr 83, OWASP 84].
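
These headers are usually set in the web server configuration, but they can also be sent by the application itself through PHP's header() function; a brief sketch:

// A few of the recommended headers, sent from the application
header('X-Frame-Options: DENY');
header('X-Content-Type-Options: nosniff');
header('Referrer-Policy: strict-origin-when-cross-origin');
header('Strict-Transport-Security: max-age=31536000; includeSubDomains');
header("Content-Security-Policy: default-src 'self'");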

SQL Injection #

SQL injection remains the most common attack on web services.

SQL Injection Attacks Represent Two-Thirds of All Web App Attacks [Vijayan 80]

The principle of SQL injection is simple: a malicious user manipulates input data—for example, via a form submission—to alter the behavior of a database query.

For example, in an authentication form, it would be possible to enter the value "admin'; --", without providing a password. A vulnerable system would execute the following SQL query:

SELECT * FROM users WHERE login = 'admin'; -- AND password = '';

In SQL, two dashes (--) indicate the start of a comment, which means everything after them is ignored. As a result, this query would return data for the admin user—granting unauthorized access.

The solution is to escape all input data, either by using a function like mysqli_real_escape_string() or PDO::quote(), or by using prepared queries. The final query would then be:

SELECT * FROM users WHERE login = 'admin\'; --' AND password = '';

The query will return no results.

It's worth noting that prepared queries are often presented as the only best practice in this field, as they allow you to easily escape parameters. However, they were created with the idea that the same query could be used several times with different parameters.
Some complex, dynamically-generated queries can't be satisfied with prepared queries, so it's important to be familiar with the various techniques for escaping data.
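
As an illustration, here are both techniques with PDO, assuming $pdo is an already open connection:

// Prepared query: parameters are escaped automatically
$stmt = $pdo->prepare('SELECT * FROM users WHERE login = :login');
$stmt->execute([':login' => $login]);
$user = $stmt->fetch();

// Manual escaping, useful when the query is built dynamically
$sql  = 'SELECT * FROM users WHERE login = ' . $pdo->quote($login);
$user = $pdo->query($sql)->fetch();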

Cross-site scripting (XSS) #

XSS is a security vulnerability that occurs when user-supplied data is inserted into a web page without proper sanitization, allowing it to be executed in another user's browser.

For example, consider a website that allows visitors to post comments. If a comment is stored in a database and displayed without filtering, a malicious user could insert a "<script>" tag containing JavaScript. This script would then execute in the browser of anyone viewing the page.
This opens the door to the recovery of personal information, the retrieval of session cookies (and therefore identity theft), and the execution of malicious programs (cryptocurrency mining, DDoS attacks).

The appropriate solution depends on the type of data being processed.

If only plain text is needed, it should be properly escaped at the display stage to ensure that "<script>" tags appear as text ("&lt;script&gt;") rather than being interpreted by the browser.
For this, the htmlspecialchars() function is useful, unless you're using a template engine that escapes variables by default or on demand.

If HTML input is allowed (e.g., from a WYSIWYG editor), sanitize it before storage to ensure only permitted tags are allowed. Libraries such as HTMLPurifier help strip out unwanted scripts while preserving safe HTML formatting.
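
A short sketch of both cases, assuming $comment holds user-submitted content:

// Plain text: escape at display time so tags are shown, not interpreted
echo htmlspecialchars($comment, ENT_QUOTES, 'UTF-8');

// Allowed HTML: sanitize before storage with HTMLPurifier
$config    = HTMLPurifier_Config::createDefault();
$purifier  = new HTMLPurifier($config);
$cleanHtml = $purifier->purify($comment);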

OWASP provides a Cross-Site Scripting Prevention Cheat Sheet, which covers less common XSS vulnerabilities and how to mitigate them.

Server-side request forgery (SSRF) #

SSRF attacks exploit vulnerabilities that allow an attacker to trick a server into making unintended requests. In most cases, the flaw occurs when unvalidated input is used blindly by the server.

For example, a site may receive a URL as a parameter, then connect to it in order to retrieve information. An attacker could provide a URL pointing to a local file, or to an address internal to the network, thereby exposing confidential data.

In another example, a site may accept, as a parameter, the path of a local file to include. If that parameter points to an external URL instead, the system will fetch its content and interpret it, allowing malicious code to be executed on the server.

Once again, the solution is to never trust external data, and always check it before using it. And never include a file whose path is provided as a parameter (GET or POST).
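
A sketch of such a check before fetching a user-supplied URL, with a hypothetical whitelist of allowed hosts:

// Hypothetical whitelist of hosts the server is allowed to contact
$allowedHosts = ['api.partner-a.example', 'feeds.partner-b.example'];

$url  = $_POST['url'] ?? '';
$host = parse_url($url, PHP_URL_HOST);

if (!filter_var($url, FILTER_VALIDATE_URL)
    || parse_url($url, PHP_URL_SCHEME) !== 'https'
    || !in_array($host, $allowedHosts, true)) {
    http_response_code(400);
    exit('URL not allowed');
}

$content = file_get_contents($url);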

OWASP provides a Server-Side Request Forgery Prevention Cheat Sheet, detailing common SSRF vulnerabilities and how to mitigate them.

Cross-site request forgery (CSRF) #

Unlike other attacks that rely on injecting corrupted data into a system, CSRF exploits user actions by tricking them into performing unintended operations without their awareness.

Consider an admin panel that requires authentication. Once logged in, a session cookie is stored in the user's browser. The panel allows deleting articles via links like: /article/delete/[articleId], which then redirects to the article list.
Now imagine a phishing email or malicious ad containing a link that redirects to: https://admin.mysite.com/article/delete/1234
If the user clicks the link while logged in, the article gets deleted automatically, without them realizing it—leaving them confused upon arriving at the article list.

The most well-known defense is token-based validation: A token is included in the URL, and the server checks it before executing any sensitive request. This solution is so widespread that many people seem to think it's the only possible solution.
Beware, however, as a poor implementation can generate a false sense of security. And a shaky, session-based implementation will prevent the same site from being used in several tabs at the same time.

There are, however, best practices that are much simpler to implement and just as effective.

The first is to accept only POST requests for sensitive actions: when following a redirect (or clicking a link in an e-mail), the browser can only make a GET request.

The second best practice is to use the SameSite parameter when creating the authentication cookie. If this parameter is set to “Lax”, the cookie will not be sent in the event of a POST request from another site. The “Strict” value is even more restrictive: the cookie won't be sent for GET requests either.
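
Since PHP 7.3, the attribute can be set directly when creating the cookie; a brief sketch, where $sessionId is assumed to come from the authentication code:

// Authentication cookie restricted to same-site requests (PHP 7.3+)
setcookie('session_id', $sessionId, [
    'expires'  => time() + 3600,
    'path'     => '/',
    'secure'   => true,   // sent only over HTTPS
    'httponly' => true,   // not readable from JavaScript
    'samesite' => 'Lax',  // or 'Strict' for the stricter behavior
]);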

The third, if needed, is to check the content of the REFERER header sent by the browser. During a redirect, the REFERER will indicate a different domain from the current one.

When used together, these best practices are highly effective.

Here again, OWASP offers the Cross-Site Request Forgery Prevention Cheat Sheet, for further information on CSRF vulnerabilities.

Resources #

References #

  1. The PHP Documentation Group. “History of PHP”
  2. Rasmus Lerdorf (Creator of PHP). “25 years of PHP” (2019)
  3. Rasmus Lerdorf (Creator of PHP). “[PHP-DEV] Re: Generators in PHP” (2012)
  4. Kevin Yank (Principal Architect at Culture Amp). “Interview – PHP’s Creator, Rasmus Lerdorf” (2002)
  5. Rasmus Lerdorf (Creator of PHP). @rasmus tweet (2010)
  6. Avery Pennarun (CEO of Tailscale). “The New Internet” (2024)
  7. Scott Berkun (Author of The Myths of Innovation). “Two kinds of people: complexifiers and simplifiers” (2006)
  8. Wikipedia article. “Wirth's law” (2024)
  9. Marek Kirejczyk (Founder of vlayer Labs). “Hype Driven Development” (2016)
  10. Anouk Goossens (Consultant at The Learning Hub). “Why best practices aren’t the holy grail” (2023)
  11. Austin Knight (Design Manager at Square). “The Road to Mediocrity Is Paved with Best Practices” (2015)
  12. Matt Stancliff (Contributor to Redis). “Panic! at the Job Market” (2024)
  13. Hacker News discussion. “Why bad scientific code beats code following ‘best practices’” (2016)
  14. Jeff Atwood (Co-founder of StackOverflow and Discourse). “Why Objects Suck” (2004)
  15. Wikipedia article. “Object-oriented programming”
  16. Brian Will (Senior software engineer at Unity). “How to program without OOP” (2016)
  17. Eric S. Raymond (Co-founder of the Open Source Initiative). “The Art of Unix Programming” (2003)
  18. Jeff Atwood (Co-founder of StackOverflow and Discourse). “Your Code: OOP or POO?” (2007)
  19. Jeff Atwood (Co-founder of StackOverflow and Discourse). “When Object-Oriented Rendering is Too Much Code” (2006)
  20. Rich Hickey (Creator of the Clojure programming language). Software Engineering Radio podcast #158 (2010)
  21. Asaf Shelly (AI & Cybersecurity expert). “Flaws of Object Oriented Modeling” (2008)
  22. Brian Will (Senior software engineer at Unity). “Object-Oriented Programming is Embarrassing: 4 Short Examples” (video, 2016)
  23. Elliot Suzdalnitski (CEO of Empire School of Business). “Object-Oriented Programming — The Trillion Dollar Disaster” (2019)
  24. Joe Armstrong (Co-creator of the Erlang programming language). “Why OO Sucks” (2011)
  25. Rob Pike (Co-creator of the Plan 9 operating system, UTF-8 encoding and Go programming language). Post on Google+ (2012)
  26. Gina Peter Banyard (PHP Core developer and Documentation maintainer). “PHP RFC: Unify PHP's typing modes (aka remove strict_types declare)” (2021)
  27. Cheatmaster30 (Journalist at Gaming Reinvented). “Putting devs before users: how frameworks destroyed web performance” (2020)
  28. Mark Zeman (Founder of Speedcurve). “Best Practices for Optimizing JavaScript” (2024)
  29. Article on Easy Laptop Finder’s blog. “The relentless pursuit of cutting-edge JavaScript frameworks inadvertently contributed to a less accessible web” (2023)
  30. Eduardo Rodriguez (full-stack software engineer at they consulting). “Dependency management fatigue, or why I forever ditched React for Go+HTMX+Templ” (2024)
  31. David Haney (Founder of CodeSession). “NPM & left-pad: Have We Forgotten How To Program?” (2016)
  32. Adrian Holovaty (Co-creator of the Django framework). “dotJS - A framework author's case against frameworks” (video, 2017)
  33. Alex Russell (Partner Product Manager at Microsoft). “If Not React, Then What?” (2024)
  34. UK Government. “Building a robust frontend using progressive enhancement” (2024)
  35. Andy Bell (Founder of Set Studio and Piccalilli). “It’s about time I tried to explain what progressive enhancement actually is” (2024)
  36. Kelly Sutton (Co-founder of Scholarly Software). “Moving on from React” (2024)
  37. Kelly Sutton (Co-founder of Scholarly Software). “Moving on from React, a Year Later” (2025)
  38. David Heinemeier Hansson (CTO of 37signals, creator of Ruby On Rails). “Modern web apps without JavaScript bundling or transpiling” (2021)
  39. David Heinemeier Hansson (CTO of 37signals, creator of Ruby On Rails). “You can't get faster than No Build” (2023)
  40. Michael Stonebraker (Co-creator of the PostgreSQL database). “Comparison of JOINS: MongoDB vs. PostgreSQL” (2020)
  41. Mridul Verma (Senior Staff Software Engineer at Sumo Logic). “Database Performance: MySQL vs MongoDB” (2024)
  42. Laurie Voss (Co-founder of NPM). “In defence of SQL” (2011)
  43. Laurie Voss (Co-founder of NPM). “ORM is an anti-pattern” (2011)
  44. Yegor Bugayenko (Lab Director at Huawei). “ORM Is an Offensive Anti-Pattern” (2014)
  45. Eli Bendersky (Researcher at Google). “To ORM or not to ORM” (2019)
  46. Jeff Atwood (Co-founder of StackOverflow and Discourse). “Object-Relational Mapping is the Vietnam of Computer Science” (2006)
  47. Chris Maffey (Founder of PHP Lab). “Why SQL is still really important” (2020)
  48. François Zaninotto (Co-creator of the Propel ORM). Tweet from @francoisz (2019)
  49. Alex Martelli (Senior Staff Engineer at Google and Fellow of the Python Software Foundation). Answer on Stack Overflow (2010)
  50. Mattia Righetti (Systems Engineer at Cloudflare). “You Probably Don't Need Query Builders” (2025)
  51. Anh-Tho Chuong (CEO at Lago). “Is ORM still an 'anti pattern'?” (2023)
  52. Mattia Righetti (Systems Engineer at Cloudflare). “Can't Escape Good Old SQL” (2025)
  53. Mike Acton (Director of Engineering at Hypnos Entertainment). “CppCon 2014: Data-Oriented Design and C++” (video, 2014)
  54. Noel Llopis (Independent game designer). “Data-Oriented Design (Or Why You Might Be Shooting Yourself in The Foot With OOP)” (2009)
  55. Tan Dang (Writer at Orient Software). “Revolutionize Your Code: The Magic of Data-oriented Design (DOD) Programming” (2023)
  56. Stoyan Nikolov (Principal AI Software Engineer at Google). “CppCon 2018: OOP Is Dead, Long Live Data-oriented Design” (video, 2018)
  57. Wikipedia. “Web framework”
  58. Mozilla contributors. “Server-side web frameworks” (2024)
  59. .NET Expert blog. “The Problem with Frameworks in Software Development” (2024)
  60. Kirill Bobrov (Senior Data Engineer at Spotify). “The Frameworks Dilemma” (2024)
  61. TechAffinity blog. “The Benefits and Limitations of Software Development Frameworks” (2023)
  62. Sushrut Mishra (Technical Writer at FuelEd). “Why you shouldn’t use Libraries/Frameworks for everything” (2023)
  63. Evaldo Junior (Senior developer). “Are micro-frameworks suitable only for small projects?” (2015)
  64. Matthew Weier O’Phinney (Senior Product Manager at Zend). “On Microframeworks” (2012)
  65. Chris Dillon (Senior Software Engineer at Mitre). “The Database Ruins All Good Ideas” (2021)
  66. Chris Dillon (Senior Software Engineer at Mitre). “Microframeworks Are Too Small” (2023)
  67. Jim Coplien (writer, lecturer, and researcher). “Why Most Unit Testing is Waste” (PDF)
  68. Modus Create blog. “An Overview of Unit, Integration, and E2E Testing” (2023)
  69. Twilio blog. “Unit, Integration, and End-to-End Testing: What’s the Difference?” (2022)
  70. Andrew Trenk (Software Engineer at Google). “Testing on the Toilet: Don’t Overuse Mocks” (2013)
  71. David Heinemeier Hansson (CTO of 37signals, creator of Ruby On Rails). “TDD is dead. Long live testing” (2014)
  72. David Heinemeier Hansson (CTO of 37signals, creator of Ruby On Rails). “Testing like the TSA” (2012)
  73. Chris Kiehl (Senior Software Engineer at Amazon). “Software development topics I've changed my mind on after 10 years in the industry” (2025)
  74. David Heinemeier Hansson (CTO of 37signals, creator of Ruby On Rails). “The Majestic Monolith” (2016)
  75. Wikipedia article. “Fallacies of distributed computing”
  76. David Heinemeier Hansson (CTO of 37signals, creator of Ruby On Rails). “The Majestic Monolith can become The Citadel” (2020)
  77. Martin Fowler (author and international public speaker on software development and agile methodologies). “Microservice Prerequisites” (2014)
  78. David Heinemeier Hansson (CTO of 37signals, creator of Ruby On Rails). “Even Amazon can't make sense of serverless or microservices” (2023)
  79. OWASP. “OWASP Top Ten”
  80. Jai Vijayan (Award-winning journalist for Computerworld). “SQL Injection Attacks Represent Two-Thirds of All Web App Attacks” (2019)
  81. Better Stack. “How can I disable TLS 1.0 and 1.1 in apache?” (2023)
  82. Dimitri Nek. “How to Enable TLS 1.3 in Apache and Nginx on Ubuntu and CentOS”
  83. Jeff Starr (Developer, designer, author, and publisher). “Seven Important Security Headers for Your Website” (2024)
  84. OWASP. “OWASP Secure Headers Project”