How I Want to Use LLMs in 2026
I would like to thank Xavier Van de Woestyne for his feedback and careful review.
Agentic tools are here, and they are here to stay. I don’t think it is an overstatement to say that LLMs are completely reshaping our day-to-day lives. Even if mass adoption has yet to happen (we are seeing more and more public statements from key figures of our industry, like Satya Nadella or Jensen Huang, trying to shed a positive light on AI and pushing for more people to embrace it, probably because they don’t see the increase in active users they were hoping for), the consequences are already here. In the software engineering industry, I can already see how these tools come with expectations, whether we want to use them or not.
In 2025, I stayed away from the agentic hype. This month, I was made acutely aware of how transformative these tools can be. I don’t think there is a way back from here.
As we jump into 2026 headfirst, I consequently find myself at a crossroads. I will integrate LLMs in my workflow so that I can get the most out of what they can offer, but I want to do so consciously. Hence this article, whose tone and underlying motivation are different from my other pieces. I want to set a bar, to form a contract of sorts between me and my future self.
I want to be transparent about who—or what—produced the work I am exposing to others. I am an engineer; I can expect to generate tons of code and technical documentation using (meticulously prompted) agents. I am also publishing content online, like on this very website. The last thing I want is to trick people into thinking that I wrote something that was actually generated by a tool. Or, even worse, to have them end up convinced that something I genuinely wrote “the old way” was generated.
This does not mean generated content is without value. Nor do I think there is a clear, objective line to draw between what are actually two ends of a spectrum. After all, I’ve been using ChatGPT to polish my articles for a year now, and I haven’t advertised it in the past. Still, agentic tools have become good enough: I will be in a position to describe complex tasks, fully delegate their implementation, and be confident enough to publish the result. I believe it is fair for my fellow humans to be aware of that fact when they read or review the result. Only then will they be able to calibrate their own expectations in light of this information.
As a concrete example, I have started to set up dedicated accounts for Claude Code. I am trying to come up with a reliable way to let it take over the execution of well-scoped tasks, up to responding to reviewers’ feedback on its own. That’s not an end I want to pursue unless I can be transparent about it.
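For the curious, here is roughly what such a delegation could look like. This is a minimal sketch, not a recipe: the task description is made up, `CLAUDE_BOT_API_KEY` is a hypothetical variable holding the dedicated account’s key, and the `-p` and `--allowedTools` flags reflect the Claude Code CLI as I understand it today, so check the documentation for your version.

```sh
# Illustrative only: hand a well-scoped task to Claude Code from a
# dedicated account, with an explicit allow-list of tools it may use.
# CLAUDE_BOT_API_KEY is a hypothetical variable for that account's key.
export ANTHROPIC_API_KEY="$CLAUDE_BOT_API_KEY"

# -p runs a single non-interactive turn and prints the result;
# --allowedTools restricts what the agent can do without asking.
claude -p "Address the reviewers' comments on my open pull request, \
keeping the changes minimal and the tests green." \
  --allowedTools "Read" "Edit" "Bash(git diff:*)" "Bash(git commit:*)"
```

The allow-list is the part that matters to me: it is what turns “let the agent loose” into a scoped delegation that someone (me, or a reviewer) can still reason about.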
I want to be deliberate about when I use or don’t use LLMs. And I have a surprising number of reasons why.
On a personal level, there are skills I don’t want to lose, a craft I still wish to improve. I’m fine with never having to write a bug report directly again, but do I want to forgo authorship of my blog’s articles, for instance? Clearly not.
Besides, we are still grimly on track when it comes to climate change. Even after accepting that individual behaviors have far less impact than we’d like, is it really reasonable to have hours of back-and-forth with an agent every day from now on? A lot has been said and written about the impact of LLMs in that regard. I am under the impression that we are now reading stories tragically similar to what was published about Bitcoin a few years back. (I attended a risk management training once, and something the instructor said stuck with me: when something becomes safer, we humans tend to adapt our behavior to take more risks. Are we doing the same thing with climate change? When we manage to reduce our environmental impact, do we collectively interpret that news as a blank check to find new ways to consume more energy?)
I think one answer to this is to refuse to make the LLM the default, obvious choice. Using an agent should have weight to it. There will be times when an agent will be an enabler, achieving something outside of my immediate reach (because it would take me too much time, because it would be extremely tedious and error-prone, or for any number of other valid reasons). And there will be times when I will want to use it to write a trivial patch that I could come up with myself in a matter of minutes. I want to cultivate the discipline to distinguish the former from the latter, so that I avoid falling into wasteful habits.
I want to be respectful of the people who will be confronted with my use of LLMs. The matter is too complicated to approach any other way. What is acceptable for some feels like an attack on others.
Yes, agents can be powerful accelerators. No, adopting them is neither easy nor the obvious, right thing to do. Too many people are being hurt by the behaviors agents’ capabilities enable: open-source maintainers closing their projects to external contributions, junior engineers struggling like never before to find jobs, artists watching in real time as their art is regurgitated by models trained on their portfolios. The list goes on.
I cannot commit to never using generated art, or to closing my eyes to what LLMs can bring me. But I want to be aware of, and to care about, the consequences of my choices, as well as the broader context behind them. And sometimes, that needs to mean renouncing the convenience a technology can bring, even if just for a moment.
Only time will tell how I end up using LLMs, and whether the principles I claim today as my own in this article will stand the test of time. I am rather curious to reread this piece in a year—I hope it will prompt (ah!) me to write a retrospective.