Twenty-eight days, sometimes twenty-nine. That’s February: a short month.
Roughly four standard weeks. About twenty workdays. On a grand scale, not much progress happens in these 4 × 5 days. And yet, as always, quite a lot gets done from day to day. A few experiments run. A few ideas get rejected. A few discussions move things forward. A few code changes turn out to matter more than expected.
Looking back on this past month, I found three lessons that stood out to me from the world of ML research and engineering:
- Exchanges with others are important,
- documentation is often underestimated until it is too late,
- and MLOps only makes sense if it actually fits the environment in which it is supposed to be used.
1. Exchanges with others
If you read ML papers regularly, you know the pattern: in citations, usually only the first author’s name is shown. The other names appear only in the references section. Does that mean the first author did it all alone?
Rarely. Only in the special case of a single-author paper.
Most research lives from exchange. It lives from discussions with co-authors, from comments by colleagues, from questions that force you to sharpen your thinking, and from adjacent disciplines bringing in ideas that your own field would not have produced on its own. Good research often feels a bit like stepping into other people’s territory and learning just enough of their language to bring something useful back.
But this is not just true for academic papers. It is equally true for everyday engineering work.
A brief exchange with a colleague can save you hours of wandering down the wrong path. A five-minute conversation at the coffee machine can give you the one missing piece that makes your setup click. Even informal talk matters. Not every useful discussion starts in a scheduled meeting with a polished slide deck. Sometimes it starts with “by the way, I noticed something odd in the logs.”
This month reminded me of that again. A couple of small exchanges clarified things much faster than solitary pondering would have. Nothing dramatic, nothing worthy of a keynote — just the normal, quiet value of talking to other people who think about similar things.
2. Documentation
Have you ever made some changes to your code?
Sure you have.
And could you still remember the next day why you made those changes? Hopefully yes — it is only one day, after all. But what about a week later? One month later? Half a year later?
That is where things become less obvious.
Most changes to a codebase are small and benign. Not every tiny bug fix deserves a long explanation. If you rename a variable, fix a typo, or correct a harmless logging issue, that usually does not need special documentation. The same often goes for bug fixes that do not alter any relevant conclusions from prior results.
But some changes are different.
Some changes alter assumptions. Some change how data is preprocessed. Some affect training characteristics, evaluation logic, or even the meaning of the outputs. Those changes are worth noting down, because they are exactly the ones you will have forgotten when you return to the project later.
This month I was reminded, again, that documentation is not mainly for some abstract future collaborator. It is for your future self. Today, while you are deep in the code, everything feels obvious. In three months, it won’t. Then you will look at a line, or a config, or a mysterious data transformation, and ask yourself: “Why on earth did I do it this way?”
That is an easily avoidable question.
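In code, this kind of documentation can be as small as a comment that records the *why* of a non-obvious change, not just the *what*. The snippet below is a purely illustrative sketch (the function name, the scaling switch, and the note are all hypothetical), showing what such a note might look like on a preprocessing step whose change affects downstream results:

```python
# Hypothetical example: the docstring note records *why* a change was
# made and what it affects, so your future self does not have to guess.

def normalize_features(values):
    """Scale a list of readings to the range [0, 1].

    NOTE (February): switched from z-score normalization to min-max
    scaling because downstream thresholds assume bounded inputs.
    Results from runs before this change are not directly comparable.
    """
    lo, hi = min(values), max(values)
    if hi == lo:
        # Constant input: avoid division by zero, map everything to 0.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

print(normalize_features([2.0, 4.0, 6.0]))  # → [0.0, 0.5, 1.0]
```

The renamed variable or fixed typo next to this function needs no such note; the change in scaling behavior does, because it silently alters what every later experiment means.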
3. MLOps put to practice
The goal of most ML research is, in one form or another, to produce trained models.
But I would bet that only a small minority of these models are ever actually used.
Many models remain where they were born: in notebooks, on research servers, in internal presentations, or in papers. To move beyond that and put a model into productive use, you need more than the model itself. You need infrastructure, processes, monitoring, reproducibility, deployment strategies — in other words, tools and principles from MLOps.
If you read job advertisements in that direction, MLOps often appears closely tied to cloud providers: AWS, GCP, Azure, cloud-native pipelines, managed services, distributed deployment environments. And yes, those tools matter. They are important, and in many settings they are exactly the right choice.
But it is worth asking a simple question: is the target environment actually a cloud environment?
Take automated quality control in an industrial setting. Suppose a model is used directly in production, close to the machines that create the product. Do we really think all relevant data is simply streamed from the company into some cloud? Especially if that data reflects the company’s core processes and thus part of its competitive edge? I doubt that many companies are fully comfortable exposing production-critical environments that way.
This is where a more grounded view of MLOps becomes important.
MLOps is useful, yes. But it is not a fixed set of specific tools; it is a collection of principles for building and operating models reliably under changing conditions. And it has to fit the environment in which it is meant to be used, not the other way round. The goal is not to force every deployment problem into the mold of whatever tooling is fashionable. The goal is to make models useful under real constraints, creating the tools the problem at hand actually needs.
Sometimes that means cloud pipelines. Sometimes it means on-premise deployment. Sometimes it means restricted environments with limited connectivity, strict access control, or hardware constraints at the edge. In all of these cases, the principles remain similar: versioning, reproducibility, monitoring, safe rollout, robust operation. But the implementation can look very different.
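The shared principles can stay environment-agnostic even when the implementation differs. As a minimal sketch (all names here are illustrative, not any specific MLOps framework), versioning and reproducibility can be as simple as pinning a model and its training data by content hash in a registry record that travels with the artifact, whether that artifact lives in a cloud bucket or on an air-gapped on-premise server:

```python
# Minimal, environment-agnostic versioning sketch: record enough
# metadata (content hashes plus config) to identify exactly which
# model was trained on which data, regardless of where it is deployed.
import hashlib
import json

def fingerprint(data: bytes) -> str:
    """Short content hash used to pin model weights and training data."""
    return hashlib.sha256(data).hexdigest()[:12]

def registry_entry(model_bytes: bytes, data_bytes: bytes, config: dict) -> dict:
    """Build a registry record that travels with the model artifact."""
    return {
        "model_hash": fingerprint(model_bytes),
        "data_hash": fingerprint(data_bytes),
        "config": config,
    }

entry = registry_entry(b"model-weights", b"training-data", {"lr": 0.001})
print(json.dumps(entry, indent=2))
```

Whether this record ends up in a managed cloud registry or a plain file next to the weights on a factory-floor machine is an implementation detail; the principle, knowing exactly what was deployed and how to reproduce it, is the same.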
Concluding thought
February was short, but not empty. As with every other month of the year, there are plenty of lessons to learn:
- progress in ML often depends on exchange with others, not just solitary thinking,
- documentation matters most exactly when you think you will not need it,
- and MLOps only becomes valuable when it is adapted to the actual environment.
I bet that next month will bring another set of those lessons. Not necessarily flashy ones, but the quiet “oh, yes, that’s probably a good way to do it” lessons that shape daily work.

