Many of the issues practitioners encountered when LLMs first burst onto the scene have become more manageable in the past couple of years. Poor reasoning and limited context-window size come to mind.
These days, models’ raw power is rarely a blocker. What remains a pain point, however, is our ability to extract meaningful outputs from LLMs in a cost- and time-effective way.
Previous Variable editions have devoted a lot of space to prompt engineering, which remains an essential tool for anyone working with LLMs. This week, though, we’re turning the spotlight on more recent approaches that aim to push our AI-powered workflows to the next level. Let’s dive in.
Beyond Prompting: The Power of Context Engineering
To learn how to create self-improving LLM workflows and structured playbooks, don’t miss Mariya Mansurova’s comprehensive guide. It traces the history of context engineering, unpacks the emerging role of agents, and bridges the theory-to-practice gap with a complete, hands-on example.
Understanding Vibe Proving
“After Vibe Coding,” argues Jacopo Tagliabue, “we seem to have entered the (very niche, but much cooler) era of Vibe Proving.” Learn all about the promise of robust LLM reasoning that follows a verifiable, step-by-step logic.
Automatic Prompt Optimization for Multimodal Vision Agents: A Self-Driving Car Example
Instead of leaving prompts entirely behind, Vincent Koc’s deep dive shows how to leverage agents to give prompting a substantial performance boost.
This Week’s Most-Read Stories
In case you missed them, here are the three articles that resonated the most with our readers in the past week.
The Great Data Closure: Why Databricks and Snowflake Are Hitting Their Ceiling, by Hugo Lu
Acquisitions, venture, and an increasingly competitive landscape all point to a market ceiling.
How to Maximize Claude Code Effectiveness, by Eivind Kjosbakken
Learn how to get the most out of agentic coding.
Cutting LLM Memory by 84%: A Deep Dive into Fused Kernels, by Ryan Pégoud
Why your final LLM layer is OOMing and how to fix it with a custom Triton kernel.
Other Recommended Reads
From data poisoning to topic modeling, we’ve selected some of our favorite recent articles, covering a wide range of topics, concepts, and tools.
- Do You Smell That? Hidden Technical Debt in AI Development, by Erika Gomes-Gonçalves
- Data Poisoning in Machine Learning: Why and How People Manipulate Training Data, by Stephanie Kirmer
- From RGB to Lab: Addressing Color Artifacts in AI Image Compositing, by Eric Chung
- Topic Modeling Techniques for 2026: Seeded Modeling, LLM Integration, and Data Summaries, by Petr Koráb, Martin Feldkircher, and Márton Kardos
- Why Human-Centered Data Analytics Matters More Than Ever, by Rashi Desai
Meet Our New Authors
We hope you take the time to explore excellent work from TDS contributors who recently joined our community:
- Gary Zavaleta looked at the built-in limitations of self-service analytics.
- Leigh Collier devoted her debut TDS article to the risks of using Google Trends in machine learning projects.
- Dan Yeaw walked us through the benefits of sharded indexing patterns for package management.
The last few months have produced strong results for participants in our Author Payment Program, so if you’re thinking about sending us an article, now’s as good a time as any!