
I'm sure I'm not the only one who has long been curious about what former OpenAI CTO Mira Murati's new venture, Thinking Machines Lab, would bring to the table.
The answer, it turns out, is simple. Last week, Thinking Machines Lab finally unveiled its first product, Tinker. In a nutshell, it's a platform that lets developers fine-tune AI models while sparing them much of the effort and cost these complex tasks usually involve.
The core value of Tinker is that it significantly lowers the barrier to entry for high-performance model fine-tuning. Previously, fine-tuning an LLM required developers both to have deep AI expertise and to deal with complex infrastructure management, resource scheduling, and expensive hardware. Tinker's goal is to take most of that laborious work off their hands.
First, it allows developers to focus on experimentation and algorithms, not infrastructure. Tinker is a managed service that handles all the tedious backend work for developers, such as scheduling, resource allocation, and fault recovery. This enables developers to concentrate their energy where it matters most: algorithm design and data application, thereby accelerating the pace of innovation and experimentation.
Second is flexibility and control. Tinker provides a highly flexible API that gives developers deep control over their algorithms and data. Notably, this API also offers low-level commands like forward_backward and sample, allowing researchers and developers to conduct more fundamental and granular experiments.
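To make the idea concrete, here is a toy illustration of the loop structure that low-level primitives like forward_backward and sample enable. Only the two primitive names come from Tinker's description; the TinkerLikeClient class, its one-weight linear model, and all hyperparameters are illustrative assumptions, not Tinker's actual API.

```python
# Toy sketch: the developer writes an ordinary Python training loop against
# two low-level primitives; in a managed service, the heavy lifting behind
# them would run remotely. Everything below is an illustrative stand-in.
class TinkerLikeClient:
    """Minimal stand-in: a 1-D linear model y = w * x trained by SGD."""

    def __init__(self, lr=0.1):
        self.w = 0.0
        self.lr = lr

    def forward_backward(self, x, y):
        """One forward pass plus one gradient step; returns squared-error loss."""
        pred = self.w * x
        loss = (pred - y) ** 2
        grad = 2 * (pred - y) * x   # d(loss)/dw
        self.w -= self.lr * grad
        return loss

    def sample(self, x):
        """Generate a prediction from the current weights."""
        return self.w * x

client = TinkerLikeClient()
for step in range(50):
    loss = client.forward_backward(2.0, 6.0)  # fit toward the target y = 6

print(round(client.sample(2.0), 3))  # converges to 6.0
```

The point is the division of labor: the loop, the data, and the loss schedule stay in the developer's hands, while scheduling and hardware sit behind the primitives.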
A third, and perhaps most distinctive, aspect of Tinker is its claim to have built-in, state-of-the-art fine-tuning techniques, integrating methods like LoRA (Low-Rank Adaptation). Simply put, LoRA updates only a small, low-rank portion of a model instead of re-training the entire model at great expense. It's like upgrading a car by fitting better tires instead of buying a whole new vehicle.
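The LoRA idea above can be sketched in a few lines: freeze the pretrained weight matrix W and train only two small low-rank factors A and B, applying W + B @ A at inference. The dimensions and rank below are illustrative assumptions chosen to show the parameter savings.

```python
# Minimal LoRA sketch with NumPy: the full d x d matrix W stays frozen;
# only the low-rank factors A (r x d) and B (d x r) are trained.
import numpy as np

d, r = 1024, 8                            # model width vs. LoRA rank (assumed)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))           # frozen pretrained weights
A = rng.standard_normal((r, d)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection; starting at
                                          # zero means fine-tuning begins exactly
                                          # from the base model's behavior

def adapted_forward(x):
    # Base model output plus the low-rank correction B @ (A @ x)
    return W @ x + B @ (A @ x)

full_params = W.size                      # parameters a full update would touch
lora_params = A.size + B.size             # parameters LoRA actually trains
print(full_params, lora_params, full_params // lora_params)
# prints: 1048576 16384 64
```

At this (assumed) width and rank, LoRA trains 64x fewer parameters than a full update, which is the source of the cost savings the article describes.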
The emergence of Tinker also signifies two important trends in the future of AI.
The first is the democratization of model customization. As tools like Tinker become more accessible, having a high-performance model fine-tuned for specific needs—such as customer service for a particular industry or copywriting in a specific style—will no longer be the exclusive domain of large tech companies. Small and medium-sized enterprises, and even individual developers, will be able to create customized AI that better fits their unique requirements.
The second trend is the shift from "Model-as-a-Service" to "Fine-tuning-as-a-Service." In the past, developers primarily used general-purpose models provided by large model companies via APIs (Model-as-a-Service). Tinker's approach ushers in the era of "Fine-tuning-as-a-Service" (FaaS). This signifies a shift in focus from providing a single, massive model to offering a flexible platform where users can create countless customized models.
Finally, you might ask: don't the Big Tech companies have similar services?
Of course, they do; Big Tech wants a piece of everything.
Tech giants like Google, AWS, and Azure offer powerful and mature model fine-tuning services, but it's clear that Tinker is carefully carving out its market segment. The platforms from these giants are industrial-grade production lines built for "production environments." They prioritize stability, security, and deep integration with their own cloud ecosystems, offering an all-encompassing "toolbox" for enterprise users.
In contrast, Tinker is more like a precision "laboratory" built for researchers, top-tier developers, and startups. It focuses on open-source models, sacrificing some enterprise-grade integration convenience in exchange for maximum "experimental flexibility." By providing low-level commands like forward_backward, Tinker gives developers more granular control, aiming to be the preferred platform for driving algorithmic innovation rather than serving large-scale enterprise deployments.
AI infrastructure is becoming stratified. Big Tech provides the stable foundation and one-stop solutions for large enterprises. Meanwhile, companies like Thinking Machines Lab are building on top of this foundation to offer more targeted tools for specialized communities. This reflects the continued expansion and maturation of the AI field, as it's only when a domain becomes large enough that such niche markets, designed for the "experts' expert," can emerge.
As for whether such a product can justify Thinking Machines Lab's current stunning valuation, only time and the market will tell.