Anthropic currently has, hands down, the best AI models for coding and knowledge work. But its AI harnesses (the ‘Harness’), Claude Code and Claude Cowork, are fundamentally handicapped by being closed-source software.
The future of the AI harness is open-source because only then is the Harness User-First. A User-First Future is the logical endpoint for AI technology. Anything else is a losing bet.
What makes Anthropic’s models so effective, and a Harness like Claude Code so useful, is that they enable users to solve a wide variety of coding and non-coding problems, radically improving their knowledge-work capabilities. The Harness and AI models that help the User solve their problems are the ones users will use.
A closed-source Harness is a paradox: it prevents users from using AI to solve the problems they encounter with the Harness itself.
Need a feature Claude Code or Cowork doesn’t have in order to solve your problem? You can pray the Corporate Gods will hear your plea, buried beneath thousands of comments, criticisms, praises, and groupthink platitudes; beneath the insulated echo chamber that is corporate culture and structure; beneath the mass of AI bots that quip and comment on X.com and everywhere else. You can spend weeks and months trying to ‘get heard by leadership,’ who might eventually decide to develop the feature you need, or something like it that works ‘well enough’ to partially meet your needs.
But if history has shown users anything, it’s not to count on a large company hearing one user out of a mass of thousands or millions, let alone being responsive enough to deliver a timely solution.
Even if a user succeeds once in getting heard and getting a fix, what about the next time?
A User-First AI Harness is an Interdisciplinary Development Environment And Will Outperform Traditional Development Teams
The cool thing about Claude Code is that it lets a non-coder build software, and a coder build faster, bigger, and better than ever before. The advent of the non-coding developer working in collaboration with the experienced coding developer is a new and confusing experience, but it is also thrilling and rewarding.
It enables interdisciplinary software development that leverages the education, experience, and expertise of people from a wide range of disciplines, with a wider range of sub-domains in each discipline, with the AI acting as a universal translation layer facilitating collaboration among everyone. The non-coder can’t write or read code, but their AI agents can, and can describe coding issues as code to coders and as natural language to non-coders.
The products of this interdisciplinary collaboration are being integrated into the development stacks of open-source software projects today. What’s starting now is a deluge of new ideas to spur innovation and drive development forward. A deluge that traditional development teams have no intake process to ingest; it needs an outlet, and it will find it in open-source projects.
When software doesn’t do what you need for your work, you can now build what you need. No pleading with a software-as-a-service company to fix their product. No trying to convince a company that it can make money developing a product that helps you do what’s best for you and society as a whole. You ask once, and if they don’t deliver a solution, you move on by building what is needed and ditching the SaaS that was holding you back. SaaS is not dead, but closed-source SaaS operated by companies that are unresponsive to user needs, or that prioritize their bottom line over what is best for users, soon will be.
“Lead, follow, or get out of the way” – The End of Unresponsive Closed-Source SaaS
Users will not wait weeks or months for maybe-it-gets-fixed when they can fix it in hours or days using an AI Harness, publish their code, and connect with a community of users who are also working to solve the same problems, and coordinate their efforts using AI agents that help orchestrate the entire project. Why pay a company to develop and own your stack and prioritize their bottom-line over your needs when you and your peers can invest in your own future to do your work, your way, to your benefit?
Claude Code is a paradox. It helps you build anything you need, even a replacement for Claude Code, but will not let you build a better Claude Code.
That’s why an open-source AI harness isn’t just better; it is the logical endpoint for all AI harnesses for coding and non-coding work. The purpose of the Harness is to help the User solve Problems. It cannot do that effectively when the user has no meaningful, controlling interest in the Harness.
I think some companies believe they’ll use AI to create a closed-source loop where user feedback gets sent to an AI system that manages their AI harness and develops features on users’ behalf. But it’s the users’ ideas, their designs, their problems, and the solutions they developed; it’s the one thing they have innate ownership of. It’s the thing users can do that AI cannot: being human, with human problems that need fixing. It’ll be our primary contribution in an AI-dominated world. If that is taken away from users, what are we left to derive purpose from?
A Future Without a User-Owned Harness Forfeits Innovation and Competitive Advantage
A user finds a problem and works with AI to fix it, but the company that owns the Harness claims full ownership and control of the solution. Users end up paying a company to develop fixes that the users themselves created. This will feel as awful as it sounds for a lot of users, resulting in a frustrating, adversarial user experience.
The user essentially built it, but has no ownership or control. The company can terminate their account on a whim (the common ‘convenience’ clause in a ToS) and take away all their hard work. From a user’s perspective, investing in such an ecosystem is unwise.
So even if companies create a way for users to customize the closed-source Harness with AI, it makes no sense for users to develop the company’s product for them and, in return, endure platform insecurity and restrictions that compromise how well they can solve problems.
Using AI with an open-source Harness like OpenCode lets users modify OpenCode to build the features they need, and users collectively own that open-source code. It’s User-First by default. If Anomaly, the company behind OpenCode, decides they don’t want you as a user, you fork the code and keep building and solving problems. It’s your AI Harness, optimized to solve your problems from start to finish, and it works as well as you and other users make it work. With a User-First open-source Harness, you own your future.
Anthropic’s Opportunity to Be King in the User-First Future
I think Anthropic has an opportunity it is missing out on. If Anthropic optimized its AI models to work as well as they can on all platforms, not just in its closed-source Harnesses, then Anthropic models would be the best models to use everywhere.
An example:
Anthropic could create auto-compaction logic for OpenCode that is optimized for Anthropic models and submit it as a PR. It would improve results for all AI models with limited context windows, but users would notice how well it works with Anthropic’s models, and be inclined to optimize OpenCode further for use with Anthropic models.
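To make the idea concrete, here is a minimal sketch of what auto-compaction logic can look like: once a conversation’s estimated token count exceeds a budget, the oldest messages are collapsed into a single summary message while recent messages stay verbatim. All names here (`Message`, `compact`, `estimate_tokens`) are hypothetical illustrations, not OpenCode’s or Anthropic’s actual API; a real implementation would use a real tokenizer and ask the model itself to write the summary.

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str     # e.g. "user", "assistant", "system"
    content: str

def estimate_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return max(1, len(text) // 4)

def compact(history: list[Message], budget: int, keep_recent: int = 4) -> list[Message]:
    """Collapse the oldest messages into one summary message once the
    estimated token count of the history exceeds the budget."""
    total = sum(estimate_tokens(m.content) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history  # nothing to do: under budget or too short to split
    old, recent = history[:-keep_recent], history[-keep_recent:]
    # A real implementation would have the model summarize `old`;
    # truncating each message is a placeholder for that step.
    summary = " | ".join(m.content[:40] for m in old)
    return [Message("system", f"[compacted context] {summary}")] + recent
```

The interesting design space, and where model-specific optimization would matter, is in how the summary is produced and which messages are deemed safe to collapse.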
Anthropic could even create its own fork of OpenCode with a wide array of Anthropic-specific optimizations, and effectively wrest control of OpenCode as a project from Anomaly; may the best fork win. The OpenCode community decides what they will use and actively develop for: a mass of people with their own customized AI workflows actively improving the features Anthropic published. Anthropic could have thousands of users paying Anthropic hand-over-fist in token usage to develop a Harness that is optimized for Anthropic models, making Claude the de facto choice for coding and knowledge work everywhere.
The User-First optimized Harness is what is stickiest at retaining users. For an AI model to be sticky, it needs to be the best model to use in the User-First Harness. That is what AI companies need to be competing for; that is first place, the position of greatest stability and control.
Right now Anthropic has created a lot of friction for users trying to use Claude in OpenCode. The problems start with enforcing terms of service to prevent Claude Code MAX subscription plans from working in OpenCode, but extend to other issues that leave Claude models less performant than they could be in OpenCode. The OpenCode community expends effort on workarounds to attain core function instead of building optimizations.
I think competitors to Anthropic will soon produce closed-source and open-source AI models that match Anthropic’s well enough that a Harness optimized for those models will outperform Anthropic’s models on real-world problem solving, even though Anthropic’s models test better in standardized benchmarks.
The User-First Future using an open-source Harness doesn’t need the most cutting-edge model. It just needs a model that is good enough and that the Harness can be optimized for. That less-smart, technically inferior model will end up performing better than the latest and greatest, because the Harness is optimized by users for solving user problems.
I like Anthropic’s models and hope to see Anthropic succeed in staying on top. But I don’t see a future where it succeeds while trying to limit its users to closed-source AI Harnesses.
I think Anthropic could use this opportunity to pivot and dominate the User-First Harness frontier. But its window is closing. And once closed, the friction of opening it again will be far greater than any friction it can create for users wanting to transition out of a closed-source AI Harness into one that lets them solve problems most effectively.
User Problems Control User Tool Use and AI Harness Design
AI is a crucible; the Future arrives, it doesn’t ask for permission. But what we build influences what future arrives and how brutal the crucible is. Build well or suffer is the inescapable conclusion that logic demands.
I hope the User-First Future arrives soon, so that I can more effectively work the problems I need to solve in order to enforce the laws that Defend The Disabled from the systemic neglect, abuse, exploitation, and fraud occurring within the Medicaid program and healthcare system. Doing what is required to defend one’s human rights is something we can all agree is a good use case for AI. Unfortunately, I cannot at this time do that work in a closed-source Harness. They’re not built to do the work I need to do, and they do not let me build them to do it. Purely out of necessity, I use and develop for OpenCode. To have human rights, I need a User-First Harness.
This is why it’s so important to help Users solve Problems; why a User-First Harness is the logical endpoint. The problems users face are the controlling variable. They dictate not simply what tools they will use, but what tools can be used, and by proxy, the outcomes that users must live with. I Will Not Live In A World Where I Do Not Have Human Rights. If you try to build such a world, I will tear it down. Build User-First to Secure a Future Worth Living In and to Secure Your Competitive Advantage in the AI Industry.