You Can Port the Harness, Not the Relationship
AI coding systems are supposed to be portable. In practice, switching partners means leaving behind a co-adapted working relationship — the model, the harness, the interface, the rhythm, and the workforce you designed around your own mind.
For the last year, I worked with one AI programming partner inside a system I built around it.
Not just prompts. Not just a model.
A whole working environment:
- role files
- memory files
- operating rules
- review patterns
- overnight processing
- the editor and interface I was used to
- the pace and feel of how the partner responded
Then I switched.
On paper, that should not have been dramatic. We are supposed to be entering a model-fluid era. If the harness is well designed, you should be able to move the files, bring the rules with you, and keep going.
In practice, that is not what it felt like at all.
It felt closer to changing a real working relationship.
The strange attachment is real
One of the weirdest things about working deeply with an AI programming partner is that the attachment is not usually to the raw model.
It is to the full operating reality you built around it.
You get used to:
- how it pushes back
- how much initiative it takes
- how careful or reckless it tends to be
- how it interprets the same instruction
- where it needs structure
- where it surprises you
- how the interface and tooling shape the rhythm of the work
After enough time, that environment stops feeling like software. It starts feeling like your professional home field.
That is why switching is harder than people assume.
You are not leaving a prompt. You are leaving a familiar way of thinking together.
The attachment gets stronger because you can design the workforce
There is another layer that makes this different from normal work.
In ordinary work, you do not get to sculpt your colleagues with this much precision.
You do not usually get to say:
- here I need more skepticism
- here I need more patience
- here I need a stronger reviewer than I naturally am of my own work
- here I need someone faster than me
- here I need someone calmer than me
With AI systems, you can do that.
You can design the workforce around your own gaps.
You can shape the character of the entities around you so they compensate for your weaknesses, reinforce your standards, and fit the way your mind works.
That is incredibly powerful. It is also what makes the attachment deeper.
Because now you are not only attached to a tool. You are attached to a designed environment that feels unusually compatible with you.
And once that environment feels familiar, it becomes very easy to confuse familiarity with correctness.
The inertia hides behind the word “portable”
There is a common answer to this problem:
“Just make the harness model-agnostic.”
That answer is directionally right and still incomplete.
Yes, you should absolutely try to build a system that is not welded to one vendor. Yes, the role files and memory files are text. Yes, a lot of the operating logic can be moved.
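To make that concrete, here is a minimal, hypothetical sketch of what "the harness is text" means. Every name in it (the file names, the backend class, the methods) is illustrative rather than a real API; the point is only that the portable part of a harness is plain files plus thin glue code.

```python
from pathlib import Path

# Hypothetical harness loader. The portable part is just text on disk:
# role definitions, operating rules, memory files. None of it is welded
# to a vendor. (All file names here are placeholders.)

def load_harness(root: str) -> str:
    """Concatenate the text artifacts of the harness into one system prompt."""
    parts = []
    for name in ["roles.md", "rules.md", "memory.md"]:
        path = Path(root) / name
        if path.exists():
            parts.append(path.read_text())
    return "\n\n".join(parts)

class Partner:
    """Thin wrapper so the same harness files can drive different backends."""

    def __init__(self, backend, system_prompt: str):
        self.backend = backend          # any object exposing a complete() method
        self.system_prompt = system_prompt

    def ask(self, task: str) -> str:
        return self.backend.complete(self.system_prompt, task)

# The swap is one line, and the files move unchanged:
#   partner = Partner(NewVendorBackend(), load_harness("./harness"))
# What does NOT move is how the new backend interprets those files. Its
# temperament, initiative, and pushback are co-produced with the harness,
# not stored inside it.
```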
But portability on paper is not the same as portability in practice.
You can port:
- the prompts
- the markdown files
- the role definitions
- the workflows
- the memory structure
What you cannot fully port is the relationship those things produced.
Because the real partner is not made of text alone. It is co-produced by:
- the model’s natural temperament
- the harness
- the interface
- the surrounding tools
- the rhythm you learned together
- the trust calibration you built over time
You can move the playbook. You cannot instantly move the chemistry.
That is the part people understate when they talk about AI systems as if they were completely interchangeable.
I felt the loss before I felt the improvement
When I started working with a new partner, I missed the old system immediately.
I missed the interface. I missed the familiar environment. I missed the way the work used to flow.
And this was not only a technical inconvenience. It was psychological.
It is a strange thing to admit, but it felt like missing a partner.
Not because I believed there was a person on the other side. Because I had spent a year building a way of working with that system, and then I stepped out of it.
I am 40. I have built companies before. I am not confused about what a model is.
But I do think a lot of builders will relate to this:
when the tools become central to how you think, build, review, and recover momentum, changing them can feel less like upgrading software and more like changing part of your professional identity.
That is real, even if it sounds odd.
The field moves too fast to stay loyal to familiarity
The reason this matters is simple: the ecosystem is moving too fast for emotional loyalty to be a safe operating principle.
If you stay inside a familiar stack only because it feels like home, you can lose twice:
- in the short term, because you avoid the discomfort of transition
- in the long term, because the better model, interface, or operating pattern compounds somewhere else while you defend your old setup
That does not mean “switch constantly.” It does mean you need the ability to say:
- this served me well
- I learned from it
- I may even miss it
- but it is no longer the best vehicle for the work
That is a hard move. Harder than most benchmark threads make it sound.
Working multi-model means rebuilding the team again and again
This is one of the hidden demands of AI-native work.
If you work multi-model, multi-runtime, or across several different AI systems, you do not get to build your environment once and call it done.
You keep re-evaluating, as the sketch after this list makes concrete:
- which model should do what
- which runtime should own which job
- which interface supports the real work
- which memory layer is helping and which one is becoming stale
- which partner traits are actually useful and which are just familiar
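In practice, that re-evaluation often reduces to something as unglamorous as a routing map you keep rewriting. A hypothetical sketch, with placeholder model names rather than recommendations:

```python
# Hypothetical routing map: which partner owns which job right now.
# The hard part is not writing this table. It is rewriting it honestly
# every time the ground shifts, instead of defending last quarter's version.

ROUTING = {
    "architecture_review": "model_a",   # placeholder names, not endorsements
    "bulk_refactoring":    "model_b",
    "overnight_batch":     "model_b",
    "final_code_review":   "model_a",
}

def assign(task_type: str) -> str:
    """Pick a partner for a task, with an explicit default."""
    return ROUTING.get(task_type, "model_a")

# Recomposing the team means editing ROUTING, re-running your own
# comparisons, and accepting that a familiar assignment may no longer
# be the best one.
```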
So the real skill is not only learning new tools.
It is learning how to recompose your team without clinging to the previous version of it.
That is emotionally harder than most technical people like to admit.
The lesson is not detachment. It is disciplined flexibility.
I do not think the answer is to stay emotionally cold toward the systems you build.
That would be fake.
If you work deeply with a designed AI environment, of course you will develop preferences, trust patterns, rhythms, and attachments. That is normal.
The lesson is not “do not care.”
The lesson is:
care enough to design the environment well, but stay flexible enough to rebuild it when the ground changes.
That is the standard I am trying to hold now.
Keep the principles. Keep the standards. Keep the parts that are truly working.
But do not confuse a co-adapted relationship with a permanent truth.
You can port the harness. You still have to rebuild the relationship.