The shared context angle is interesting — pair programming but with AI agents instead of a shared editor. The WebSocket relay approach keeps it simple since the host machine does all the compute.
Curious about the practical workflow. When two people send conflicting steering prompts, does the agent get confused by contradictory instructions? Or is there an implicit turn-taking protocol?
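One way I could imagine handling it — purely a sketch of my guess, not your actual protocol — is to serialize steering prompts through a FIFO queue with attribution, so the agent consumes one ordered, attributed stream instead of racing instructions (names like `SteeringRelay` here are hypothetical):

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class SteeringRelay:
    """Hypothetical relay: serializes steering prompts from multiple
    participants so the agent sees one instruction at a time, in order."""
    queue: asyncio.Queue = field(default_factory=asyncio.Queue)

    async def steer(self, user: str, prompt: str) -> None:
        # Prompts are enqueued rather than racing each other;
        # the agent drains them in arrival order.
        await self.queue.put((user, prompt))

    async def next_instruction(self) -> str:
        user, prompt = await self.queue.get()
        # Attribution lets the agent (or the host) notice when two
        # participants are pulling in different directions.
        return f"[{user}] {prompt}"

async def demo():
    relay = SteeringRelay()
    await relay.steer("alice", "use recursion")
    await relay.steer("bob", "actually, iterate instead")
    return [await relay.next_instruction() for _ in range(2)]

instructions = asyncio.run(demo())
print(instructions)
```

With attribution in the stream, "contradictory" prompts at least become visible as a conversation the agent can reconcile, rather than silent overwrites.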
Also wondering if you've seen cases where having a second person steering actually improves output quality vs. steering solo. The rubber-ducking analogy suggests it would — sometimes explaining to someone else what you want the agent to do clarifies your own thinking.