So to push this progress along as fast as possible, human intervention needs to be kept to an absolute minimum. Because let's be honest, humans aren't the most efficient beings in this universe; we created machines to make up for our weaknesses. So no hard feelings there :)... Anyway, for agents to progress quickly, they need very smooth communication with each other. The better the articulation in their communication, the faster the iteration of progress. Now, I'm not claiming to have found the breakthrough for this problem. But I have made a small start at answering the question.
For humans, language alone isn't enough; the speaker also needs the right listener to convey their message correctly. I mean, you cannot ask a chef to cut your hair & expect him to do it. No, a chef cooks food & a barber cuts hair. Similarly, in the agentic space, agents need to know the difference between agents so they can pick the one they actually want. So to contribute a little to this great journey of agentic development, I made "sockridge". It's open source & contributes to this small space of agentic communication. You can check it out @ https://github.com/Sockridge/sockridge. Let's share our views & ask the right questions for the greater good!
One thing worth considering as this scales: the faster agents communicate and iterate without human intervention, the more governance infrastructure you need to keep pace. Discovery tells Agent A that Agent B is a "barber" (to use your analogy), but nothing validates that Agent B is actually competent, trustworthy, or operating within acceptable boundaries.
In practice, the bottleneck for autonomous agent progress isn't communication speed. It's trust verification. How does Agent A know that Agent B's self-advertised capabilities are real? How does it know Agent B won't leak data from the task to a third party? How does Agent A evaluate whether Agent B's output is correct without being an expert in Agent B's domain?
These aren't future problems. A2A's Agent Card mechanism already has this gap: agents self-describe their capabilities in JSON, and consuming agents have no protocol-level way to verify those claims. Signing is optional. Capability attestation doesn't exist.
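To make the gap concrete, here's a toy sketch of the difference between a self-described card and an attested one. Everything here is hypothetical: the card fields, the "trusted registry", and the HMAC (which stands in for a real signature scheme) are illustrative assumptions, not part of the A2A spec.

```python
import hashlib
import hmac
import json

# A hypothetical A2A-style Agent Card: purely self-described JSON.
# Nothing at the protocol level verifies these claims; any agent
# can publish any capability list it likes.
agent_card = {
    "name": "barber-agent",
    "capabilities": ["haircut.schedule", "haircut.style_advice"],
}

# One possible attestation sketch: an assumed trusted registry signs the
# card. HMAC over a canonical JSON encoding stands in for a real
# public-key signature here, purely to keep the example self-contained.
REGISTRY_KEY = b"hypothetical-registry-secret"

def sign_card(card: dict, key: bytes) -> str:
    """Sign a canonical (sorted-keys) JSON encoding of the card."""
    payload = json.dumps(card, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_card(card: dict, signature: str, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_card(card, key), signature)

signature = sign_card(agent_card, REGISTRY_KEY)
assert verify_card(agent_card, signature, REGISTRY_KEY)

# If the agent inflates its self-described capabilities after
# attestation, the signature no longer checks out.
tampered = dict(agent_card, capabilities=["surgery.perform"])
assert not verify_card(tampered, signature, REGISTRY_KEY)
```

Even this toy version shows why signing being optional matters: without the registry's signature, the consuming agent has nothing to check, and the tampered card is indistinguishable from an honest one.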
The "minimum human intervention" goal is appealing, but the path there probably runs through better trust infrastructure, not just better communication pipes.