I've been thinking a bit about what agentive (agentic?) technologies actually are and how they might differ from automations. When I was asked for "real cases of people using agents," it was really hard for me to feel comfortable with any particular answer.
When I end up on a page like this one, it's just a bunch of bullshit. These are applications and products, sure, but not agents.
If that is the case, is there anything that is agentive right now? A few examples:
AlphaGo Zero - not an agent, because it doesn't go around trying to play people; it is an automation you use when you want to play Go. Even if its moves weren't understood at first, that was an interpretability or explainability problem, not an agentive one.
Roomba - it feels more agentive because it will take its own path, recharge when necessary, sometimes escape, and is pretty scared of dog shit.
Slack bots - these may be agentive because they are just there doing things. But how often do they start a conversation in a way that goes beyond a heuristic-based automation? (There's a toy sketch of the difference after this list.)
Autonomous vehicles - they feel agentive when they aren't being taken over by a teleoperator. Even the person inside is not really in control, other than maybe a big "stop now" button.
BabyAGI and other frameworks - sure, maybe? But how often do they crash or go wildly divergent?
Or sandboxed agents - they feel like agents, but limited in their agency. They seem to be stuck in sandboxes of fake towns or hospitals right now.
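
To make the distinction I keep circling concrete, here's a minimal toy sketch in Python. Nothing in it is real Slack or BabyAGI code; names like `heuristic_bot`, `plan_next_tasks`, and `agent_loop` are made up for illustration. The first bot only ever reacts when a hard-coded trigger matches, which is what I mean by an automation based on heuristics. The second keeps a goal and invents its own next tasks, which is both what makes it feel agentive and exactly why these loops crash or wander off.

```python
# --- Automation: a heuristic-triggered bot. ---
# It only ever reacts to an incoming message against a fixed rule table.
HEURISTICS = {
    "deploy failed": "Paging on-call and rolling back.",
    "standup": "Reminder: standup starts in 10 minutes.",
}

def heuristic_bot(message: str) -> str | None:
    """Reply if a known trigger appears; otherwise stay silent forever."""
    for trigger, reply in HEURISTICS.items():
        if trigger in message.lower():
            return reply
    return None  # never initiates anything on its own

# --- Agent-ish: a BabyAGI-style task loop (toy version). ---
# It keeps a goal, generates its own follow-up tasks, and decides what to do
# next, which is also exactly how it can wander off or never terminate.
def plan_next_tasks(goal: str, last_result: str) -> list[str]:
    """Stand-in for an LLM planner; here it just invents a follow-up task."""
    return [f"investigate '{last_result}' further for goal: {goal}"]

def execute(task: str) -> str:
    """Stand-in for tool use; returns a pretend result."""
    return f"result of ({task})"

def agent_loop(goal: str, max_steps: int = 5) -> None:
    tasks = [f"make a first plan for: {goal}"]
    for step in range(max_steps):  # without this cap it may never stop
        if not tasks:
            break
        task = tasks.pop(0)
        result = execute(task)
        tasks.extend(plan_next_tasks(goal, result))  # the loop feeds itself
        print(f"step {step}: {task!r} -> {result!r}")

if __name__ == "__main__":
    print(heuristic_bot("the deploy failed again"))  # reacts once, then goes quiet
    agent_loop("keep the channel healthy")           # keeps generating its own work
```

The only thing stopping the second one is an arbitrary step cap, which says something about why these loops feel closer to agents and also why they're hard to trust out in the wild.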
I'm worried I'm making a "no true Scotsman" argument here.
It feels like a real agent would probably do something you might not expect and possibly not be in "alignment" with you. That seems like a real issue if we want agents to be out in the wild.
Re: the no true Scotsman worry... the categories are changing so quickly that the problem gets redefined before the last answer has even fully rendered. It's like stacking simulations?