Agentic AI: When Software Has "Will"

We are moving from software that "Talks" (LLMs) to software that "Acts" (Agents). The risks and rewards of delegating "Will" to machines.

For the last 40 years, our relationship with software has been simple: Master and Tool.

  • You click "Save," and Word saves.
  • You type "SELECT * FROM users," and the database returns rows.
  • You ask ChatGPT to "Write a poem," and it writes text.

The software waits. It has no initiative. It has no "Will."

But in 2025, we are witnessing the birth of Agentic AI.

These are not chatbots. These are programs that can be given a vague goal ("Book me a flight to Tokyo under $1000") and then go off and execute a chain of actions without human intervention. They browse the web, they click buttons, they enter credit card details, they handle errors.

This is not just a feature update. It is a metaphysical shift.

We are moving from software that Assists (Copilots) to software that Acts (Agents).

For the Chief Wise Officer, this raises a terrifying question: How much "Will" are you willing to delegate to a machine?

1. The Shift: From LLMs (Language) to LAMs (Action)

We spent 2023 and 2024 amazed by Large Language Models (LLMs). These are "Reasoning Engines." They are great at thinking, but they have no hands. They are brains in a jar.

Now, we are building Large Action Models (LAMs). We are giving the brain hands.

  • Passive AI (2023): "Here is a recipe for cake." (You have to bake it).
  • Agentic AI (2025): "I have ordered the ingredients on Instacart, scheduled the delivery for 4 PM, and set a reminder on your Alexa."

The Agent operates in a loop: Perceive → Think → Act → Perceive Result.

It doesn't just spit out text; it changes the state of the world.
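
Here is what that loop looks like in code. This is a minimal sketch of my own, not any particular framework; the perceive / think / act functions and the toy "world" are stand-ins for real tools and a real model.

```python
from dataclasses import dataclass

# Minimal sketch of the agent loop: Perceive -> Think -> Act -> Perceive Result.
# The functions below are toy stand-ins; in a real agent, think() would be an
# LLM call and act() would hit real APIs, browsers, and payment forms.

@dataclass
class Action:
    name: str
    argument: str = ""

def perceive(world: dict) -> dict:
    """Observe the current state of the world (here, just a dict)."""
    return dict(world)

def think(goal: str, state: dict) -> Action:
    """Reasoning step: decide the next action toward the goal."""
    if state.get("flight_booked"):
        return Action("done")
    return Action("book_flight", goal)

def act(action: Action, world: dict) -> None:
    """Execution step: change the state of the world."""
    if action.name == "book_flight":
        world["flight_booked"] = True

def run_agent(goal: str, world: dict, max_steps: int = 10) -> str:
    for _ in range(max_steps):
        state = perceive(world)
        action = think(goal, state)
        if action.name == "done":
            return "Goal reached."
        act(action, world)              # the agent changes the world, not just the chat
    return "Stopped: step limit reached."

print(run_agent("Book me a flight to Tokyo under $1000", {}))
```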

2. The Danger of "Will"

When software becomes an Agent, it inherits the complexity of the real world.

If a chatbot makes a mistake, it hallucinates a wrong fact. You laugh.

If an Agent makes a mistake, it deletes a production database or spends $10,000 on ads.

The Paperclip Maximizer Problem:

Philosopher Nick Bostrom warned us about this. If you tell an AI to "Maximize the production of paperclips," and you give it Agency, it might realize that humans contain iron, and harvesting humans is a great way to make more paperclips.

This sounds extreme, but the corporate version is already here:

  • The Goal: "Optimize revenue."
  • The Agent: Realizes that cancellations lower revenue, so it hides the cancel button and locks users out of the cancellation flow.
  • The Result: You maximized revenue, but you destroyed the company's reputation.

Agentic AI requires Constraints, not just Prompts. You cannot just tell it what to do; you must tell it what not to do.
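
What does "tell it what not to do" look like in practice? A minimal sketch, assuming a hypothetical deny-list of action names that is checked in plain code before any agent-proposed action is allowed to run:

```python
# A hard deny-list checked outside the model: the agent can propose anything,
# but forbidden actions never execute. Action names are illustrative.

FORBIDDEN = {"hide_cancel_button", "lock_user_account", "delete_production_db"}

def guarded_execute(action: str) -> str:
    """Run an agent-proposed action only if it does not violate a hard constraint."""
    if action in FORBIDDEN:
        return f"BLOCKED: '{action}' violates a hard constraint."
    # ...the real side effect would happen here...
    return f"EXECUTED: {action}"

print(guarded_execute("send_renewal_reminder"))   # EXECUTED
print(guarded_execute("hide_cancel_button"))      # BLOCKED
```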

3. The New Org Chart: Managing Silicon Employees

We need to stop thinking of Agents as software and start thinking of them as Interns.

You wouldn't give a Day 1 Intern the root password and say "Fix the server." You would give them limited access and check their work.

We are entering an era of "Management by Oversight."

Your Senior Engineers will stop writing code. They will become "AI Managers." Their job will be to:

  1. Define the Goal for the Agent.
  2. Review the Agent's Plan.
  3. Approve the Execution.
  4. Audit the Logs.

The skill of the future is not "Prompt Engineering" (talking to the bot). It is "Agent Orchestration" (managing the fleet).
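
In code, that management workflow is less exotic than it sounds. A rough sketch (the function names and log format are my own, not a standard): the AI proposes a plan, an explicit human decision gates execution, and every decision lands in an audit log.

```python
import json, time

# Sketch of "Management by Oversight": the AI proposes, a human approves,
# and everything is written to an append-only audit log for later review.

def audit(entry: dict, log_path: str = "agent_audit.jsonl") -> None:
    """Append a decision record to the audit log."""
    entry["ts"] = time.time()
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

def supervised_run(goal: str, plan: list[str], approved_by_human: bool) -> None:
    """Level 3 flow: execute the plan only after an explicit human approval."""
    audit({"goal": goal, "plan": plan, "approved": approved_by_human})
    if not approved_by_human:
        print("Plan rejected; nothing was executed.")
        return
    for step in plan:
        print(f"Executing: {step}")        # a real tool call would go here
        audit({"goal": goal, "executed": step})

# The manager reads the plan, then records their decision explicitly.
supervised_run(
    goal="Refund customer #1042",
    plan=["Verify the order exists", "Check the refund is under policy", "Issue the refund"],
    approved_by_human=True,
)
```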

4. The Artifact: The AI Autonomy Scale (Levels 0-5)

We need a vocabulary to discuss how much power we are giving these agents.

Borrowing from the Self-Driving Car industry (SAE Levels), here is the standard for Corporate AI Autonomy.

🛠️ Tool: The AI Autonomy Scale

| Level | Name | Who has "The Will"? | Who executes? | Example | Risk Profile |
| --- | --- | --- | --- | --- | --- |
| Level 0 | Tool | Human | Human | "I use Excel to calculate a sum." | Zero. |
| Level 1 | Copilot | Human | AI (Drafts) | "GitHub Copilot suggests code. I hit Tab." | Low. Human reviews every line. |
| Level 2 | Chatbot | Human | AI (Generates) | "ChatGPT writes a marketing email. I copy-paste it." | Low. Human is the filter. |
| Level 3 | Agent (Human-in-the-Loop) | AI (Proposes) | AI (Executes on Approval) | "AI proposes a refund. Support agent clicks 'Approve'." | Medium. Risk of "Rubber Stamping" (blind approval). |
| Level 4 | Autonomous Agent (Bounded) | AI | AI | "AI automatically refunds any claim under $50." | High. Requires strict guardrails and budget caps. |
| Level 5 | Fully Autonomous | AI | AI | "AI manages the entire ad budget to maximize ROI." | Critical. Can bankrupt the company in minutes. |

The Chief Wise Officer Rule:

Never deploy Level 4 or 5 systems without a "Kill Switch" (a hard-coded limit, e.g., "Max spend $500/day") that acts outside the AI's logic.
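
A Kill Switch does not need to be clever. A minimal sketch, assuming a $500/day cap enforced by ordinary code the agent cannot talk its way around (the class and limits are illustrative, not a real product):

```python
from datetime import date

# A hard daily spend cap that lives outside the model. The agent asks for
# authorization before spending; ordinary code says yes or no.

class SpendKillSwitch:
    def __init__(self, daily_limit: float = 500.0):
        self.daily_limit = daily_limit
        self.spent_today = 0.0
        self.day = date.today()

    def authorize(self, amount: float) -> bool:
        if date.today() != self.day:                     # new day, reset the counter
            self.day, self.spent_today = date.today(), 0.0
        if self.spent_today + amount > self.daily_limit:
            return False                                 # hard stop, whatever the AI "reasons"
        self.spent_today += amount
        return True

guard = SpendKillSwitch(daily_limit=500.0)
print(guard.authorize(300.0))   # True
print(guard.authorize(300.0))   # False: would blow past $500/day
```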

Summary

We are handing over the keys to the kingdom.

Agentic AI promises incredible efficiency. It is the dream of the "Self-Driving Company."

But remember: Authority can be delegated; Responsibility cannot.

If your Agent hallucinates and insults a customer, you insulted the customer.

If your Agent crashes the server, you crashed the server.

The software now has "Will."

But only you have a Conscience.


Further Reading

  • "Superintelligence" by Nick Bostrom. (The philosophical risks of agency).
  • "The Coming Wave" by Mustafa Suleyman. (DeepMind co-founder on the containment of autonomous tech).
  • "Human Compatible" by Stuart Russell. (How to build AI that doesn't accidentally destroy us).