Managed Personal Intelligence

Notes on a category, forming.

Personal AI agents are arriving for the technically fluent. Their managed, mainstream form is not yet here — but it is coming, and it will demand something of the providers who build it, the platforms that distribute it, and the people who come to rely on it. These are notes on what that form might require.


Personal agents have arrived.
Access has not.

LLMs have changed the game. They remain unreliable for important work, but tethering them together into agentic networks is potentially the killer app that many are banking on. Google, Meta, Apple, OpenAI and others are preparing to grab as much market share as they can; OpenClaw and other open-source options will remain accessible only to the technically proficient. Whichever route prevails, Personal AI Agents — software acting on behalf of an individual — have moved from demonstration to deployment in the span of a year, and they are poised to disrupt the technology we know.

Today they are the province of the technically fluent. Tomorrow, history suggests, they will be run as managed services, paid for monthly, and used by hundreds of millions. That tomorrow has no name yet, no settled shape, and no consensus on what it ought to require.

It will need a name to gather around. The notes that follow use Managed Personal Intelligence.

Managed Personal Intelligence.

A new category of service that operates personal AI agents on behalf of an individual — abstracting their complexity, governing their behaviour, and standing accountable for their outcomes.

Managed cloud abstracted the server.
Managed databases abstracted the operation.
Managed personal intelligence abstracts the agent.

Where the difficulty lies.

Some of the questions raised by Managed Personal Intelligence are old questions in new clothes — the ones that should keep any consequential technology honest. Others are what makes it genuinely new: the patterns of interaction, attention and trust that have not had to be invented before.

  i.

    Safety

    How does a managed agent stay aligned with its user across years and contexts? What is the fail-safe behaviour when uncertainty rises, when instructions conflict, when the world shifts under a long-running plan? Who is accountable when an agent acts wrongly — the user, the operator, the model?

  ii.

    Privacy

    A personal agent must know its user. What data flows, where, for how long? What is the consent surface for a system that observes continuously and remembers selectively? What does deletion mean once a model has been shaped by you?

  iii.

    Cost

    If personal intelligence is to be a utility — like electricity, like search — its economics matter. What price makes it ordinary? What subsidises whom, and at what cost to attention or autonomy? Is there a public interest in keeping a competent agent within reach of every household?

  iv.

    Interaction

    The chat window will not be the final form. What replaces apps when the agent is the surface? How do we instruct, correct, and trust a system that acts while we are not watching? What new patterns — for attention, for permission, for living with information that arrives through an interpreter — will become as ordinary as the scrollbar?

  v.

    Society

    A technology this widespread reshapes more than the lives it touches directly. What becomes of a labour market when the small tasks of life are routinely done on someone's behalf — when whole categories of paid work, from assistants to advocates, exist to do-on-behalf-of? What of civic life, of public discourse, when citizens read, decide, and reply through an interpreter? And what becomes of shared experience once each of us is mediated by a different one?

Write.

These notes will be revised in public. If any of them resonate — or, more usefully, if some part of them strikes you as wrong — a reply is welcome.

Send a letter

hello@mpi.rocks