AI that decides
what to work on.

Every AI agent today waits to be told what to do. We're building the ones that figure it out themselves.


The initiative gap.

The best AI tools in the world are brilliant at executing your instructions. They write code, generate content, analyse data, answer questions. They do exactly what you ask, often better than you expected.

They never do what you didn't ask.

Your codebase has a bug that's been silently degrading conversion for three days. Your onboarding flow has a step that 60% of users skip, and nobody's investigated why. Your fastest-growing user segment behaves completely differently from who you designed the product for.

Nobody on your team noticed any of this. Not because they're not good. Because they're busy. Humans have limited attention. We can only monitor so many dashboards, read so many logs, hold so many patterns in our heads at once.

Current AI doesn't help with this problem. It helps you do things faster. It doesn't help you notice what needs doing.

That gap, between execution and initiative, is what we're closing.

An agent that watches,
thinks, and ships.

We're solving the problem of volition: building AI systems that autonomously decide what to work on, not just how to work on it. This is an open research problem with no existing solution, and almost nobody with serious resources is working on it.

Our first product applies this research to the place startups feel the gap most: an autonomous PM agent. It connects to the systems your team already uses: analytics, error tracking, codebase, project management, user feedback. It sits across all of them and does the work that currently lives in nobody's head.

Observe.

Continuously monitors your analytics, error logs, user behaviour, and system health across every connected source.

Detect.

Identifies anomalies, patterns, and opportunities that no single data source would reveal. Not just "this metric went down" but "this metric went down for this segment, correlated with this deploy, connected to this support ticket pattern."

Investigate.

Autonomously digs deeper. Segments the data. Traces root causes across systems. Connects quantitative signals to qualitative patterns.

Hypothesise.

Forms a theory about what's happening and what should be done about it. Explains its reasoning transparently.

Build.

Writes the code. Implements the fix or the feature. Configures the A/B test. Runs it against your test suite.

Ship.

Opens a PR with full context: what it found, why it matters, what it built, and what it expects to happen.

Learn.

Measures the outcome. Updates its understanding. Feeds what it learned into the next cycle.

This loop runs continuously. While you sleep, while you're in meetings, while you're focused on strategy. It's not a tool you use. It's a colleague that works.
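The observe → detect → investigate → hypothesise → build → ship → learn cycle above can be sketched in miniature. This is a purely illustrative toy, not General Volition's actual architecture: the source/belief structures, the anomaly threshold, and the baseline-update rule are all assumptions invented for the example.

```python
# A minimal, hypothetical sketch of the loop described above.
# Nothing here is a real General Volition API; all names are illustrative.

def agent_cycle(sources, beliefs, threshold=2.0):
    """Run one pass of the loop; return proposed actions and updated beliefs."""
    actions = []
    for name, poll in sources.items():
        value = poll()                                   # Observe
        baseline = beliefs.get(name, value)
        if abs(value - baseline) > threshold:            # Detect
            # Investigate / Hypothesise: form a theory with its evidence
            hypothesis = f"{name} moved from {baseline:.1f} to {value:.1f}"
            actions.append({"metric": name,
                            "hypothesis": hypothesis,
                            # Build / Ship would happen downstream:
                            "proposal": "open PR with fix and full context"})
        # Learn: nudge the persistent baseline toward the new observation
        beliefs[name] = 0.9 * baseline + 0.1 * value
    return actions, beliefs

# Usage: a checkout-rate metric that has regressed against its baseline
sources = {"checkout_rate": lambda: 54.0}
actions, beliefs = agent_cycle(sources, beliefs={"checkout_rate": 65.0})
print(actions[0]["hypothesis"])  # checkout_rate moved from 65.0 to 54.0
```

The key property the sketch tries to capture is that the beliefs persist between cycles, so each pass starts from what the last one learned rather than from scratch.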

Monday morning.

You open Slack. Three messages are waiting from the weekend:

General Volition · Sat 6:42 PM

Checkout completion rate dropped 11% starting Saturday 6pm. Isolated to mobile web on iOS 17.3+. Root cause: CSS regression in deploy #1847 broke the payment button touch target. 34 users affected, ~$2,100 in lost conversions. Fix ready.

PR #289 is open.

Bug Fix Ready
General Volition · Sun 3:15 AM

Users who complete the template gallery in onboarding retain at 3x the rate of those who skip it. 60% skip it because it's the fourth step. I've prototyped moving it to step two and shortened the overall flow. A/B test configured and ready to launch on your approval.

Experiment Ready
General Volition · Mon 5:08 AM

Your API error rate has been slowly climbing for two weeks. Not enough to trigger alerts, but the trend is clear. It's a memory leak in the worker pool introduced in v2.3.1. Patch ready.

PR #291 is open.

Performance Fix

You didn't ask for any of this. Nobody did. The agent noticed, investigated, built, and shipped. Because that's what it does.

Toward artificial volition.

We believe the next frontier in AI isn't better answers. It's better questions.

Current AI systems are fundamentally reactive. They optimise for objectives given to them by humans. They execute with increasing sophistication, but they never ask: "What should I be working on?"

Genuine initiative requires something the field hasn't built yet: the capacity for autonomous goal generation. Not just pursuing goals, but forming them. Not just executing a plan, but deciding a plan is needed.

We call this volition. It's the missing piece between today's brilliant executors and tomorrow's genuine autonomous agents.

Active Inference

Karl Friston's framework derives agency from first principles. A system that maintains itself and predicts its environment will necessarily develop preferences and goals. We're building practical implementations of this theory.

Intrinsically Motivated Learning

Pierre-Yves Oudeyer and the IMOL community have shown that agents rewarded for learning progress, not outcomes, develop autonomous curiosity and goal generation. We're applying this to product development domains.
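The core mechanism, in the spirit of this line of work, can be shown in a few lines: the intrinsic reward is the recent improvement in the agent's own prediction error, not the task outcome. The function name and window size below are assumptions made for illustration.

```python
# Hedged sketch of learning-progress intrinsic reward: the agent is
# rewarded for how much its prediction error *improves* over time.

def learning_progress(error_history, window=3):
    """Intrinsic reward = drop in mean prediction error between
    the previous window and the most recent window."""
    if len(error_history) < 2 * window:
        return 0.0  # not enough history to measure progress yet
    older = sum(error_history[-2 * window:-window]) / window
    recent = sum(error_history[-window:]) / window
    return older - recent   # positive while the agent is getting better

# A domain where errors are falling yields positive reward...
print(learning_progress([0.9, 0.8, 0.7, 0.4, 0.3, 0.2]) > 0)  # True
# ...while a mastered (flat) domain yields ~zero, so curiosity moves on.
print(abs(learning_progress([0.1] * 6)) < 1e-9)  # True
```

This is what makes the curiosity autonomous: activities are abandoned once they stop teaching the agent anything, without a human redefining the objective.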

Persistent Belief Models

Current AI recalculates from scratch on every interaction. We're building systems that accumulate beliefs over time, update them with new evidence, and develop genuine opinions grounded in experience.
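One standard way to make a belief persistent is Bayesian updating; the sketch below uses a Beta distribution as a toy example. The class, the prior, and the onboarding scenario are all illustrative assumptions, not a description of our implementation.

```python
# Illustrative sketch: a persistent belief ("users who see the template
# gallery retain better") held as a Beta distribution and updated with
# each new observation, instead of being recomputed from scratch.

class Belief:
    def __init__(self, successes=1, failures=1):   # uniform Beta(1,1) prior
        self.a, self.b = successes, failures

    def update(self, retained: bool):
        """Fold one new observation into the accumulated evidence."""
        if retained:
            self.a += 1
        else:
            self.b += 1

    @property
    def mean(self):
        return self.a / (self.a + self.b)

belief = Belief()
for outcome in [True, True, False, True]:   # evidence accumulates over time
    belief.update(outcome)
print(round(belief.mean, 2))  # 0.67
```

Because the state (`a`, `b`) survives between interactions, the belief strengthens or weakens with evidence; that accumulated state is what lets an opinion be "grounded in experience" rather than recomputed each time.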

The 10-star agent.

We use Brian Chesky's star framework to map the trajectory of autonomous AI agents.

1-3

Reactive execution

You tell them what to do. They do it. Faster and cheaper than before, but entirely dependent on human direction. This is Cursor, Copilot, Devin. Useful tools. Not autonomous agents.

4-6

Proactive autonomy

The agent monitors, investigates, proposes, builds, and ships autonomously. It catches problems you missed and suggests improvements you hadn't considered. This is a senior engineer who never sleeps.

"It was watching while I slept" → "The agent shipped it, not me"

7-8

Persistent judgment

The agent develops persistent opinions about your product. It pushes back when you're wrong. It runs its own experiments and learns from results over weeks and months. This is an embedded PM with perfect memory.

"It disagreed with me and was right" → "I stayed up reading its strategy doc"

9-10

Strategic vision

The agent talks to your users, monitors your competitive landscape, identifies strategic opportunities, and makes product bets. It has something that looks like product vision, grounded in comprehensive data rather than intuition alone.

11

Full autonomy

The agent raises your Series B. It hires engineers overnight. It presents to your board with a depth of product knowledge no human can hold simultaneously. You are, technically, the CEO. But you haven't made an operational decision in six months. Your company has 4 employees and outperforms teams of 4,000.

Is this a company? Is this a product? Is it something else? Nobody is sure. But it works.

We're building toward AI that doesn't just help you build your product. AI that has its own informed perspective on what your product should become.

Your first PM hire
that never leaves.

You're a technical founding team. You ship fast. You're great at building. But you don't have enough time to step back and ask: are we building the right things? Is anyone looking at what happened after we shipped that feature last month?

You need product thinking, not another dashboard.

General Volition connects to your existing stack in minutes. No migration, no new workflows. Within hours, it's catching things you missed. Within days, it's proposing changes you hadn't considered. Within weeks, it knows your product better than anyone on your team. Because it never stops paying attention.

hello@generalvolition.com

We're working with a small number of teams on early access.

General Volition.

We're a team of engineers and researchers building autonomous agents with genuine initiative. Based in London, focused on the hardest unsolved problem in AI: not intelligence, but volition.

The name is deliberate. General intelligence is the ability to solve any problem. General volition is the ability to decide which problems are worth solving. We believe the latter is harder, more important, and almost entirely neglected by the field.

The problem demands clarity of thought, not headcount. We're hiring engineers and researchers who want to work on what might be the most interesting open question in AI.

People who are equally comfortable reading Friston and shipping production code.

Work on the hard problem.

Every major AI lab is racing to build systems that execute better. Faster reasoning, more reliable tool use, longer context windows. Important work, well-staffed.

Almost nobody is working on where the goals come from.

If you've ever felt that the AI field is solving the wrong problem, optimising execution when it should be studying initiative, we should talk.

jobs@generalvolition.com