Measurement

Forecasts from real data. Not developer estimates.

Spaces projects delivery dates and costs from actual execution data — not story points, not sprint velocity, not gut feel. You get a projection the moment you build a plan. As tasks complete, real data replaces initial estimates and the forecast updates automatically.

Delivery projection
Completion projection (from now):
Best: Mar 12 · $18.50
Expected: Mar 14 · $22.00
Worst: Mar 19 · $31.40

Projections that learn from your team

Your first projection comes from plan structure and model pricing. As tasks complete, real execution data — duration, cost, iterations, agent, model — replaces those defaults. The more your team ships, the more grounded the forecast becomes.
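The shift from structural defaults to execution data can be sketched as a simple shrinkage estimate. This is an illustrative sketch, not the actual Spaces model: the function name, the `prior_weight` parameter, and the linear blend are assumptions.

```python
# Illustrative sketch (not the Spaces API): blend a default estimate with
# observed durations, weighting the empirical mean more heavily as
# completed tasks accumulate.

def blended_estimate(default_hours: float, observed_hours: list[float],
                     prior_weight: float = 5.0) -> float:
    """Lean on the default when data is scarce; track the observed
    mean as completions accumulate."""
    n = len(observed_hours)
    if n == 0:
        return default_hours
    empirical_mean = sum(observed_hours) / n
    w = n / (n + prior_weight)  # 0 with no data, approaches 1 as n grows
    return (1 - w) * default_hours + w * empirical_mean

# No completions yet: the projection is the structural default.
print(blended_estimate(8.0, []))          # 8.0
# After 45 completions averaging 4h, the forecast tracks reality.
print(blended_estimate(8.0, [4.0] * 45))  # 4.4
```

The same shape of update applies to any tracked dimension: duration, cost, or iteration count.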

Iteration-cycle tracking

Spaces logs every implement → review → fix cycle per task. Over time, this reveals which categories of work need more rounds and which converge quickly — shaping future projections.

Per-agent profiles

Different agents perform differently. The system tracks which agents are fast at which task types — and what they cost — so projections reflect your actual team.
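A per-agent profile amounts to grouping execution records by agent and task type. The record fields below (`agent`, `task_type`, `hours`, `cost`) are invented for illustration and are not the actual Spaces schema.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical sketch: summarize execution records per (agent, task_type)
# so projections can reflect who is actually doing the work.

def build_profiles(records):
    buckets = defaultdict(list)
    for r in records:
        buckets[(r["agent"], r["task_type"])].append(r)
    return {
        key: {
            "runs": len(rs),
            "avg_hours": mean(r["hours"] for r in rs),
            "avg_cost": mean(r["cost"] for r in rs),
        }
        for key, rs in buckets.items()
    }

records = [
    {"agent": "a1", "task_type": "api", "hours": 2.0, "cost": 0.40},
    {"agent": "a1", "task_type": "api", "hours": 3.0, "cost": 0.60},
    {"agent": "a2", "task_type": "api", "hours": 6.0, "cost": 0.30},
]
profiles = build_profiles(records)
print(profiles[("a1", "api")]["avg_hours"])  # 2.5
```

Here agent a1 is faster but pricier on API tasks than a2; a forecast that knows which agent is assigned can use the matching profile.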

Cost calibration

Initial estimates use published model pricing. As real token usage accumulates per task type, projected costs shift from list-price math to empirical patterns.

Task-type segmentation

Not all tasks are equal. The system learns relative complexity — which categories take longer, which models are more efficient for which work — from your own completion history.

Week | Completed tasks | Prediction interval | Confidence
Week 1 | 3 tasks | ±6 days | 30%
Week 3 | 12 tasks | ±3 days | 58%
Week 5 | 25 tasks | ±1.5 days | 78%
Week 8 | 42 tasks | ±0.5 days | 94%

As more tasks complete, the prediction interval narrows and confidence increases
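The narrowing follows from basic statistics: the half-width of an interval on mean duration shrinks roughly as one over the square root of the sample size. The snippet below is a generic normal-approximation illustration of that effect, not the interval model Spaces uses; the durations are invented.

```python
from statistics import stdev
from math import sqrt

# Illustrative only: 95% normal-approximation half-width on mean task
# duration. Same spread of outcomes, more samples -> tighter interval.

def half_width(durations: list[float], z: float = 1.96) -> float:
    return z * stdev(durations) / sqrt(len(durations))

early = [2.0, 5.0, 8.0]       # few completed tasks: wide interval
late = [2.0, 5.0, 8.0] * 14   # same variability, many more completions
print(half_width(early) > half_width(late))  # True
```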

A forecast from day one that updates itself.

There are no story points for AI-driven work. No sprint velocity to extrapolate from. The first time a stakeholder asks "when will this be done?", most teams guess. Spaces gives you a projection the moment you create a plan — grounded in task structure, dependencies, and model pricing — that recalculates every time a task completes.

01

Forecast

The moment you build a plan, Spaces generates a delivery projection — timeline and cost — from the dependency graph, task types, and model pricing. Before any work begins, you have an answer.

02

Record

As tasks complete, Spaces captures actual duration, LLM token spend, iteration count, agent ID, and model used. Zero manual input — the data comes from execution itself.

03

Recalculate

Every completed task triggers an updated projection. Real data replaces initial estimates — best, expected, and worst case — grounded in what your team actually delivered, not what anyone guessed.

Timeline and cost, together

Spaces forecasts both timeline and cost together — because shipping on time but 3x over budget isn't a win.

Timeline

When will this plan finish? Best, expected, and worst-case completion dates — recalculated on every task completion, aware of your dependency graph and critical path.

Best / expected / worst-case completion dates
Recalculates on every task completion
Critical-path aware — blocked tasks don't shrink the forecast
Confidence intervals from observed variance
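A critical-path-aware projection boils down to the longest finish time through the dependency graph: each scenario (best, expected, worst) substitutes its own per-task durations. The tasks, durations, and graph below are invented for illustration; in practice the durations would come from the learned estimates.

```python
# Sketch of a critical-path projection over a task dependency DAG.
# A task starts only after all of its dependencies finish.

def critical_path_length(durations, deps):
    finish = {}
    def resolve(task):
        if task not in finish:
            finish[task] = durations[task] + max(
                (resolve(d) for d in deps.get(task, [])), default=0.0)
        return finish[task]
    return max(resolve(t) for t in durations)

deps = {"deploy": ["api", "ui"], "ui": ["api"]}
best = {"api": 1.0, "ui": 1.0, "deploy": 0.5}
expected = {"api": 2.0, "ui": 3.0, "deploy": 1.0}
print(critical_path_length(best, deps))      # 2.5
print(critical_path_length(expected, deps))  # 6.0
```

This is also why a blocked task can't shrink the forecast: until its duration actually resolves, it still sits on the path that determines the finish date.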

Cost

What will the remaining work cost? Per-task projected LLM spend based on real token usage, broken down by model — updated as actuals come in.

Per-task projected spend from real data
Model-level cost breakdown
Running total vs. original projection
Trend tracking: is spend accelerating or decelerating?

Stop flying blind. Start forecasting.

Spaces captures the execution data your current tools ignore — iteration counts, LLM spend, per-agent performance — and turns it into forecasts that actually reflect how your team works.

Free during beta · No credit card required