Enterprise Services · Capacity Pilot

Prove it on your own team before you bet the org on it.

Twelve weeks. One team. A baseline, a delta, and a case study your organization owns. Real numbers, your people, your context. The proof point you need before the next executive review.

Every scaled rollout needs evidence. Most don't have it. The pilot is how you get yours, on your data, before you commit to anything bigger.

Capacity isn't headcount. It isn't bandwidth. It isn't throughput.

Capacity is how much of their own skill set your people can actually access right now. It shifts hour to hour. The same team can execute brilliantly at 10 AM and produce an expensive error at 4 PM. The skills didn't disappear at lunch. Access to them dropped.

That gap, between what your team can do in theory and what they can reach today, is the variable nobody is measuring. The pilot measures it. Then changes it.

2026 Design Partner Cohort · Three Slots

Three organizations will produce the case studies that define this category.

Emergent Skills launched in 2026. The methodology is operational. The first pilots will produce the case studies that define the category and refine the measurement architecture against real organizational data.

Three design partner slots in 2026. 35% off standard pilot pricing in exchange for anonymized case study participation. For leadership teams comfortable being early on a new category.

Apply for a Design Partner Slot →

Applications close when the slots fill.

What the pilot actually does

Twelve weeks. One team. Three phases. The methodology operates on four levers — margin, demand, assessment, and individual capacity — but the pilot doesn't deploy them in parallel. It rebuilds the team's operating system in sequence, because some of these moves require recovery to land.

Weeks 1 to 2. Baseline.

We measure where the team is now, before we change anything. Decision quality, sampled against a four-dimension rubric. Meeting hours and decisions-per-meeting-hour. Self-reported zone distribution across Green, Yellow, Red, and Can't-Even. Turnover intent on the Colarelli scale. The team-specific business metric your executive sponsor cares about most. And the organizational capacity cost top-line — the dollar number that travels up to the C-suite.

The baseline isn't a survey. It's a measurement architecture. Everything that comes after gets measured against it.

Weeks 3 to 10. The work.

Margin design first. We rebuild the team's operating rhythm so recovery is built into the week, not crammed around it. Meeting cadence, communication norms, response-time expectations. Not efficiency optimization. Capacity protection. The team keeps its deliverables. It stops paying the attrition tax to deliver them.

Demand architecture next. We map what's flowing through the team against what the team can actually absorb. Then we redesign the intake. Which requests land, which get deferred, which get redirected, which get declined. The goal isn't less work. The goal is work matched to capacity, so what does land gets done at full capability.

Capacity-informed assessment runs from Week 5 forward. The Zones Framework™ becomes shared language. The manager learns to read state and time decisions against it. High-stakes conversations get scheduled into capacity-matched windows. Feedback lands when people can absorb it.

Underneath all of that, individuals on the team have access to the Emergent Skills app for the duration of the pilot. State-matched restoration tools, the ten life-skill pillars, the Capacity Coach. Private, text-based, no manager visibility, no employer surveillance. The individual layer that complements the organizational redesign.

Weeks 11 to 12. The case study.

We re-measure against the baseline and write it up. What moved. What didn't. What the organization now knows it didn't know before. A 20-30 page case study, co-authored with your leadership, owned by you. Plus a 20-slide deck your internal sponsor can present to the executive team without us in the room.

Total employee time beyond normal work: about 30 minutes a week. Manager time: 2 to 3 hours a week through Week 10, 3 to 4 hours in Weeks 11 and 12.

The wrong pilot team

Most organizations want to pilot capacity work with the team that needs it most. That's the wrong instinct. The team that's already collapsing needs immediate relief, not structured measurement. A 12-week pilot run on a team in crisis becomes a rescue operation, and rescue operations don't produce clean case studies.

The right pilot team is under enough pressure for capacity to be a felt problem, but not so depleted that the work becomes triage. The manager has to actively engage with the methodology, not just permit it. The executive sponsor has to defend the team's time during the pilot, including against their own competing priorities.

If your honest answer is "the team I'd pilot with is barely keeping its head above water," tell us in the scoping call. We'll likely recommend a different team for the pilot and a different intervention for the team you were thinking of. Putting the wrong team in the pilot is the most common reason these engagements underperform, and it's the easiest mistake to avoid before contract.

What gets measured

The pilot commits to measurement against a baseline. Not satisfaction scores. Not engagement proxies. Business-relevant numbers your executive team will recognize as real.

Decision quality

Consequential decisions sampled at baseline and at endpoint, scored against a four-dimension rubric: information completeness, stakeholder alignment, downstream rework, and six-month decision stability. The delta is the lift.

Meeting efficiency

Total meeting hours, decisions per meeting hour, cancellation and decline rates, sync-versus-async ratio. The most legible business metric to operations leaders, and usually the one that moves fastest.

Capacity state distribution

Self-reported zone distribution at baseline versus endpoint. The percentage of the team's working week in Green, Yellow, Red, and Can't-Even. The core variable this whole system exists to manage.

Turnover intent

Colarelli scale at baseline and endpoint. The single most predictive measure of near-term attrition. One of the largest hidden costs of unmanaged capacity.

Team-specific business metric

Agreed in Week 1 with the executive sponsor. Cycle time, throughput, error rate, revenue per employee, NPS, customer response time — whatever your team's actual operating outcomes are. The metric your CFO cares about most.

Organizational capacity cost

The pilot team's capacity cost top-line at baseline and endpoint. Built from the Five Capacity Taxes, priced against your own labor and opportunity valuation. The number that travels up to executive discussion. Also the number that anchors any downstream license economics.

You also receive: an implementation playbook documenting the specific changes made to your team (so you can replicate the approach internally without re-engaging us), the baseline report at Week 2, the case study and executive deck at Week 12, and Manager Capacity Certification™ for the pilot manager. The manager finishes the pilot credentialed and capable of running capacity-informed management without us.

An illustrative outcome

Representative example. Not a client engagement.

A 22-person product engineering team at a mid-market SaaS company. VP of Product as executive sponsor. 12 weeks. Headline number: $1.8M annualized capacity cost reduction, driven mostly by reduced rework and a shift in when consequential decisions got made.

Capacity state shifted from 58% Yellow/Red at baseline to 34% Yellow/Red at endpoint. Total meeting hours dropped 31%. Decisions per meeting hour rose 2.4x. Sync-to-async conversion on status updates returned about four hours per engineer per week.

Decision quality, on the rubric, improved 1.3 points on a 5-point scale, with the largest lifts in downstream rework reduction and stakeholder alignment at decision time. Turnover intent on the Colarelli scale dropped from 3.1 to 2.2, and two resignation risks flagged at baseline were no longer flagged at endpoint.

Sprint velocity up 18%. Critical-path ticket cycle time down from 9.4 days to 6.1.

Capacity cost top-line: $3.4M annualized at baseline, $1.6M at endpoint. The $1.8M delta came predominantly from the Recovery Debt and Decision Density components. Forfeited Upside sampling surfaced three additional opportunities currently in evaluation, not counted in the delta.

Every pilot case study follows this shape. Baseline, intervention, endpoint, annualized delta, recommended scale path. Co-authored with your leadership. Owned by your organization.

Who delivers it

Pilots are founder-architected and senior-consultant delivered. The methodology is built and continuously iterated by Jim Wilde. Senior consultants trained in the methodology run the engagement on the ground — scoping, baseline measurement, the four-lever implementation, manager certification, and case study production.

The senior consultant who runs your pilot is the same person available for follow-on engagement and license expansion. You are not handed off mid-engagement. You build a relationship with one delivery lead, and that relationship continues if you choose to scale the work across the organization.

Jim is directly involved at four review points across the twelve weeks: scope approval before contract, baseline review at Week 2, mid-engagement check at Week 6, and final deliverable review at Week 11. He is also available for executive-level escalation throughout the engagement. The pairing protects founder-grade methodological judgment while ensuring full-time delivery presence on your engagement.

About the methodology architect. Jim Wilde brings 45 years of enterprise systems experience, including nine years as lead architect on mta.info, the public-facing digital infrastructure for the Metropolitan Transportation Authority. He is the author of CAPACITY: The Variable No One Measures, which documents the full framework. The first version of this work failed — a sleep app that asked depleted people to do the work of non-depleted people. That failure became the design rule for everything that followed: build for the most depleted user first. The methodology, the app, the audit, and the pilot all stress-test against that rule.

About the senior consultants. Senior delivery consultants at Emergent Skills are recruited from senior management consulting backgrounds — typically 12 to 20 years of experience, with operating exposure inside knowledge-work organizations and demonstrated comfort in CFO-level conversations. They are trained against a six-week curriculum and operate under founder review for their first three engagements before reaching standard cadence. The methodology is precise; the consultants who deliver it are senior enough to apply it with judgment.

By pilot end, the manager of your pilot team is an internal champion, certified, and capable of carrying the work forward. You leave the engagement with a credentialed manager, an owned case study, and a senior consultant who knows your context and is available for whatever comes next.

Investment

Pilot investment scales with team size and complexity. These ranges are directional.

Team Pilot

$75K – $125K

Single team, up to 25 people. Full methodology, full measurement, full case study. 12 weeks.

Business Unit Pilot

$150K – $250K

25 to 100 people across one or two interconnected teams. Cross-team pattern measurement. 12 to 14 weeks.

Division Pilot

$300K+

100+ people. Multi-team measurement framework. Leadership cohort certification included. Custom scoping.

Organizations that complete a Capacity Audit first have the audit fee credited against the pilot engagement. Design partner slots (35% off) are limited to three organizations in 2026.

Questions we get asked

Do we need to complete a Capacity Audit first?

Not strictly. Organizations that have run the audit come into the pilot with clearer baselines and a faster start, and we recommend it. But you can also enter directly at the pilot stage with the right executive sponsorship and a clearly scoped team. If you skip the audit, the Weeks 1 to 2 baseline work is more extensive.

How does the pilot connect to a license decision?

Directly. The pilot produces the team's capacity cost top-line at baseline and at endpoint. If you proceed to an organizational license, the quarterly capacity reviews track progress against that same figure, scaled to the licensed population. A license bought without a pilot baseline starts from vendor benchmarks. A license following a pilot starts from your own data. The pilot is how you replace "trust the methodology" with "here's what it did for our team."

What if the pilot doesn't produce clear results?

We tell you that. And we tell you why. The case study documents what did and didn't work with the same rigor either way. Most pilots produce measurable results because the methodology is designed to. When they don't, the reasons are usually diagnosable: wrong team selection, insufficient sponsor engagement, a competing organizational initiative we didn't know about. You get the analysis and the data. The case study is honest either way.

Can we keep the app after the pilot?

Yes, via organizational license. Individual app access for the pilot team continues uninterrupted if you extend. Scaling to additional teams requires license expansion. License pricing is discussed at pilot conclusion, once there's data to inform the investment.

Who owns the case study?

You do. It's your internal asset. We request the right to reference the engagement in general terms with anonymized outcomes, subject to your approval. Named case studies with attributed outcomes are possible but require separate agreement and typically come with design partner pricing.

Who actually runs the engagement day-to-day?

A senior delivery consultant from Emergent Skills, trained in the methodology and operating under founder review at four checkpoints across the twelve weeks. The consultant is your primary point of contact throughout the engagement and remains your point of contact for any follow-on work. Jim Wilde is involved at scope approval, baseline review, mid-engagement check, and final deliverable review, and is available for executive-level escalation throughout. This pairing is intentional: the methodology is precise enough to require senior judgment, and the engagement is operational enough to require full-time delivery presence.

Start with a conversation

Tell us what you're seeing, what you've tried, and which team might be the right first test. We'll tell you whether a pilot fits, what it would cost in your case, and what a realistic outcome picture looks like for your organization. If a pilot isn't the right move, we'll tell you that too.