L8 OKR Guide

This document is the practical companion to docs/work-ethos.md.

The ethos doc is about how I want to think and operate. This doc is about where I should point that energy if I want my work to add up to Senior Staff (L8) scope in Growth.

It is not a personal KPI sheet. It is a way to connect three things that are easy to keep separate but should not be:

  • the business outcomes Growth is trying to move
  • the engineering rubric for L8
  • the kinds of bets and evidence that actually compound into larger-scope impact

How I Want To Use This

When I am deciding whether to take on work, shape a roadmap, sponsor a technical direction, or explain impact, I want to ask:

  1. Which business outcome or capability does this actually support?
  2. Does this work map cleanly to L8 behavior, or is it only locally useful?
  3. If this succeeds, what evidence will exist that the impact was real?

If I cannot answer those three questions, the work may still be worth doing, but I should be honest that it is probably not one of my highest-leverage bets.

The Targets I Should Keep In View

I do not need to personally own every top-line number in Growth. I do need to understand which outcomes the org is trying to create, what capabilities those outcomes depend on, and where engineering leverage can materially change the curve.

1. Company-Level Context

At the company level, the durable frame is long-term profitable growth. In practice that means I should care about:

  • whether our work helps the business make better allocation decisions
  • whether our systems improve the quality and speed of experimentation
  • whether the org is building durable value, not just short-term narrative wins

This is the backdrop for everything else. Even when local teams are goaled on more specific metrics, I want to keep the larger question in view: does this work improve the company's ability to grow profitably and make sound tradeoffs?

2. Growth Business Outcomes

The specific Growth OKRs will evolve, but the durable categories seem to be:

  • Marketplace growth: top-line growth and run-rate impact
  • IC+ growth and engagement: membership adoption, habit formation, and value realization
  • Early lifecycle and retention: activations, resurrections, and stronger user habits after first conversion
  • Enterprise / retailer growth: whitelabel foundations, retailer adoption, and campaign capability expansion
  • Growth efficiency: better allocation of spend, better experimentation, and better decisions per unit of engineering and marketing effort

These are not all equal at all times, but together they describe the field I am operating in.

3. Enabling Capability Targets

As an L8, I should care not just about outcome metrics but about the capabilities that make those metrics more movable:

  • experimentation velocity and quality
  • measurement quality and decision quality
  • production excellence, latency, resilience, and safe rollout
  • reusable platform or tooling leverage
  • AI-native systems that reduce toil or widen leverage
  • cross-channel orchestration instead of siloed optimizations

These are often the places where engineering can create disproportionate impact even if the top-line business number is shared across many teams.

Directly Own Vs. Understand And Shape

One trap in senior work is acting as if every important org metric has to become a personal metric. I do not think that is right.

Numbers I Directly Own

These are the metrics where I have obvious accountability because I lead the strategy, system, or execution path closely enough that success or failure should trace back to decisions I materially shaped.

Examples:

  • a platform capability shipping on time with clear adoption and reliability outcomes
  • a technical or AI system that improves campaign performance, experimentation quality, or productivity
  • an operational quality metric tied to systems I help define, such as latency, resilience, rollout safety, or debugging time

Numbers I Should Understand And Shape

These are org-level Growth outcomes that I may not own directly, but should still use to decide where to spend time and where to influence sequencing.

Examples:

  • iGTV or run-rate growth categories
  • IC+ and lifecycle habit metrics
  • enterprise retailer growth and onboarding milestones
  • spend efficiency or decision quality improvements

If I understand these well, I can help the org choose better leverage points rather than only improving my local area in isolation.

Evidence That Still Counts Even When The Metric Is Shared

Sometimes the business metric is real but attribution is collective. That does not mean the work is invisible. It means I should be explicit about the kind of evidence that shows L8-level contribution:

  • I shaped the roadmap or sequencing that made the result possible
  • I created a platform, abstraction, or operating mechanism that multiple teams used
  • I improved the org's decision quality, not just one launch
  • I reduced major risk, latency, toil, or fragmentation in a way that changed what the org could attempt next

How To Read Growth OKRs Like An L8

An L8 should not read an OKR as just a number to chase. I want to read each one through five lenses.

1. Outcome Lens

What real business or customer outcome is the org trying to create?

If I cannot state that clearly, I am at risk of optimizing a proxy without understanding the reason it matters.

2. System Lens

What systems, abstractions, workflows, or constraints actually govern this outcome?

This is often where engineering leverage lives. Sometimes the best path to moving a metric is not a local feature but a better platform, feedback loop, or operational mechanism.

3. Sequencing Lens

What needs to happen first, and what work is pretending to be urgent before the foundations are ready?

L8 scope often shows up in sequencing. Strong senior leadership is often less about saying yes to important work and more about putting important work in the right order.

4. Reusability Lens

Does this create a one-time win, or does it leave behind leverage that multiple teams can use?

I want to keep asking whether the work compounds.

5. Evidence Lens

If this works, what proof will exist?

I want to prefer work where the answer is legible: better business outcomes, better reliability, better decision speed, better adoption, clearer architectural direction, or broader reuse.

What L8 Looks Like In Growth

The most useful way for me to use the rubric is not as a generic checklist, but as a lens on Growth-specific behavior.

Technical Execution

At L8, technical execution is not just writing strong code or landing hard projects. It is defining and driving pillar-wide technical direction.

In Growth, that likely looks like:

  • setting direction for major systems that affect multiple teams or channels
  • raising the bar on production excellence, latency, observability, resilience, and incident quality
  • creating durable abstractions that simplify future work across the pillar
  • identifying and solving technical problems that are bottlenecks to Growth's next phase, not just today's roadmap

Evidence that counts:

  • multi-team systems or foundations with broad adoption
  • a major technical strategy or modernization effort that changes what the pillar can ship
  • measurable improvement in reliability, debugging, latency, rollout safety, or operating cost
  • architectural direction that other teams now follow

Common trap:

Shipping several hard projects that are impressive but still mostly local in scope.

Product Thinking And Ownership

At L8, the bar is not only understanding product goals. It is helping define org-level roadmaps, problem selection, and sequencing for maximum business impact.

In Growth, that likely looks like:

  • understanding the relationship between company north star, Growth outcomes, and engineering investment
  • identifying which leverage points matter most across marketplace growth, lifecycle, IC+, enterprise growth, and experimentation quality
  • shaping roadmaps with Director+ engineering and product leaders
  • turning ambiguous opportunities into concrete, sequenced technical bets

Evidence that counts:

  • a roadmap, strategy, or sequence that multiple teams rally around
  • evidence that a high-leverage problem got chosen because of my framing
  • cases where I linked user problems, technical constraints, and business outcomes into one coherent narrative
  • clear tradeoff decisions that improved org focus

Common trap:

Being highly strategic inside my own area while leaving pillar-level framing to other people.

AI Fluency

At L8, AI fluency is not just using tools personally. It is helping define how the pillar should use AI to create durable advantage.

In Growth, that likely looks like:

  • AI systems that improve decision quality, experimentation, debugging, targeting, content, or analytics access
  • better standards for robust and reusable AI-assisted development
  • AI that reduces toil across engineering, PM, DS, or marketing, rather than just creating a demo
  • shaping where AI should and should not sit in the Growth stack

Evidence that counts:

  • a system, framework, or operating pattern that others adopt
  • AI-assisted workflows that materially reduce cycle time or widen access to insight
  • technical direction that makes AI use safer, more reliable, or more reusable
  • clear product or platform leverage rather than isolated novelty

Common trap:

Confusing “interesting AI work” with durable business or platform leverage.

Collaboration

At L8, collaboration means influencing the pillar and helping leadership make better decisions, not just being easy to work with.

In Growth, that likely looks like:

  • strong partnership with Director+ engineering and product leaders
  • clear cross-team execution on business-critical initiatives
  • protecting team time by clarifying ownership and reducing churn
  • mentoring senior engineers and helping others expand their influence
  • becoming someone whose judgment is trusted in ambiguous, cross-functional situations

Evidence that counts:

  • cross-team initiatives that stayed aligned because I helped define the decision frame
  • visible trust from senior leadership or adjacent teams
  • stronger senior engineers around me because of my coaching or sponsorship
  • conflicts or ambiguity resolved earlier because I intervened well

Common trap:

Being helpful everywhere without increasing actual clarity, leverage, or org decision quality.

A Practical Decision Filter

When I am evaluating a project, roadmap item, escalation, or new area of ownership, I want to ask:

  • Does this map to an important Growth outcome or enabling capability?
  • Is this a high-leverage problem, or just a loud one?
  • Does this create pillar-level benefit, or only local usefulness?
  • Does this improve the org's ability to make decisions, ship safely, or scale?
  • Am I choosing this because it matters, or because it is visible?
  • If this succeeds, will the resulting evidence be legible in the L8 rubric?
  • Is this a systems bet, a sequencing bet, a product bet, or a temporary patch?
  • Does this leave behind reusable leverage for other teams?
  • Is there a clearer owner, and should I support rather than compete?
  • If I spend a quarter on this, will it move me toward pillar-level scope?

A Lightweight L8 Scorecard

I do not want to manage myself by spreadsheet, but I do want a simple way to test whether my quarter is adding up.

1. Business Relevance

  • Can I point to the Growth outcomes my work is meant to influence?
  • Do I understand the org context well enough to explain why these bets matter now?

2. Pillar Leverage

  • Did I improve a system, capability, abstraction, or operating mechanism that multiple teams benefit from?
  • Did I shape sequencing or direction beyond my immediate area?

3. Rubric Legibility

  • Do I have evidence in all four L8 rubric dimensions, not just technical execution?
  • Would an outside reader see pillar-level scope, or just strong local execution?

4. Evidence Quality

  • Do I have artifacts that prove the work mattered: docs, metrics, adoption, design direction, launch outcomes, quality gains, or leadership trust?
  • Have I captured them while the context is still fresh?

If the answer is weak in one of these areas, I probably need to rebalance my next set of bets.

Quarterly Bet Pattern

I want most quarters to have a small number of deliberate bets rather than many unrelated efforts.

A good pattern is probably:

  • One pillar-shaping bet: a system, strategy, or capability that changes how Growth works
  • One business-facing bet: a clear lever tied to important Growth outcomes
  • One force-multiplier bet: something that improves decision speed, measurement quality, reliability, or AI leverage for others

This is not a rule. It is a reminder that my portfolio of work should have shape.

Evidence I Should Log Continuously

I want to capture evidence as I go, especially when it is easy to forget later.

Useful evidence includes:

  • strategy or sequencing docs I drove
  • decisions where I clarified tradeoffs or changed direction
  • platform or architecture work with broad reuse
  • reliability, latency, cost, or debugging improvements
  • experiments, launches, and follow-through on what we learned
  • examples of senior partnership, mentorship, or org conflict resolution
  • AI systems or workflows that created durable leverage

If I wait until review season, I will remember the loud work and forget the shaping work. That would distort the picture.

Closing Reminder

I do not need every important metric to belong to me in order for my work to matter. I do need to keep aiming at the right leverage points, make the larger system clearer and stronger, and choose work that is legible as pillar-level impact over time.

That is the standard I want this doc to help me hold.