Why 10 Years of Experience Might Tea...
TLDR: Skills-based career growth powered by AI replaces title-based roles with outcome-driven practice. Five repeatable skills (Judgment, Orchestration, Coordination, Taste, and Updating) are built through rubric-driven evaluation of concrete artifacts, starting with one key document such as a product decision and expanding across teams and even hiring. AI serves as a consistent scorer that speeds up feedback while humans define the criteria and guide progress, keeping the heavy thinking with people. The approach targets fuzzy outcomes, delayed or noisy feedback, and low repetition, using repeatable drills and team-wide rubrics to scale improvement over time.
The main shift is moving from a jobs-based format to a skills-based framework for roles and career growth, where progress is measured by outcomes rather than titles. In practice, map roles to core repeatable skills (Judgment, Orchestration, Coordination, Taste, and Updating) and let AI help measure outcomes. The approach draws on the observation that athletes train while knowledge workers don't, which lets skills be named and practiced independently of titles. The first practical step is to identify a critical artifact that represents decision-making or work quality and use it as the anchor for skill development. Over time, outcomes, not job labels, become the primary measure that guides advancement.
Practically, replace vague feedback with a concrete, rubric-driven process applied to real artifacts: decision documents, architecture docs, CSM call summaries, and pipeline plans. Create a 1–5 rubric with criteria such as clarity, at least two real options considered, explicit stakes and metrics, a clear recommendation, and surfaced risks and trade-offs. Score artifacts against the rubric, annotate the rationale, and use the scoring as the basis for feedback. This artifact-based rubric makes skill development scalable and trackable across teams.
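To make this concrete, here is a minimal sketch in Python of how the 1–5 rubric and an annotated score could be represented as data; the criterion names and the ArtifactScore class are illustrative assumptions, not something prescribed by the article.

```python
# Minimal sketch of the 1-5 decision-document rubric as data. The criterion
# names and the ArtifactScore class are illustrative assumptions.
from dataclasses import dataclass, field

CRITERIA = [
    "decision_stated",        # is the decision itself stated clearly?
    "two_real_options",       # at least two genuinely viable options
    "stakes_and_metrics",     # explicit stakes and success metrics
    "clear_recommendation",   # one recommendation, not a hedge
    "risks_and_tradeoffs",    # risks and trade-offs surfaced
]

@dataclass
class ArtifactScore:
    artifact_id: str
    scores: dict[str, int] = field(default_factory=dict)     # criterion -> 1..5
    rationale: dict[str, str] = field(default_factory=dict)  # criterion -> why

    def validate(self) -> None:
        for criterion, score in self.scores.items():
            if criterion not in CRITERIA:
                raise ValueError(f"unknown criterion: {criterion}")
            if not 1 <= score <= 5:
                raise ValueError(f"{criterion} must score 1-5, got {score}")

    def average(self) -> float:
        """Directional signal only; not a grade or a promotion metric."""
        return sum(self.scores.values()) / len(self.scores)
```

Keeping the rationale alongside each score matters: the annotated "why" is what the LLM learns to imitate in the scoring loop below.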
Collect 3–5 relevant artifacts and create a 1–5 rubric with criteria like clarity and risk, then annotate and score them. Provide the rubric and annotated examples to an LLM so it can score new documents, citing the relevant passages and explaining each score. Ask the AI to suggest edits that would raise a given dimension, and publish the rationale for future reference. This creates a repeatable pattern for evaluating work and tracking progress over a quarter. Over time, you build a scalable skill-building loop that reduces reliance on sporadic praise.
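A minimal sketch of that loop, with call_llm as a provider-agnostic placeholder (swap in whatever client you use); the prompt shape and the JSON response schema are assumptions for illustration, not a fixed API.

```python
# Sketch of the rubric-scoring loop. call_llm is a placeholder; the prompt
# structure and JSON response schema here are assumptions, not a fixed API.
import json

def call_llm(prompt: str) -> str:
    """Placeholder: substitute your provider's chat-completion call."""
    raise NotImplementedError

def build_scoring_prompt(rubric: str, annotated_examples: str, doc: str) -> str:
    return (
        "Score the document below against each rubric criterion (1-5).\n\n"
        f"Rubric:\n{rubric}\n\n"
        f"Annotated, scored examples:\n{annotated_examples}\n\n"
        f"Document to score:\n{doc}\n\n"
        "For each criterion: give a score, quote the relevant passage, "
        "explain the score, and suggest one edit that would raise it. "
        'Reply as JSON: {"criterion": {"score": n, "quote": "...", '
        '"rationale": "...", "suggested_edit": "..."}}'
    )

def score_document(rubric: str, examples: str, doc: str) -> dict:
    raw = call_llm(build_scoring_prompt(rubric, examples, doc))
    return json.loads(raw)  # per-criterion scores, quotes, and suggested edits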
Design weekly drills focused on Judgment, Orchestration, Coordination, Taste, and Updating. For example, write a one-page decision document for a messy situation and compare it to a model-generated stronger version to identify gaps. Define specifications with goals, inputs, outputs, and constraints to train orchestration skills. Use executive updates to practice updating rationales and communicating evolving plans. Repeat weekly to gradually improve judgment and related capabilities with AI feedback.
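One way to make those drill specifications concrete is a small spec object; the DrillSpec fields and the example drill below are illustrative assumptions that follow the goals/inputs/outputs/constraints structure described above.

```python
# Hedged sketch of a drill specification; fields mirror the
# goals/inputs/outputs/constraints structure described above.
from dataclasses import dataclass

@dataclass
class DrillSpec:
    skill: str              # Judgment, Orchestration, Coordination, Taste, or Updating
    goal: str               # what a strong run of the drill produces
    inputs: list[str]       # context the author starts from
    outputs: list[str]      # artifacts the drill must produce
    constraints: list[str]  # limits that keep the drill realistic

judgment_drill = DrillSpec(
    skill="Judgment",
    goal="Write a one-page decision document for a messy, real situation",
    inputs=["an unresolved decision from this week"],
    outputs=["one-page decision doc",
             "gap list vs. a model-generated stronger version"],
    constraints=["one page", "at least two real options",
                 "explicit stakes and metrics"],
)
```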
At the team level, define the rubric collaboratively and wire up an LLM to critique documents before human review. Run short, 10-minute weekly practice sessions on AI-flagged growth areas to strengthen both team and individual capabilities. Treat rubric scores as directional signals rather than precise measures or promotion criteria. Use the same rubric in hiring by assigning realistic take-home tasks and live sessions to assess how candidates think under pressure. Align hiring with ongoing development so growth is embedded in the workflow.
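As a sketch of wiring the critique in before human review, the short script below reuses the score_document sketch from earlier and flags low-scoring criteria as inputs to the weekly practice session; the file names and the flag threshold are illustrative assumptions.

```python
# Sketch of a pre-review hook: score a doc with the team rubric and print
# AI-flagged growth areas before a human reviewer reads it. File names and
# the threshold are illustrative; score_document is the sketch shown earlier.
import sys

FLAG_BELOW = 3  # criteria scoring under this feed the weekly 10-minute session

def flag_growth_areas(scores: dict) -> list[str]:
    return [c for c, d in scores.items() if d["score"] < FLAG_BELOW]

if __name__ == "__main__":
    doc = open(sys.argv[1]).read()
    rubric = open("team_rubric.txt").read()
    examples = open("annotated_examples.txt").read()
    scores = score_document(rubric, examples, doc)  # from the earlier sketch
    for criterion in flag_growth_areas(scores):
        print(f"growth area: {criterion} -> {scores[criterion]['suggested_edit']}")
```

Note the output is framed as growth areas to practice, not grades to report, which keeps the tool aligned with the directional-signal caveat above.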
Recognize that much AI usage is shadow AI, and acknowledge it openly so teams can improve rather than hide it. Focus on outcomes and skill development, not surveillance or punishment. Keep conversations constrained enough to reveal whether a candidate or teammate maintains a healthy relationship with AI while still delivering results. Rubric scores will be noisy and should not be treated as precise promotion criteria. Start small: pick one habit, name and measure a skill, and use AI to train it so knowledge work becomes a repeatable practice. AI coaching can make growth more accessible for individuals and teams.
Shift from a jobs-based format to a skills-based format for roles and career growth, enabled by AI and measured by outcomes rather than titles.
Judgment, Orchestration, Coordination, Taste, and Updating.
Fuzzy outcomes, delayed or noisy feedback, and low repetition of meaningful work.
Judgment appears in decision documents; Orchestration in handoffs and project plans; Coordination in emails and meeting notes; Taste in UX and design choices; Updating in evolving plans and rationales.
Pick one important artifact (e.g., a product manager decision document) and have trusted colleagues specify concrete criteria to judge its quality (e.g., decision stated, two real options, explicit stakes and metrics, clear recommendation, risks/trade-offs surfaced).
Collect 3–5 real examples, annotate and score them with a rubric (1–5) across criteria like clarity and risk; then give the rubric and annotated examples to an LLM to score new documents, explain scores, and suggest edits; repeat to create a pattern for evaluating work and tracking progress.
Weekly drills: write a one-page decision document for a messy situation, compare it to a model-generated stronger version, and identify gaps; build parallel drills for orchestration and executive updates; at the team level, define the rubric together and use a team LLM to critique docs before human review; hold 10-minute weekly practice sessions on AI-flagged growth areas.
Use the same rubric for hiring: give candidates a take-home task to produce or repair a decision document, run a live session where constraints change, and have them critique an AI-generated doc to see how they perform under pressure; the aim is to align hiring with ongoing development rather than only evaluating static skills.