When Your Team Can't Estimate: Use a Spike Instead

Jared Lynskey
Emerging leader and software engineer based in Seoul, South Korea

In Agile development, teams regularly face questions that can’t be answered through estimation alone. Should we use GraphQL or REST? Can our system handle 10,000 concurrent users? Is this third-party library production-ready? These are the moments where spikes become invaluable.

What is a Spike?

The term “spike” originates from Extreme Programming (XP), where it referred to a “very simple program to explore potential solutions.” Think of it as driving a spike through your problem—a focused, time-boxed effort to gain the knowledge needed to move forward confidently.

A spike is not a user story that delivers customer value. Instead, it’s a research task designed to:

  • Answer a specific technical or functional question
  • Reduce uncertainty before committing to an approach
  • Provide data for more accurate estimation

The output of a spike is knowledge, not working software. This distinction is crucial for proper backlog management and sprint planning.

Types of Spikes

Understanding the two main types of spikes helps teams apply them appropriately:

Technical Spikes explore implementation approaches:

  • Evaluating frameworks or libraries
  • Prototyping architectural patterns
  • Performance testing under specific conditions
  • Investigating integration complexity with external systems

Functional Spikes explore user requirements:

  • Clarifying ambiguous user stories
  • Validating assumptions about user behavior
  • Testing UI/UX approaches with prototypes
  • Understanding domain complexity

Why are Spikes Important?

Addressing Uncertainties: In complex projects, attempting to estimate work with significant unknowns leads to either padded estimates or missed deadlines. Spikes convert “I don’t know” into actionable information, enabling teams to make commitments with confidence.

Informing Decision Making: When facing architectural decisions or technology choices, spikes provide evidence rather than opinions. The findings feed directly into decision documents, ensuring choices are backed by hands-on investigation.

Reducing Risk: By investing a small, bounded amount of time upfront, teams avoid costly pivots later. A two-day spike that reveals a library’s limitations is far cheaper than discovering the same issue three sprints into development.

Improving Estimation Accuracy: Story points for unfamiliar work are often guesses. After a spike, teams can estimate with actual data about complexity, dependencies, and potential obstacles.

When to Use a Spike

Spikes are appropriate when:

  • The team cannot confidently estimate a story due to technical unknowns
  • Multiple viable solutions exist and data is needed to choose
  • A new technology, framework, or integration is being considered
  • Performance characteristics are uncertain and critical
  • Requirements are ambiguous despite stakeholder discussions

Spikes are not appropriate for:

  • Work the team already knows how to do
  • Delaying decisions that could be made with existing information
  • Replacing proper requirement gathering or user acceptance testing

A Framework for Conducting Spikes

1. Definition

  • Title: A clear, question-focused name (e.g., “Evaluate Redis vs. Memcached for Session Storage”)
  • Objective: The specific question to answer—not “research caching” but “determine if Redis cluster can handle our 50ms latency requirement”
  • Scope: Explicit boundaries on what will and won’t be investigated
  • Time-box: Typically 1-3 days; if it needs more, the spike may be too broad
  • Success Criteria: How you’ll know the spike achieved its goal

2. Research & Exploration

  • Data Gathering: Review documentation, case studies, and existing implementations
  • Prototyping: Build minimal proof-of-concept code—disposable, not production-ready
  • Consultation: Engage with experts, vendors, or community forums
  • Experimentation: Test hypotheses with actual code and measurements
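As a concrete illustration of the experimentation step, a spike prototype is often just a crude measurement harness. The sketch below is a minimal, disposable example in Python: it times repeated calls to an operation and reports latency percentiles, the kind of evidence a spike like "determine if Redis cluster can handle our 50ms latency requirement" would need. The `fake_session_read` stand-in is hypothetical; in a real spike you would substitute the actual client call under test.

```python
import time
import statistics

def measure_latency_ms(operation, iterations=1000):
    """Time repeated calls to `operation`; return latency percentiles in ms."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        operation()
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return {
        "p50": statistics.median(samples),
        "p95": samples[int(len(samples) * 0.95) - 1],
        "max": samples[-1],
    }

# Hypothetical stand-in for the real call under test
# (e.g. a session-store read against a Redis cluster).
def fake_session_read():
    sum(range(100))

results = measure_latency_ms(fake_session_read)
print(results)
```

Note that this is spike-quality code by design: no error handling, no configuration, no tests. Its only job is to produce a number you can compare against the success criterion.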

3. Documentation

  • Findings: Concrete results with data where possible
  • Recommendations: Clear next steps with rationale
  • Risks & Challenges: Identified obstacles and their mitigation strategies
  • Code/Artifacts: Links to any prototypes (clearly marked as spike output, not production code)

4. Review

  • Presentation: Share results with the team and stakeholders
  • Feedback: Gather questions and alternative perspectives
  • Backlog Adjustments: Create, refine, or remove stories based on findings

5. Closure

  • Integration: Ensure knowledge is captured for future reference
  • Retrospection: Reflect on the spike process itself in your next retrospective

Spike Template

## Spike: [Title]

**Time-box**: [X days]
**Owner**: [Name]
**Sprint**: [Sprint number/name]

### Question to Answer
[Single, focused question this spike will answer]

### Background
[Why this spike is needed; what triggered the uncertainty]

### Assumptions
- [Assumption 1]
- [Assumption 2]

### Scope
**In Scope**:
- [Item 1]
- [Item 2]

**Out of Scope**:
- [Item 1]

### Success Criteria
- [ ] [Criterion 1]
- [ ] [Criterion 2]

### Findings
[To be completed during spike]

### Recommendation
[To be completed after spike]

### Follow-up Stories
- [ ] [Story 1]
- [ ] [Story 2]

The Role of Assumptions in Spikes

Every spike begins with assumptions—the foundational beliefs that frame the investigation. Making these explicit serves several purposes:

  • Establishes context: Stakeholders understand the starting point
  • Reveals biases: Assumptions can be challenged before investigation begins
  • Focuses effort: The spike tests assumptions rather than exploring aimlessly
  • Clarifies outcomes: Results are interpreted against stated assumptions

For example, a spike on database selection might assume: “Our data model is primarily relational” and “Read operations will outnumber writes 10:1.” If these assumptions prove incorrect during investigation, that itself is a valuable finding.
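Assumptions like the 10:1 read/write ratio above can often be checked cheaply with data rather than debated. A rough sketch, using hypothetical access-log lines (in a real spike these would come from production logs or a query-analytics export), might look like:

```python
from collections import Counter

# Hypothetical access-log lines; a real spike would pull these
# from production logs rather than hard-coding them.
log_lines = [
    "GET /api/session/abc",
    "GET /api/session/def",
    "PUT /api/session/abc",
    "GET /api/session/ghi",
]

READ_METHODS = {"GET", "HEAD"}

counts = Counter(
    "read" if line.split()[0] in READ_METHODS else "write"
    for line in log_lines
)
ratio = counts["read"] / max(counts["write"], 1)
print(f"reads={counts['read']} writes={counts['write']} ratio={ratio:.1f}:1")
```

If the measured ratio contradicts the stated assumption, the spike has already earned its keep before any deeper investigation begins.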

Common Pitfalls

Scope Creep: Spikes should answer a specific question, not become open-ended research projects. If new questions emerge, document them for future spikes rather than expanding the current one.

Gold-Plating Prototypes: Spike code is meant to be thrown away. The moment you start adding error handling or tests to spike code, you’ve likely crossed from exploration into implementation.

Skipping Documentation: A spike’s value extends beyond the immediate decision. Future team members facing similar questions benefit from recorded findings.

Treating Spikes as Commitments: A spike that reveals an approach won’t work is successful. The goal is learning, not validating a predetermined answer.

Integrating Spikes into Sprint Planning
#

When planning your sprints, consider:

  • Spike before you estimate: If a story has significant unknowns, schedule a spike in the current sprint and the actual work in a future sprint
  • Limit concurrent spikes: Multiple spikes dilute focus; one or two per sprint is typically sufficient
  • Don’t point spikes: Since spikes produce knowledge rather than working software, many teams track them by time-box rather than story points
  • Include spike findings in refinement: Present spike results before the team estimates related stories

Conclusion

Spikes transform uncertainty from a source of anxiety into an opportunity for learning. They acknowledge a fundamental truth of software development: we often don’t know what we don’t know until we investigate.

When used appropriately, spikes enable teams to make informed technical decisions, provide accurate estimates, and avoid costly mid-project pivots. They embody the Agile principle of responding to change—by investing in understanding before committing to action.

The next time your team faces a story that generates more questions than answers, consider whether a spike might be the most valuable work you could do. Sometimes the fastest path forward is to pause and learn.
