December 14, 2025

NPD process vs agile product development: Which wins?

Stage-gate NPD vs Agile: when to use upfront validation vs continuous discovery, and how research timing, methods, and risk change product decisions.

The way your company builds products determines when you talk to users, what questions you can ask, and how you act on what you learn. Your choice of development methodology shapes how research is conducted in both NPD and agile approaches.

Stage-gate NPD concentrates research upfront, within predefined project phases. You do big studies early, make decisions, then execute.

Agile spreads research throughout development. You do smaller studies constantly, learn as you build, and adjust direction. Many organizations now integrate both methodologies to get the strengths of each.

Neither is universally better for research. They’re optimized for different situations and require different research approaches.

Research in traditional stage-gate NPD

Stage-gate divides development into distinct phases, each with its own objectives and criteria. Careful planning and detailed documentation support clear requirements and smooth onboarding. Each phase has specific research activities that feed into gate decisions.

Each gate serves as a risk management checkpoint where resource allocation decisions are made against predefined criteria such as market viability and technical feasibility. The stage-gate process is particularly suited to complex projects involving large teams across multiple departments.

Discovery phase research

This is when you do foundational generative research to understand user problems, market opportunities, and unmet needs. Market research is a key discovery activity: understanding target audiences, analyzing competitors, and assessing feasibility before moving forward.

Procter & Gamble’s discovery research for Swiffer involved ethnographic studies watching people clean their homes. They spent six months observing cleaning behaviors before even thinking about product concepts. This deep upfront research revealed that people hated wringing out mops more than the actual mopping.

Your research methods here include:

  • Ethnographic field studies observing users in context

  • Contextual inquiry understanding workflows and pain points

  • Diary studies tracking behavior over weeks

  • Exploratory interviews identifying unmet needs

  • Jobs-to-be-done research understanding user motivations

The goal isn’t validating a specific solution. It’s understanding the problem space deeply enough to identify opportunities worth pursuing.

Medtronic’s research team spends 8-12 months in discovery for new medical devices, conducting hospital observations, physician interviews, and patient journey mapping before they propose specific product concepts.

Scoping phase research

Once you've identified an opportunity, scoping research tests whether specific concepts resonate with users and are technically feasible.

You're doing concept testing, competitive analysis, and early prototype evaluation. The research question shifts from "what problems exist?" to "would this specific solution work?"

Dyson's scoping research for the Airwrap involved showing early concepts to hairstyling enthusiasts, testing whether the Coanda effect concept made sense to consumers, and validating that people would pay premium prices for easier styling.

Philips tests 15-20 initial concepts during scoping, conducting quick evaluation studies with 30-50 users per concept. They kill most concepts at this stage based on user response.

Business case phase research

Now you're validating market size, refining the target audience, and confirming demand justifies investment. Research becomes more quantitative and financially focused.

You need to answer: How many people have this problem? How much would they pay? How often would they use it? What would make them switch from current solutions?

Tesla conducted extensive survey research during business case development for Model 3, quantifying demand at different price points and feature configurations before committing to production setup.

Research methods shift to:

  • Large-scale surveys (500+ respondents) quantifying opportunity

  • Conjoint analysis understanding feature priorities and price sensitivity

  • Market segmentation identifying target user groups

  • Competitive benchmarking quantifying your advantage

  • Purchase intent studies predicting conversion rates

The research rigor increases because you're justifying potentially millions in development investment. Executives want confidence before committing resources.

Development phase research

Once approved, development begins. In traditional stage-gate, this phase has surprisingly little user contact. You’re executing the plan validated in earlier phases. Decisions made in earlier phases impact the entire project lifecycle, limiting flexibility during the development phase.

Some companies do periodic validation studies checking that execution matches concepts users responded to. But fundamental direction changes are rare because you’ve already committed to tooling, manufacturing, and timelines.

This is stage-gate’s weakness from a research perspective. You validated concepts 12-18 months ago. Markets change. User needs evolve. But you’re locked into decisions made before users saw the real product.

Samsung’s stage-gate process includes quarterly validation checkpoints during development, where research teams test prototypes with users. But changes are limited to software and minor adjustments. Core hardware design is locked.

Testing and validation phase research

This is the testing phase of the stage-gate process, where you do final validation to ensure the product works as intended and matches user expectations. Frequent testing here (quality checks, reviews, and iterative assessments) catches execution problems before the next stage, reducing the risk of costly errors later.

Research methods include:

  • Usability testing identifying friction points

  • Beta testing with real users in real contexts

  • Satisfaction studies measuring delight

  • Regression testing confirming you didn’t break anything

  • Safety and compliance validation (for regulated products)

This phase catches execution problems but rarely leads to major changes. If users hate something fundamental, you’re often stuck launching anyway because you’ve spent millions developing it.

Johnson & Johnson’s medical device validation includes extensive usability testing with clinicians. But if research reveals major issues at this stage, they face difficult choices between expensive redesigns or launching with known problems.

Research in agile development

Agile treats development as continuous discovery, guided by the agile manifesto and its principles. Where stage-gate is sequential, agile is iterative. Research happens every sprint, informing immediate decisions rather than distant gate reviews.

Agile emphasizes collaboration, adaptability, and iterative progress, in contrast to stage-gate's predefined phases and formal evaluations. It breaks development into small increments called sprints, with typical phases including concept and initiation, sprint planning, design and prototyping, development, testing, review and retrospective, deployment, and maintenance. The aim is to deliver value more frequently through iterative, incremental progress.

Cross-functional development teams are central to agile, bringing together diverse expertise to enhance collaboration and deliver incremental value. Agile's iterative nature can mean faster time-to-market, whereas phase gates can slow innovation. The two methodologies can also be combined, with team expertise guiding which approach fits a given project.

However, implementing agile can be complex, especially for larger projects or organizations with distributed teams. It requires constant involvement and collaboration from team members and stakeholders, which can strain resources and time.

Research in sprint planning

At the start of each sprint, research informs what to build next. Effective sprint planning allocates research time to the highest-value questions. This isn’t six-month studies. It’s quick validation of the next highest-priority question.

Spotify’s research team maintains a “research backlog” alongside the product backlog. Before planning a sprint focused on playlist creation, they might do a quick five-user study understanding current playlist workflows.

Research questions are specific and actionable:

  • “Do users understand the current playlist interface?”

  • “What’s confusing about sharing playlists?”

  • “How do mobile and desktop playlist behaviors differ?”

You’re not doing comprehensive research. You’re answering the specific questions needed to start building confidently.

Continuous discovery practices

Agile teams that do research well integrate it into their regular rhythm rather than treating it as occasional projects.

Notion’s product teams talk to 3-5 users every week. Not big formal studies. Just quick 30-minute conversations about how people use features, what’s confusing, and what they wish existed.

This continuous contact means you catch problems early when they’re cheap to fix. You’re not waiting for quarterly research reports. You’re learning constantly. Continuous discovery also promotes collaboration among team members by encouraging open communication and shared understanding of user needs.

Intercom uses weekly research sessions where engineers, designers, and PMs observe users working with recent features. Everyone sees firsthand where people struggle, eliminating the translation loss of research reports. Innovation teams play a key role in integrating research into agile product development, ensuring insights are rapidly applied to improve products.

Testing in production

Agile’s biggest research advantage is you can test things with real users quickly and measure actual behavior instead of predicted behavior. Frequent testing is a core principle in agile development, enabling teams to conduct regular quality checks and iterative assessments throughout the product lifecycle.

You ship features to small user percentages, measure usage, run A/B tests, and let data guide decisions. This works for digital products where changes are cheap. Visual representation tools, such as Kanban boards, help teams monitor progress and manage testing activities by visually depicting workflow, status, and task progress.

Netflix famously tests different UI variations with real subscribers, measuring which designs lead to more watching. They run thousands of A/B tests annually, learning from actual behavior at scale.

Research becomes more quantitative and behavioral. Instead of asking “would you use this?” in interviews, you measure whether people actually use it when available.

Airbnb tests pricing displays, search filters, and booking flows through production experiments with 1-5% of users before rolling out to everyone. Real behavioral data beats survey responses.
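Production experiments like these still need a significance check before a variant wins. As an illustrative sketch (the two-proportion z-test shown here is one standard choice, not necessarily what Airbnb or Netflix uses; all numbers are made up), a team might compare conversion between control and variant like this:

```python
import math

def ab_test_z(conversions_a, n_a, conversions_b, n_b):
    """Two-proportion z-test for an A/B experiment.
    Returns (lift, z, two-sided p-value)."""
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical experiment: 4.8% vs 5.4% conversion on 10k users each
lift, z, p = ab_test_z(480, 10_000, 540, 10_000)
print(f"lift={lift:.4f} z={z:.2f} p={p:.4f}")
```

Even a visible lift can fail to reach significance at these sample sizes, which is one reason behavioral testing at scale beats small-sample prediction.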

Rapid iteration cycles

When research reveals problems, agile lets you fix them quickly. No waiting for the next release cycle or gate approval. This rapid iteration can be a real competitive advantage, letting teams respond quickly to customer needs.

Figma’s research team identifies usability issues through weekly testing sessions. Engineers fix problems the same week. Users see improvements in days, not months.

This tight feedback loop means you can take research risks. If an experiment fails, you revert quickly. In stage-gate, failed bets are expensive because you’ve committed more before learning.

Slack ships features flagged to small internal teams first, gathers feedback, iterates rapidly, then gradually expands to more users. Each expansion phase includes research validating improvements.
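Gradual rollouts like this are typically implemented with deterministic bucketing, so a user's assignment stays stable as the percentage grows. A minimal sketch (hash-based bucketing is a common pattern; the function and feature names here are hypothetical, not Slack's implementation):

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Deterministically bucket a user into a gradual rollout.

    Hashing user_id together with the feature name gives each feature
    an independent, stable assignment: expanding the rollout from
    1% -> 5% -> 25% only adds users, it never churns existing ones.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < percent / 100.0

# A user's assignment is stable across expansion stages
for pct in (1, 5, 25, 100):
    print(pct, in_rollout("user-42", "new-editor", pct))
```

Because the bucket is derived from the IDs rather than stored state, every service that evaluates the flag agrees on who is enrolled without coordination.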

The research debt problem

Agile’s weakness is that teams can get so focused on shipping that research becomes reactive rather than strategic. Agile can also be hard to run on large, complex projects, especially in organizations with distributed teams, where coordination and communication hurdles are amplified.

You’re answering tactical questions (“is this button confusing?”) but missing strategic questions (“are we building the right thing?”). The urgency of the next sprint crowds out deeper generative research.

Pinterest addressed this by dedicating one researcher to “strategic research” separate from sprint work. This person does longitudinal studies, exploratory research, and foundational work that doesn’t tie to immediate shipping deadlines.

How research methods differ between approaches

The same research technique works differently in each methodology. Your choice of development process shapes how teams plan, execute, and adapt their research activities.

User interviews

Stage-gate interviews: Conducted in batches during specific phases. You might interview 50 users over two weeks during discovery, synthesize findings, then not talk to users again for months.

The interviews are comprehensive, covering broad topics to inform the entire development cycle. You're gathering everything you might need because this is your research window.

Agile interviews: Continuous and focused. You interview 3-5 users weekly about specific recent features or upcoming changes. Each interview addresses immediate questions rather than comprehensive understanding.

The interviews feel more conversational because you're checking in regularly rather than extracting all possible information in one session.

Usability testing

Stage-gate usability testing: Happens late in development, often during validation phase. You're testing nearly finished products, making major changes expensive.

Tests are formal with detailed protocols, large sample sizes (15-30 users), and comprehensive coverage. You want to find everything because this might be your last chance before launch.

Agile usability testing: Happens continuously with rough prototypes and half-built features. Tests are informal, small (3-5 users), and focused on specific interactions.

You're testing to learn what to build next, not validate something finished. Finding problems is good because you can fix them tomorrow.

Surveys

Stage-gate surveys: Large-scale validation tools (500-5000 respondents) during business case development. You need statistically significant data for investment decisions.

Surveys are comprehensive, covering market size, feature priorities, pricing, and competitive positioning. They're expensive ($10,000-50,000) but justify big investments.

Agile surveys: Quick pulse checks (50-200 respondents) validating specific hypotheses. "Would you use this feature?" or "Which of these designs do you prefer?"

The surveys are cheaper ($500-2,000), faster, and more tactical. You're informing immediate decisions, not justifying long-term investments.

Prototype testing

Stage-gate prototype testing: High-fidelity prototypes resembling the finished product. You're validating that your planned solution works before committing to production.

Testing happens in phases, with prototypes getting increasingly polished. Early prototypes test concepts. Later prototypes validate execution details.

Agile prototype testing: Low-fidelity prototypes, sketches, or clickable mockups testing ideas before building anything. You might test paper prototypes or Figma mockups that take hours to create instead of weeks.

The goal is learning cheaply before writing code. You're willing to test rough ideas because rejection means saving development time, not wasting it.

Where stage-gate research works better

Traditional NPD with concentrated upfront research makes sense in specific contexts. The stage-gate model’s structured planning is essential for managing risk in high-stakes projects, such as those in regulated or capital-intensive industries. Detailed documentation supports compliance by keeping requirements clear and minimizing misinterpretation, and the gates provide a framework for risk management and resource allocation aligned with strategic goals.

Physical products requiring tooling

When manufacturing setup costs hundreds of thousands, you need confident research before committing. You can't iterate after you've built injection molds.

Dyson's vacuum development includes extensive user testing before finalizing designs because changing physical components post-tooling is prohibitively expensive. They do 50+ user studies during concept and business case phases validating every aspect.

Research must answer all critical questions before development begins. You can't ship a minimum viable vacuum and iterate based on user feedback.

Long development cycles

When development takes 2-3 years, comprehensive upfront research makes sense because you won't get feedback opportunities mid-development.

Automotive companies conduct massive research programs before designing new vehicles. Toyota might talk to 5,000+ potential customers during concept development because once design is locked, you're committed for 5+ years.

Regulated products

FDA and similar regulatory bodies want to see research documentation showing you validated user needs, tested prototypes, and confirmed safety before human trials.

Medical device companies maintain detailed research documentation at each gate. Research doesn't just inform decisions, it proves you followed proper development procedures.

Philips' medical equipment research includes formal usability validation with specific sample sizes and statistical analysis meeting regulatory requirements. Agile's informal continuous research wouldn't satisfy compliance needs.

Enterprise products with long sales cycles

When customers make purchase decisions based on 18-month roadmaps, you need research validating features before promising them.

Salesforce conducts extensive research with enterprise customers during planning, validating feature priorities before committing to roadmaps. Customers won't buy based on "we'll figure it out as we go."

Products where failure is expensive

When product recalls, safety incidents, or reputation damage from poor launches are costly, stage-gate's validation rigor pays off.

Automotive companies do extensive safety validation because recalls cost tens of millions plus reputation damage. The research thoroughness stage-gate demands prevents expensive failures.

Where agile research works better

Continuous lightweight research produces better outcomes in different situations. Agile processes and cross-functional teams enable rapid adaptation in digital product development by fostering collaboration, flexibility, and efficient problem-solving. Regular retrospectives give teams space to reflect on progress and adapt plans without disrupting the iterative workflow.

Digital products with cheap changes

Software's flexibility makes continuous research practical. Why predict everything upfront when you can test with real users and adjust quickly?

Notion, Figma, and Linear do continuous research because software changes cost almost nothing. They validate as they build rather than predicting perfectly before building.

Uncertain markets or user needs

When you don't know what users want or needs are evolving, agile's learning orientation beats stage-gate's prediction orientation.

TikTok's algorithm development relies on continuous experimentation measuring actual usage patterns. No amount of upfront research could predict which algorithm changes increase engagement. They need to test and measure constantly.

Products where usage reveals insights

Sometimes you can't understand user needs until they use the real product. Interviews and prototypes miss emergent behaviors.

Instagram started as Burbn, a complex check-in app. Only after launch did research reveal users primarily cared about photo sharing. Agile's flexibility enabled the pivot to Instagram.

Fast-moving competitive environments

When competitors ship weekly, stage-gate's 18-month research cycles leave you behind. You need continuous research keeping pace with market changes.

Gaming companies like Riot (League of Legends) conduct ongoing research with players, testing balance changes, new features, and content updates weekly. Stage-gate research cycles wouldn't match their ship cadence.

Products requiring behavioral data

When you need to measure what users actually do rather than what they say they'll do, agile's production testing beats stage-gate's pre-launch validation.

Netflix learns more from A/B testing with real subscribers than from any amount of pre-launch research. Behavioral data at scale reveals truths interviews miss.

Hybrid research approaches that work

Smart research teams mix methods based on what they’re learning. Hybrid approaches to new product development combine agile processes with a structured approach, allowing organizations to leverage the strengths of both methodologies.

A hybrid approach integrates elements of both: agile's flexibility, collaboration, and speed with phase-gate's structured governance and risk management. Agile's iterative development and customer collaboration make the traditional phase-gate process more adaptable and responsive to feedback, while the gates still provide structure for planning, stakeholder communication, and accountability.

Dual-track agile

Run continuous research alongside development but periodically do deeper strategic research informing longer-term direction.

Amplitude's research team runs weekly tactical research supporting sprint work. But quarterly, they pause for two-week strategic research projects exploring broader user needs and market opportunities.

This combines agile's continuous learning with periodic deeper investigation avoiding pure short-term focus.

Staged research with agile execution

Do comprehensive research at major commitment points but execute agilely between them.

Stripe conducts extensive research before entering new markets or building major new product categories. But within those areas, teams ship continuously based on ongoing learning.

The gates provide strategic validation. Agile execution enables tactical flexibility.

Research sprints

Dedicate specific sprints to pure research rather than feature development. This builds strategic learning into agile rhythm.

Asana runs "research sprints" every quarter where teams focus entirely on understanding users rather than shipping features. This prevents the reactive research trap where you only answer tactical questions.

Continuous improvement in product research

Continuous improvement drives successful product development by enabling teams to adapt products based on feedback and market changes. Embedding learning loops in both stage-gate and agile processes ensures insights from each phase inform the next, fostering a feedback-rich environment that promotes ongoing refinement.

Agile methodology emphasizes leveraging continuous user feedback to guide decisions, reducing costly rework and accelerating time to market. Frequent feedback loops help teams quickly address issues and deliver products aligned with customer needs.

Balancing rapid iterations with a clear long-term vision is essential. Structured approaches like stage-gate provide frameworks to monitor progress and keep each phase aligned with strategic goals. This balance keeps products innovative, customer-focused, and aligned with business objectives.

Adapting your research practice

Your research approach should match your development methodology. Team expertise matters too: your team's skills and experience significantly influence whether an agile or traditional NPD process is the better fit.

If you're stage-gate

Front-load generative research. Do extensive discovery work understanding user needs, pain points, and jobs to be done before concepts are defined.

Plan for comprehensive studies. Each research phase needs to answer all relevant questions because you won't have another chance for months.

Document thoroughly. Research needs to inform gate decisions and convince stakeholders. Create detailed reports, presentations, and artifacts.

Validate completely before development. Once you enter development, changing direction is expensive. Make sure research has answered critical questions.

Budget for large studies. Stage-gate research needs statistical confidence. Plan for sample sizes and costs supporting that rigor.
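The sample sizes stage-gate surveys demand follow directly from the margin of error you need. A quick sketch of the standard proportion formula shows why 500+ respondents is the norm for investment decisions:

```python
import math

def sample_size_for_proportion(margin: float, confidence_z: float = 1.96,
                               p: float = 0.5) -> int:
    """Respondents needed so a proportion estimate is within +/- margin.

    Standard formula n = z^2 * p(1-p) / margin^2, using the worst-case
    p = 0.5. At 95% confidence (z = 1.96), a 4-point margin of error
    already requires about 600 respondents.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n)

print(sample_size_for_proportion(0.04))  # 601 respondents at 95% confidence
print(sample_size_for_proportion(0.02))  # tighter margins grow quadratically
```

Halving the margin of error quadruples the required sample, which is why survey costs climb so steeply once executives ask for precise market-size numbers.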

If you're agile

Build continuous research into sprint rhythm. Make user contact a regular practice, not occasional events. Talk to users weekly.

Keep studies small and focused. Answer specific immediate questions rather than comprehensive understanding. Three users revealing a clear problem is enough to act.
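The "small studies are enough" claim has a quantitative basis: the classic problem-discovery model (often attributed to Nielsen and Landauer) estimates what share of usability problems n users will surface. A sketch, treating the often-cited 31% per-user detection rate as an assumption rather than a universal constant:

```python
def problems_found(n_users: int, detect_prob: float = 0.31) -> float:
    """Expected share of usability problems seen at least once.

    A problem each user independently hits with probability p is found
    by at least one of n users with probability 1 - (1 - p)^n. With the
    commonly cited p = 0.31, five users surface roughly 84% of problems.
    """
    return 1 - (1 - detect_prob) ** n_users

for n in (1, 3, 5, 10):
    print(n, round(problems_found(n), 2))
```

The curve flattens quickly past five users, which is why weekly three-to-five-user sessions beat one large quarterly study for catching problems early.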

Share findings rapidly. Skip lengthy reports. Show video clips, share quotes, and communicate insights immediately while decisions are still pending.

Test in production. Use feature flags, A/B tests, and gradual rollouts to learn from real usage rather than just predicted usage.

Balance tactical and strategic. Dedicate capacity to exploratory research alongside sprint-focused research preventing purely reactive practice.

Common research mistakes in each approach

Stage-gate research mistakes

Analysis paralysis. Teams keep researching rather than deciding, afraid to move to the next gate without perfect information.

Research theater. Going through research motions to check boxes rather than genuinely learning. The research validates predetermined decisions rather than challenging assumptions.

Ignoring late findings. When validation research reveals problems late in development, teams ignore findings because changes are expensive.

Nokia's research showed touchscreens were the future, but their stage-gate process meant acting on insights took years. By the time they shipped touchscreen phones, the market had moved on.

Over-reliance on stated preferences. Stage-gate research often happens before users experience products, forcing reliance on what people say they'd do rather than what they actually do.

Agile research mistakes

No strategic research. Teams answer tactical questions sprint by sprint but never step back to question strategic direction.

Research debt accumulation. Shipping without research because "we'll learn after launch" creates messes requiring expensive fixes later.

Cherry-picking data. With continuous testing, it's tempting to find data supporting what you want to build rather than genuinely learning.

Google has killed dozens of products that never found product-market fit despite constant iteration. Tactical research without strategic research sometimes means optimizing the wrong thing.

Mistaking activity for insight. Talking to users weekly feels productive but generates insight only if you're learning things that change decisions.

What actually makes research effective

Your methodology matters less than these fundamentals.

Effective research also depends on collaboration across the team. Product managers, developers, and designers working together keep research actionable, support continuous improvement, and tie results to business goals.

Actually talking to users

Teams that maintain regular user contact make better products regardless of methodology. Teams that don't stay disconnected regardless of process.

Superhuman succeeds with lightweight processes because they obsessively gather feedback. Companies with elaborate research programs fail if researchers don't talk to real users.

Acting on what you learn

Research only matters if it changes decisions. If findings sit in reports nobody reads, the methodology is irrelevant.

The best teams make research visible. They share video clips in Slack, invite observers to sessions, and discuss findings immediately while decisions are pending.

Asking the right questions

Good researchers know what questions matter for upcoming decisions and design studies accordingly. Poor researchers answer easy questions nobody needs answered.

Accepting negative findings

Research revealing you're wrong is more valuable than research confirming you're right. Teams that can't handle negative findings waste money validating bad ideas.

Matching rigor to risk

High-risk decisions need rigorous research. Low-risk decisions need quick validation. Teams that treat everything equally waste resources.

The honest answer for researchers

Stage-gate and agile aren’t inherently good or bad for research. They’re optimized for different situations requiring different research approaches.

If you’re researching pacemakers, embrace stage-gate’s validation rigor. It matches regulatory needs and physical constraints. If you’re researching social media features, embrace agile’s continuous learning. It matches the speed of change and the low cost of iteration. If you’re researching cars, figure out which aspects need which approach: safety research needs stage-gate rigor, while infotainment research can be agile.

Stop defending methodologies. Start thinking about what research practices your specific situation requires. The teams producing the best insights are pragmatic, borrowing from both approaches and adapting to their reality. Integrating the strengths of both methodologies can be a lasting competitive advantage.

Ready to act on your research goals?

If you’re a researcher, run your next study with CleverX

Access identity-verified professionals for surveys, interviews, and usability tests. No waiting. No guesswork. Just real B2B insights, fast.

Book a demo
If you’re a professional, get paid for your expertise

Join paid research studies across product, UX, tech, and marketing. Flexible, remote, and designed for working professionals.

Sign up as an expert