Product Research
November 18, 2025

Idea screening: filter bad ideas fast (before they waste your time)

Master idea screening with proven frameworks. Learn how to filter bad product ideas fast using systematic criteria and scoring methods that work.

In the fast-paced world of product development, not every idea is worth pursuing. Companies must quickly identify which concepts have potential and which should be set aside. A robust screening process helps teams avoid wasting time and resources on ideas that are unlikely to succeed.

Effective idea screening is designed to eliminate product concepts that don't meet key criteria, ensuring only the most promising ideas move forward in the development process.

The $585 million idea that should have been killed

In 2011, a team at Google had a “revolutionary” idea: Google+, a social network to compete with Facebook.

They had 1,000+ engineers working on it. They integrated it into every Google product. Gmail, YouTube, Maps, everything required a Google+ account.

Cost: Over $585 million invested
Result: Shut down in 2019 after years of single-digit market share

The idea failed to meet key criteria for successful product ideas:

  • Facebook had 900M users with strong network effects
  • No clear differentiation beyond “it’s by Google”
  • Users actively hated forced integration
  • No validated demand for another social network

A 30-minute screening session could have killed this idea and saved $585 million.

This guide shows you how to systematically screen product ideas so you only invest in concepts with real potential and kill the rest before they waste resources.

What is idea screening?

Idea screening is the process of evaluating and filtering product ideas using systematic criteria before investing in development. It’s the second stage of new product development, right after idea generation. Idea screening helps teams focus on the most promising ideas and reduce wasted effort by eliminating less viable options early.

The goal: Kill 80-90% of ideas quickly so you can focus resources on the 10-20% with real potential. A systematic screening process allows you to filter ideas efficiently and ensure only the best concepts move forward.

Why screening matters:

Without screening:

  • Teams pursue too many ideas simultaneously
  • Resources spread thin across mediocre concepts
  • Bad ideas waste months of development time
  • Good ideas get starved of attention

With rigorous screening:

  • Focus on highest-potential concepts and the most promising ideas
  • Kill bad ideas before they waste resources
  • Clear decision-making criteria
  • Identify and prioritize promising ideas for development
  • 5-10x higher success rate

The brutal truth: Most ideas should be killed. That’s not a failure; that’s exactly what screening is supposed to do, so you can identify and prioritize the most promising ideas.

The two stages of idea screening

Stage 1: initial screening (go/no-go)

Purpose: Quickly eliminate obviously bad ideas
Time: 15-30 minutes per idea
Criteria: Binary (pass/fail)
Kill rate: 60-70% of ideas

When to use: Right after idea generation, when you have dozens or hundreds of raw ideas. This stage is designed to handle as many ideas as possible before moving to more detailed evaluation.

This process is used to quickly filter potential ideas, ensuring only the most promising concepts move forward for further screening and development.

Stage 2: detailed screening (scoring & ranking)

Purpose: Rank surviving ideas by potential using objective scoring models
Time: 2-3 hours per idea
Criteria: Weighted scoring across multiple dimensions
Kill rate: Additional 20-30% of remaining ideas

When to use: After initial screening, with 10-20 ideas that passed stage 1.

Scoring models are used at this stage to objectively rank ideas by assigning weights and ratings to key criteria such as user value, feasibility, and cost. This structured approach helps teams make data-driven decisions and ensures that the most promising ideas are prioritized for further consideration.

The selected ideas from this stage move forward to concept development or further validation.

Stage 1: initial screening framework

Kill ideas that fail any of these must-have criteria. Using a defined set of criteria ensures consistency and objectivity in the screening process.

Before applying the framework, establish specific criteria based on your company goals and market needs, so each idea is evaluated against measurable standards before moving forward.

Must-have criterion #1: real problem exists

The question: Does this solve a problem that target users actually experience?

Identifying genuine customer problems is crucial—your idea should address real, unmet needs that customers face, not just assumptions.

How to evaluate:

Pass if:

  • You’ve talked to 5+ target users who have this problem
  • They can describe specific instances of experiencing it
  • It happens at least weekly for target users
  • They currently use workarounds (proves it’s real)

Fail if:

  • “I think people have this problem” (no validation)
  • Only founders/team members experience it
  • It’s a “vitamin” (nice-to-have) not a “painkiller” (must-have)
  • No current workarounds exist (suggests not painful enough)

Example:

Idea: “App that reminds you to drink water”

Screening:

  • Q: Do people actually forget to drink water and suffer consequences?
  • Q: Is this painful enough that they’d pay to solve it?
  • Q: Do dehydration symptoms drive behavior change?

Likely result: FAIL - Nice to have, not must-have. Free apps exist. No validated demand at price point needed.

Must-have criterion #2: strategic fit

The question: Does this align with our company strategy, capabilities, and resources?

How to evaluate:

Pass if:

  • Aligns with company mission and vision
  • Supports overall business objectives
  • Leverages existing capabilities or partnerships
  • Team has domain expertise
  • Fits within resource constraints (time/budget/team)

Fail if:

  • Requires capabilities you don’t have and can’t acquire
  • Contradicts current business model
  • Would require complete pivot of company
  • Team lacks relevant expertise

Example:

Company: B2B SaaS productivity tools
Idea: “Consumer gaming app”

Screening:

  • Different target market (B2B → B2C)
  • Different business model (subscription → ads/IAP)
  • No gaming expertise on team
  • Requires completely different GTM strategy

Result: FAIL - Strategic misfit despite potentially good idea

Must-have criterion #3: market size justifies investment

The question: Is the potential market large enough to warrant our investment?

How to evaluate:

Pass if:

  • TAM (Total Addressable Market) > $100M for VC-backed
  • TAM > $10M for bootstrapped
  • SAM (Serviceable Available Market) > $20M
  • SOM (Serviceable Obtainable Market) > $2M in 3 years
  • Market demand is validated through research, competitive analysis, and external factors (e.g., PESTEL analysis)

Fail if:

  • Niche market under $10M total
  • Market shrinking year-over-year
  • Too many established competitors with 90%+ share
  • Can’t realistically capture meaningful share

Quick market sizing:

TAM = [Total potential customers] × [Annual revenue per customer]

Example:

  • 50,000 potential customers
  • $500/year revenue per customer
  • TAM = $25M  (Passes for bootstrap)
  • Also consider the growth potential of the market to ensure future expansion opportunities and long-term success.
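The quick sizing formula above is simple enough to sketch in code. This is an illustrative helper (the function name and numbers are just the example from this section, not a standard API):

```python
# Quick market-sizing sketch using the formula above:
# TAM = total potential customers x annual revenue per customer.
def tam(potential_customers: int, annual_revenue_per_customer: float) -> float:
    return potential_customers * annual_revenue_per_customer

market = tam(potential_customers=50_000, annual_revenue_per_customer=500)
print(f"TAM = ${market / 1e6:.0f}M")  # TAM = $25M -> passes for bootstrap
```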

Must-have criterion #4: technical feasibility

The question: Can we actually build this with available technology and resources?

How to evaluate:

Pass if:

  • Technology exists or is achievable
  • Team has or can acquire necessary skills
  • Development timeline is reasonable (6-18 months)
  • No insurmountable technical barriers
  • All technical aspects (such as design, engineering details, and use of tools like CAD or prototyping) have been considered and are feasible

Fail if:

  • Requires breakthrough technology that doesn’t exist
  • Team completely lacks technical capability
  • Would take 5+ years to build
  • Dependent on external technology not yet available

Example:

Idea: “AI that predicts stock market with 99% accuracy”

Screening:

  • Not technically feasible with current AI
  • If it were possible, would break financial markets
  • False promise that can’t be delivered

Result: FAIL - Technical impossibility

Must-have criterion #5: no fatal legal/regulatory barriers

The question: Are there legal or regulatory issues that would prevent launch or scale?

How to evaluate:

Pass if:

  • No obvious legal violations
  • Regulatory path is clear (even if lengthy)
  • Compliance requirements are achievable
  • IP concerns can be addressed

Fail if:

  • Violates existing laws
  • Requires regulatory approval with <20% success rate
  • IP is blocked by competitors' patents
  • Privacy/security requirements are insurmountable

Example:

Idea: "Uber for private jets without pilot licenses"

Screening:

  • FAA regulations require certified pilots
  • Insurance requirements would be prohibitive
  • Liability exposure is massive

Result: FAIL - Regulatory impossibility

Initial screening checklist:

Use this for rapid go/no-go decisions:

For each of the five must-have criteria, mark the idea as pass or fail and add notes with supporting evidence:

  • Real problem exists: pass / fail
  • Strategic fit: pass / fail
  • Market size justifies investment: pass / fail
  • Technical feasibility: pass / fail
  • No fatal legal/regulatory barriers: pass / fail

Decision rule: an idea must pass all five criteria to move forward to the detailed screening stage.

Result: 60-70% of ideas should fail initial screening. That’s good: it means the filter is working.
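The all-five-must-pass decision rule can be sketched as a tiny gate in code. The class and field names here are hypothetical, and the example reuses the water-reminder app from criterion #1:

```python
# Hypothetical go/no-go gate over the five must-have criteria.
from dataclasses import dataclass

@dataclass
class InitialScreen:
    real_problem: bool
    strategic_fit: bool
    market_size_ok: bool
    technically_feasible: bool
    no_legal_barriers: bool

    def passes(self) -> bool:
        # An idea advances only if it passes ALL five criteria.
        return all([self.real_problem, self.strategic_fit,
                    self.market_size_ok, self.technically_feasible,
                    self.no_legal_barriers])

water_app = InitialScreen(real_problem=False, strategic_fit=True,
                          market_size_ok=True, technically_feasible=True,
                          no_legal_barriers=True)
print("ADVANCE" if water_app.passes() else "KILL")  # KILL
```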

Tip: Define key performance indicators (KPIs) to measure the effectiveness and success of your screening process over time.

Stage 2: detailed screening (scoring matrix)

Ideas that passed Stage 1 now get ranked using weighted scoring.

The weighted scoring framework

How it works:

  1. Define 5-10 evaluation criteria
  2. Assign weight to each (total = 100%)
  3. Score each idea on each criterion (1-10)
  4. Calculate weighted score
  5. Rank ideas by score

Recommended scoring criteria:

When evaluating new product ideas, define key assessment criteria upfront so your process is objective and aligned with business goals. The scoring matrix below helps you prioritize ideas, but establish the criteria and weights for each factor before scoring to maintain consistency and focus.

1. Market opportunity (weight: 25%)

Score 1-10:

  • 10: Massive underserved market ($500M+ TAM), growing rapidly
  • 7: Large market ($100-500M TAM), stable growth
  • 5: Moderate market ($20-100M TAM), slow growth
  • 3: Small market ($5-20M TAM), flat
  • 1: Tiny market (< $5M TAM) or shrinking

What to measure:

  • Total addressable market size
  • Market growth rate (CAGR)
  • Competitor market share concentration
  • Ease of reaching target customers

2. Competitive differentiation (weight: 20%)

Score 1-10:

  • 10: Breakthrough innovation, no direct competitors, 10x better
  • 7: Clear differentiation, 3-5x better than alternatives
  • 5: Incrementally better, some unique features
  • 3: Similar to existing solutions, minor improvements
  • 1: Worse than current alternatives or “me-too” product

What to measure:

  • Unique value proposition clarity
  • Superiority vs. current solutions
  • Defensibility (moats)
  • Difficulty for competitors to copy

3. Strategic fit (weight: 15%)

Score 1-10:

  • 10: Perfect alignment, leverages all core strengths
  • 7: Strong fit, uses most capabilities
  • 5: Moderate fit, requires some new capabilities
  • 3: Weak fit, mostly new territory
  • 1: Complete misalignment with strategy

What to measure:

  • Alignment with company mission
  • Leverage of existing assets/capabilities
  • Team expertise match
  • Brand consistency

4. Customer demand & validation (weight: 20%)

Score 1-10:

  • 10: Strong validated demand, customers asking for this, pre-orders
  • 7: Clear demand signals, positive validation tests
  • 5: Moderate interest, some validation
  • 3: Weak signals, limited validation
  • 1: No validation, assumption-based

What to measure:

  • Problem validation interview results
  • Concept test scores (purchase intent)
  • Pre-sales or LOI count
  • Competitor success with similar concepts

5. Resource requirements (weight: 10%)

Score 1-10:

  • 10: Minimal resources, can ship MVP in 1-2 months
  • 7: Moderate resources, 3-6 month timeline
  • 5: Significant resources, 6-12 month timeline
  • 3: Major resources, 12-18 months
  • 1: Massive resources, 24+ months

What to measure:

  • Development cost estimate
  • Team size required
  • Time to first customer
  • Opportunity cost vs. other projects

6. Revenue potential (weight: 10%)

Score 1-10:

  • 10: $10M+ ARR potential in Year 3
  • 7: $5-10M ARR potential
  • 5: $2-5M ARR potential
  • 3: $500K-2M ARR potential
  • 1: < $500K ARR potential

What to measure:

  • Pricing validation results
  • Customer LTV estimates
  • Market share assumptions
  • Monetization model clarity

Decision thresholds:

  • 8.0+: Must-build, top priority
  • 7.0-7.9: Strong idea, build if resources available
  • 6.0-6.9: Keep in backlog, revisit quarterly
  • < 6.0: Kill unless something changes dramatically
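The weighted scoring matrix and decision thresholds above can be expressed as a short sketch. The weights are the ones recommended in this section; the sample scores are purely illustrative:

```python
# Weighted scoring sketch: weights sum to 100%, each criterion scored 1-10.
WEIGHTS = {
    "market_opportunity": 0.25,
    "competitive_differentiation": 0.20,
    "strategic_fit": 0.15,
    "customer_demand": 0.20,
    "resource_requirements": 0.10,
    "revenue_potential": 0.10,
}

def weighted_score(scores: dict) -> float:
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

def decision(score: float) -> str:
    if score >= 8.0:
        return "Must-build, top priority"
    if score >= 7.0:
        return "Strong idea, build if resources available"
    if score >= 6.0:
        return "Keep in backlog, revisit quarterly"
    return "Kill unless something changes dramatically"

idea = {"market_opportunity": 7, "competitive_differentiation": 8,
        "strategic_fit": 9, "customer_demand": 6,
        "resource_requirements": 7, "revenue_potential": 5}
s = weighted_score(idea)
print(f"{s:.2f} -> {decision(s)}")
```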

Alternative screening methods

1. Ice score (simpler alternative)

ICE = Impact × Confidence × Ease

Impact (1-10): How much will this move the needle?
Confidence (1-10): How confident are we this will work?
Ease (1-10): How easy is it to implement?

Formula: ICE Score = Impact × Confidence × Ease

When to use: Quick prioritization with small teams, less formal screening
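With the multiplicative form shown above (each factor rated 1-10), ICE reduces to a one-liner:

```python
# ICE score: Impact x Confidence x Ease, each rated 1-10 (max 1000).
def ice(impact: int, confidence: int, ease: int) -> int:
    return impact * confidence * ease

print(ice(impact=8, confidence=6, ease=7))  # 336
```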

2. Rice score (detailed alternative)

RICE = Reach × Impact × Confidence / Effort

Reach: How many customers will this affect per quarter?
Impact: Scoring 0.25 (minimal) to 3 (massive)
Confidence: Percentage (0-100%)
Effort: Person-months of work

Formula: RICE = (Reach × Impact × Confidence) / Effort
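The RICE formula follows the same pattern, with effort dividing rather than multiplying. The input values below are illustrative:

```python
# RICE: reach (customers/quarter) x impact (0.25-3) x confidence (0-1),
# divided by effort in person-months.
def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    return reach * impact * confidence / effort

print(rice(reach=2000, impact=2, confidence=0.5, effort=4))  # 500.0
```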

3. Kano model (customer satisfaction)

Classifies features into categories based on how they affect customer satisfaction:

Basic needs (must-have):

  • Expected by customers
  • Absence causes dissatisfaction
  • Presence doesn’t increase satisfaction
  • Example: Email encryption, app doesn’t crash

Performance needs (more is better):

  • Satisfaction increases linearly with quality
  • Example: Faster load times, better search results

Delighters (wow factor):

  • Unexpected features
  • High satisfaction when present
  • No dissatisfaction when absent
  • Example: Dark mode, delightful animations

Survey customers with paired questions:
Use idea screening surveys to gather feedback on potential features and concepts for Kano analysis.

  1. How would you feel if [feature] was present? (Positive)
  2. How would you feel if [feature] was absent? (Negative)

Answers:

  • I like it
  • I expect it
  • I’m neutral
  • I can tolerate it
  • I dislike it


Map responses to identify category.

Screening decision: Prioritize performance and delighter features after covering basic needs.
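Mapping the paired answers to a category can be sketched as a small classifier. The mapping below is a simplified version of the standard Kano evaluation table (a common interpretation, not the only one), using this article's category names:

```python
# Simplified Kano classifier over the paired-question answers above.
LIKE, EXPECT, NEUTRAL, TOLERATE, DISLIKE = range(5)

def kano_category(present: int, absent: int) -> str:
    """present/absent: answer when the feature is present vs absent."""
    if present == LIKE and absent == LIKE:
        return "questionable"  # contradictory answers
    if present == LIKE and absent == DISLIKE:
        return "performance"   # more is better
    if present == LIKE and absent in (EXPECT, NEUTRAL, TOLERATE):
        return "delighter"     # wow factor
    if present in (EXPECT, NEUTRAL, TOLERATE) and absent == DISLIKE:
        return "basic"         # must-have
    if absent == LIKE:
        return "reverse"       # users prefer the feature absent
    return "indifferent"

print(kano_category(present=LIKE, absent=NEUTRAL))    # delighter
print(kano_category(present=NEUTRAL, absent=DISLIKE)) # basic
```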

Screening criteria by product stage

Early-stage startup screening

Prioritize:

  • Speed to market (can ship in 3 months?)
  • Founder passion (will you work on this for 10 years?)
  • Clear early customers (can you name 10 potential users?)
  • Minimal resources (can build with <$50K?)

De-prioritize:

  • Market size (can start small and expand)
  • Perfect strategic fit (you're still finding strategy)

Growth-stage company screening

Prioritize:

  • Strategic fit (builds on existing strengths)
  • Retention impact (reduces churn or increases engagement)
  • Revenue potential (clear path to incremental ARR)
  • Competitive differentiation (strengthens moat)


Enterprise company screening

Prioritize:

  • Market leadership (maintains or grows market share)
  • Platform integration (works with existing products)
  • Risk mitigation (defensive moves vs competitors)
  • Regulatory compliance (meets enterprise standards)

De-prioritize:

  • Innovation for innovation's sake
  • Small market opportunities (<$100M)

Common screening mistakes

1. No kill criteria defined

The mistake: "Let's just build everything and see what works!"

The reality: Without screening, resources spread thin and nothing succeeds.

The fix: Define must-have criteria upfront. Stick to them even when you love an idea.

2. Founder bias overrides data

The mistake: CEO loves an idea, so it bypasses screening.

The reality: Founder-loved ideas fail at the same rate as any others.

The fix: Separate idea advocates from screening committee. Use blind scoring when possible.

3. Analysis paralysis

The mistake: Spending 10 hours screening ideas worth 1 hour of development.

The reality: Screening should be fast. If you can't decide, the idea probably isn't strong enough.

The fix: Time-box screening sessions:

  • Initial screening: 15-30 min per idea
  • Detailed screening: 2-3 hours per idea
  • Final decision: 1 hour meeting

4. Sunk cost fallacy

The mistake: "We already spent 20 hours on this idea, we can't kill it now!"

The reality: Sunk costs are irrelevant. Future waste matters.

The fix: Evaluate ideas only on forward-looking potential, not past investment.

5. Screening too late

The mistake: Building a prototype before screening the core idea.

The reality: Screening is meant to prevent building, not validate what's already built.

The fix: Screen immediately after idea generation, before any development work begins.

Real-world screening examples

Case study 1: Amazon's "working backwards" screening

Method: Write a press release before building anything

Screening questions:

  • Can you write a compelling press release?
  • Does the customer benefit come through clearly?
  • Would you share this with friends?
  • Is there a "wow" moment?

If answers are "no," the idea is killed.

Example: Amazon Prime screening

  • Clear benefit: Free 2-day shipping
  • Wow moment: Unlimited for $79/year
  • Strong press release
  • Result: Approved, became $35B+ business

Case study 2: Apple's product screening

Known criteria (from interviews with former PMs and expert networks):

  • Does it advance our strategic narrative?
  • Is it 10x better than alternatives?
  • Can only Apple build this well?
  • Does it integrate with our ecosystem?
  • Will customers understand the value in 10 seconds?

Famous kills:

  • Apple car (failed strategic fit)
  • Apple TV content production (eventually revived)
  • Countless iPhone features (failed "10x better" test)

Case study 3: Google's "toothbrush test"

Screening question: "Would you use this product once or twice a day?"

Logic: Products used daily become habits. Habits drive retention.

Examples:

  • Gmail: Daily use (Passes)
  • Google Search: Multiple times per day (Passes)
  • Google Glass: Weekly use at best (Fails)

Result: Simple screening question killed ideas before $100M+ investment.

Your idea screening action plan

Step 1: define your screening criteria (1 hour)

Choose 5-7 specific criteria that matter most for your company, based on your company goals and market needs:

  • Must-have (go/no-go) criteria
  • Weighted scoring criteria
  • Decision thresholds

Document and share with team.

Step 2: gather ideas (ongoing)

Collect product ideas from:

  • Customer requests
  • Team suggestions
  • Competitive analysis
  • Market trends
  • Input from multiple stakeholders to ensure a diverse idea pool

Keep in idea backlog.

Step 3: monthly screening session (2-4 hours)

Agenda:

  • 15 min: Review new ideas
  • 60 min: Initial screening (kill 60-70%)
  • 90 min: Detailed scoring of survivors
  • 30 min: Prioritize top 3-5 ideas

Output: Ranked list of screened ideas ready for further evaluation or development

Step 4: quarterly strategy review (4 hours)

Agenda:

  • Review last quarter’s screening decisions
  • Update scoring criteria based on learnings and recent market shifts
  • Revisit killed ideas (market changes?)
  • Confirm next quarter’s build priorities

Output: Updated roadmap

Conclusion: kill bad ideas fast

The best product teams aren't prolific builders; they are ruthless screeners.

Apple: Kills 90% of ideas before prototyping
Amazon: Requires press release before building
Google: Uses toothbrush test to filter

The worst teams: Build everything and hope something works

The math:

  • 100 ideas generated
  • 80 killed in initial screening (15 min each = 20 hours)
  • 15 killed in detailed screening (2 hours each = 30 hours)
  • 5 advanced to concept development

Total screening time: 50 hours
Development time saved: 2,000+ hours (by not building 95 bad ideas)
ROI: 40x time savings

Your screening checklist:

- Must-have criteria defined
- Weighted scoring matrix created
- Monthly screening sessions scheduled
- Kill criteria applied ruthlessly
- Decision documented with rationale

The ideas you kill matter as much as the ideas you build. Screen fast, screen often, and only build winners.

Ready to systematize your idea screening?

CleverX helps product teams screen ideas efficiently with built-in frameworks, scoring matrices, and decision tracking. Screen faster and build smarter.

👉 Start your free trial | Book a demo | Download screening templates

Ready to act on your research goals?

If you’re a researcher, run your next study with CleverX

Access identity-verified professionals for surveys, interviews, and usability tests. No waiting. No guesswork. Just real B2B insights - fast.

Book a demo
If you’re a professional, get paid for your expertise

Join paid research studies across product, UX, tech, and marketing. Flexible, remote, and designed for working professionals.

Sign up as an expert