
Production & D-Wave

Real Hardware, Benchmarks & Honest Limitations

From Simulation to Reality

Everything you've built so far runs on classical hardware — the simulated annealing solver uses thermal fluctuations, not quantum effects. In this module, you'll bridge the gap to real quantum hardware and honestly assess what that means.

D-Wave Quantum Annealers

D-Wave Systems builds the only commercially available quantum annealers. Their current processor, Advantage, has:

  • ~5000 qubits arranged in a Pegasus graph topology
  • ~35,000 couplers connecting nearby qubits
  • ~20 microsecond anneal time (the actual quantum computation)
  • ~10 millisecond total per sample (including readout and overhead)

    The Embedding Problem

    This is the critical constraint that textbooks often gloss over. D-Wave's qubits are not fully connected — each qubit connects to ~15 neighbors in the Pegasus graph. Your QUBO might require connections between any pair of variables.

    Minor embedding maps logical variables to chains of physical qubits, where each chain acts as one logical variable. A 30-variable fully-connected QUBO might need 100+ physical qubits after embedding.

    This means:

  • Your effective problem size is much smaller than 5000
  • Longer chains are less reliable (the chain might break)
  • Embedding overhead adds noise to the solution
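For intuition, the overhead above can be sketched as a back-of-the-envelope estimator. The chain-length formula here is an assumption on my part (roughly ceil(n/12) + 1 physical qubits per chain, an approximation sometimes quoted for clique embeddings on Pegasus), so treat the numbers as illustrative, not exact:

```typescript
// Rough estimate of physical qubits needed to embed a fully-connected
// n-variable QUBO on a Pegasus-topology annealer.
// ASSUMPTION: chain length ~ ceil(n / 12) + 1, an approximation sometimes
// quoted for clique embeddings on Pegasus. Illustrative only.
function estimatePhysicalQubits(n: number): number {
  const chainLength = Math.ceil(n / 12) + 1;
  return n * chainLength;
}

console.log(estimatePhysicalQubits(30)); // 30 logical variables -> ~120 physical qubits
console.log(estimatePhysicalQubits(100)); // the overhead grows quickly with n
```

Under this rough model, a 30-variable clique already needs on the order of 120 physical qubits, matching the "100+" figure above.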

    D-Wave Leap

    D-Wave offers free access through Leap:

  • 1 minute of QPU time per month (free tier)
  • That's roughly 6000 samples (at ~10ms each)
  • Enough for experimentation, not production workloads
  • Sign up at https://cloud.dwavesys.com/leap/

    Integration Approach

    D-Wave's Ocean SDK is Python-based. From TypeScript, you have two options:

  • Python bridge: Spawn a Python subprocess that submits the QUBO and returns results
  • REST API: D-Wave's SAPI (Solver API) accepts HTTP requests directly

    The QUBO format is the same — you're just changing the backend from your local SA solver to D-Wave's QPU.
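Either option consumes the same serialized coefficients. A minimal sketch of that shared step, assuming a sparse nested-Map QUBO on the TypeScript side; the `solve_qubo.py` script name is hypothetical, standing in for a small Ocean program that reads the JSON, samples, and prints results:

```typescript
import { spawnSync } from "node:child_process";

// A QUBO as a sparse upper-triangular coefficient map: Q[i][j] with i <= j.
type Qubo = Map<number, Map<number, number>>;

// Flatten to a { "i,j": coefficient } object that a Python-side script
// can load with json.loads and hand to any Ocean sampler.
function quboToJson(q: Qubo): Record<string, number> {
  const out: Record<string, number> = {};
  for (const [i, row] of q) {
    for (const [j, coeff] of row) {
      out[`${i},${j}`] = coeff;
    }
  }
  return out;
}

// Python-bridge sketch (script name is hypothetical): pipe the QUBO in,
// read the sampler's results back as JSON on stdout.
function sampleViaPythonBridge(q: Qubo): unknown {
  const proc = spawnSync("python3", ["solve_qubo.py"], {
    input: JSON.stringify(quboToJson(q)),
    encoding: "utf8",
  });
  return JSON.parse(proc.stdout);
}
```

Swapping the subprocess call for an HTTP POST to SAPI changes only `sampleViaPythonBridge`; the serialization is identical.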

    Rigorous Benchmarking

    A portfolio-worthy project needs benchmarks that answer: "When does quantum optimization help?"

    What to Measure

  • Solution quality: Energy of the best solution found
  • Consistency: Standard deviation of energy across runs
  • Time to solution: Wall-clock time to find a solution of given quality
  • Scaling: How do metrics change as problem size increases?
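The bookkeeping for the first three metrics can be sketched as follows, assuming each run of a method returns the best energy it found (the record shape and names here are my own):

```typescript
// One solver run: the best energy it found and how long it took.
interface RunResult {
  energy: number;
  wallClockMs: number;
}

// Summarize solution quality and consistency across repeated runs.
function summarize(runs: RunResult[]) {
  const energies = runs.map((r) => r.energy);
  const mean = energies.reduce((a, b) => a + b, 0) / energies.length;
  const variance =
    energies.reduce((a, e) => a + (e - mean) ** 2, 0) / energies.length;
  return {
    bestEnergy: Math.min(...energies), // solution quality
    meanEnergy: mean,
    stdDev: Math.sqrt(variance), // consistency across runs
    totalMs: runs.reduce((a, r) => a + r.wallClockMs, 0), // time to solution
  };
}
```

Running `summarize` per method per problem size gives you the rows of the benchmark table directly; scaling falls out of comparing rows across sizes.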

    Problem Sizes to Test

  • N = 10: Brute force is instant. All methods should find optimal.
  • N = 20: Brute force takes ~1 second. SA should match.
  • N = 30: Brute force is slow (~minutes). SA and greedy diverge.
  • N = 50: Brute force is infeasible. Only heuristics remain.
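For the small sizes, the brute-force baseline is a simple bitmask loop. A sketch, assuming the dense upper-triangular matrix form of the QUBO (energy = xᵀQx over binary x):

```typescript
// Exhaustively evaluate x^T Q x over all 2^n bitstrings (Q upper-triangular).
// Only viable for small n: the loop doubles with every added variable.
function bruteForce(Q: number[][]): { bits: number; energy: number } {
  const n = Q.length;
  let best = { bits: 0, energy: 0 }; // all-zeros assignment has energy 0
  for (let bits = 1; bits < 1 << n; bits++) {
    let energy = 0;
    for (let i = 0; i < n; i++) {
      if (!(bits & (1 << i))) continue;
      for (let j = i; j < n; j++) {
        // Q[i][i] is the linear term; Q[i][j], i < j, the interaction.
        if (bits & (1 << j)) energy += Q[i][j];
      }
    }
    if (energy < best.energy) best = { bits, energy };
  }
  return best;
}
```

This is the ground truth every heuristic gets scored against at N ≤ 20.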

    Methods to Compare

  • Brute force (N ≤ 20): The ground truth. Guaranteed optimal.
  • Random selection: No intelligence. The floor.
  • Greedy: Local intelligence. Fast, decent, can't handle interactions.
  • Simulated annealing: Global intelligence. Slow but thorough.
  • D-Wave (if available): Quantum intelligence. Fast per sample, but noisy.
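To make the greedy baseline's blind spot concrete: it turns on whichever variable most reduces the energy right now and stops when no single addition helps, so it misses solutions that only pay off through interactions. A sketch against the same dense-matrix QUBO (function names are my own):

```typescript
// Greedy QUBO minimization: repeatedly add the variable with the best
// immediate energy decrease; stop when no single addition improves things.
// Blind to variable combinations that only pay off together.
function greedy(Q: number[][]): { selected: Set<number>; energy: number } {
  const n = Q.length;
  const selected = new Set<number>();
  let energy = 0;
  for (;;) {
    let bestVar = -1;
    let bestDelta = 0;
    for (let v = 0; v < n; v++) {
      if (selected.has(v)) continue;
      // Marginal cost of adding v: its linear term plus couplings
      // to everything already selected (Q is upper-triangular).
      let delta = Q[v][v];
      for (const u of selected) {
        delta += u < v ? Q[u][v] : Q[v][u];
      }
      if (delta < bestDelta) {
        bestDelta = delta;
        bestVar = v;
      }
    }
    if (bestVar === -1) break; // no improving single move left
    selected.add(bestVar);
    energy += bestDelta;
  }
  return { selected, energy };
}
```

On a QUBO where every single variable has a positive linear term but a pair shares a large negative coupler, greedy stops at the empty set while SA (or brute force) finds the paired solution. That gap is exactly what the "strong variable interactions" benchmarks should expose.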

    Expected Results (Being Honest)

    At the scales in this project (N = 30-50):

  • SA will likely match or beat D-Wave due to embedding overhead
  • Both will beat greedy on problems with strong variable interactions
  • The gap between SA and greedy grows with problem size and interaction strength
  • D-Wave's advantage (if any) appears at N > 100 where SA struggles with rugged landscapes

    Honest Limitations

    This is the most important section of your portfolio piece. Intellectual honesty about limitations signals maturity.

    No Proven Quantum Advantage (at this scale)

    For problems under ~100 variables, well-tuned classical heuristics (SA, tabu search, genetic algorithms) are competitive with or better than current quantum hardware. The overhead of minor embedding and quantum noise offsets any tunneling advantage.

    QUBO Formulation is Lossy

    Encoding feature selection as MI + correlation + cardinality is an approximation. It doesn't capture:

  • Non-linear feature interactions
  • Feature-target relationships beyond pairwise MI
  • The specific classifier that will use the selected features

    Hardware is Noisy

    Current quantum annealers have:

  • Limited qubit connectivity (requires embedding)
  • Analog control errors (the energy landscape isn't exactly what you specified)
  • Thermal noise (the system isn't perfectly quantum — it partially thermalizes)

    The Framework Has Value

    Despite these limitations, the QUBO framework itself is valuable:

  • It's a principled way to think about combinatorial optimization
  • It separates problem formulation from solution method
  • It's future-proof — as quantum hardware improves, the same QUBOs run faster
  • It generalizes — feature selection, graph partition, scheduling, routing all use the same framework

    Portfolio Packaging

    A strong portfolio piece includes:

  • README — Problem statement, approach, results, limitations (all in ~500 words)
  • Dashboard screenshots — Show the interactive tool in action
  • Benchmark table — Methods × metrics at multiple problem sizes
  • "What I Learned" section — The non-obvious insights from doing this work
  • Code quality — Clean TypeScript, meaningful variable names, comments where the math is non-trivial

    The most impressive thing you can show an interviewer isn't "I used quantum computing" — it's "I understood when it helps and when it doesn't."

    This is chapter 6 of Quantum Optimization for AI.
