User Intent Prediction #20: Same Pattern. Different People.


I expected this to fail.

Different people.

Different context.

Different problems.

Same funnel.

So I did the simplest test that usually kills models.

Train on Group A.

Test on Group B.

Then swap.

The result

It didn’t collapse.

It barely moved.

Cross-group AUC changed by less than 1% in both directions:

  • Group A → Group B: −0.93% AUC
  • Group B → Group A: +0.94% AUC

That’s not “it kind of works.”

That’s “the pattern survives the audience.”
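If you want to reproduce the shape of this test, here is a minimal sketch. Everything in it is a stand-in: synthetic data for the two groups, a gradient-boosted classifier for whatever model was actually used, and the "change" measured against each group's own holdout baseline. It shows the swap, not the original pipeline.

```python
# Minimal sketch of the swap test: train on one group, score the other, then swap.
# Synthetic data and a generic classifier stand in for the real setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data: one pool split into "Group A" and "Group B".
X, y = make_classification(n_samples=4000, n_features=12, random_state=0)
X_a, X_b, y_a, y_b = train_test_split(X, y, test_size=0.5, random_state=0)

def holdout_auc(X, y):
    """Within-group baseline: train/test split inside a single group."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

def cross_auc(X_train, y_train, X_test, y_test):
    """Cross-group score: fit on one group, evaluate on the other."""
    model = GradientBoostingClassifier().fit(X_train, y_train)
    return roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

auc_a, auc_b = holdout_auc(X_a, y_a), holdout_auc(X_b, y_b)
print(f"A -> B: {(cross_auc(X_a, y_a, X_b, y_b) - auc_b) / auc_b:+.2%} vs B's own baseline")
print(f"B -> A: {(cross_auc(X_b, y_b, X_a, y_a) - auc_a) / auc_a:+.2%} vs A's own baseline")
```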

Why this surprised me

Most funnel models are fragile.

They learn the easy shortcuts.

They look brilliant inside the same population.

Then you shift the population and the model turns into a random number generator.

This time it didn’t.

So the model wasn’t leaning on cohort quirks.

It was leaning on behavior.

What generalized

Not “who they are.”

How they move.

  • speed
  • friction
  • hesitation
  • loops
  • drop-off shape

Same funnel creates the same failure modes.

Different people still get stuck in the same places.

And that creates a stable signal.
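A rough sketch of what features like these can look like in code. The event-log columns (user_id, step, timestamp) and the exact definitions are my assumptions for illustration, not the original feature set.

```python
# Hedged sketch: per-user movement features from a funnel event log with
# columns (user_id, step, timestamp). Definitions are illustrative only.
import pandas as pd

def behavior_features(events: pd.DataFrame) -> pd.DataFrame:
    """Speed, hesitation, loops, and drop-off shape per user."""
    events = events.sort_values(["user_id", "timestamp"])
    # Seconds between consecutive events for the same user.
    gaps = events.groupby("user_id")["timestamp"].diff().dt.total_seconds()
    events = events.assign(gap_s=gaps)

    grouped = events.groupby("user_id")
    return pd.DataFrame({
        # speed: median time between steps
        "median_step_gap_s": grouped["gap_s"].median(),
        # hesitation: longest single pause in the session
        "max_pause_s": grouped["gap_s"].max(),
        # loops: repeated visits to the same step
        "repeat_steps": grouped["step"].apply(lambda s: s.duplicated().sum()),
        # drop-off shape: deepest step reached (assumes steps are ordered)
        "deepest_step": grouped["step"].max(),
        # friction proxy: events relative to distinct steps reached
        "events_per_step": grouped["step"].size() / grouped["step"].nunique(),
    })
```

Note what is absent: nothing here says who the user is. Every column describes how they moved.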

The uncomfortable conclusion

The funnel is more predictive than the person.

That’s not a compliment.

It means the system shapes behavior hard enough that intent becomes legible.

The model isn’t reading minds.

It’s reading friction.

Why this mattered

Two reasons:

  1. It reduced complexity.

One model can generalize without bespoke per-cohort tuning.

  2. It made the signal feel real.

If it survives an audience swap, it’s less likely to be a dataset trick.

The lesson

I expected “different people” to require “different models.”

The data disagreed.

Same funnel.

Same friction.

Same pattern.

Different people.

And once that was proven, the next question was brutal:

If the signal is stable… why did the fancy stuff lose?


Next: The Dumb One Won.