User Intent Prediction #2: How Does One Number Defeat a Neural Network?
I spent three weeks building a neural network. I used sequence model encoders. I used 32-dimensional embeddings. I used 44,000 training examples.
Result: 0.52 prediction accuracy. A coin flip.
Then I spent one hour writing one dumb formula:
```
motivation = (dwell_time + idle_gaps) / position
```
Result: 0.96 prediction accuracy.
Three weeks of deep learning lost to one hour of common sense.
The Neural Network Trap
The mythology: feed a neural network raw data, and it finds the patterns humans miss. The reality: neural networks optimize for shortcuts. They take the easiest path through the data, not the meaningful one.
I fed my model sequences of screens.
- “User visited Quiz → Weight → Plan.”
The model learned to cluster users by where they went. It grouped all the “Quiz skippers” together. It grouped all the “Plan readers” together.
But it missed the only thing that mattered.
The Invisible Signal
Weight-loss funnels aren’t clean sequences. They’re hesitation patterns.
I looked at two users who visited the exact same screens.
User A:
- Stares at paywall: 30 seconds.
- Leaves, comes back.
- Buys.
User B:
- Skips through paywall: 2 seconds.
- Never returns.
- Drops out.
One is deciding. One is escaping.
Some people stare. Some rage-tap. Some drift off and return.
To the neural network, these users were identical. Their screen sequence was [Paywall, Exit].
To a human, they were opposites. One was thinking. One was bouncing.
The signal wasn’t in the screens. It was in the hesitation.
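To make the collapse concrete, here is a toy sketch (my own illustration, not the author's actual feature pipeline) of what a sequence-only encoding sees for both users. The event shape and field names are assumptions:

```javascript
// Sequence-only encoding: keep screen IDs, throw away all timing.
function screenSequence(events) {
  return events.map(e => e.screen).join(" → ");
}

// Both users hit the paywall and leave; only the timing differs.
const userA = [
  { screen: "Paywall", seconds: 30 }, // stares, deciding
  { screen: "Exit", seconds: 0 },
];
const userB = [
  { screen: "Paywall", seconds: 2 }, // skims, escaping
  { screen: "Exit", seconds: 0 },
];

console.log(screenSequence(userA)); // "Paywall → Exit"
console.log(screenSequence(userB)); // "Paywall → Exit"
// Identical inputs to the model. Opposite intents.
```

Once the `seconds` field is dropped, no amount of model capacity can recover the difference.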
The One-Number Solution
I wasn’t trying to solve ML. I was trying to sell a product. I didn’t need a neural network to find this. I just needed to measure “hesitation.”
So I wrote a 1-line formula:
```javascript
const motivation = (total_dwell_time + total_idle_gaps) / current_position;
```
- Dwell Time: How long you stare.
- Idle Gaps: How long you pause (thinking).
- Position: How deep you are in the funnel.
High score = Deep in the funnel, spending lots of time thinking. Low score = Rushing through or quitting early.
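As a sketch of how the score falls out of raw events (the event shape, field names, and the two example users are my assumptions, not the author's production code):

```javascript
// Hypothetical event shape: { screen, enterMs, exitMs }.
// Idle gap = time between leaving one screen and entering the next.
function motivationScore(events) {
  if (events.length === 0) return 0;
  let dwell = 0;
  let idle = 0;
  for (let i = 0; i < events.length; i++) {
    dwell += events[i].exitMs - events[i].enterMs; // time staring
    if (i > 0) idle += events[i].enterMs - events[i - 1].exitMs; // time pausing
  }
  const position = events.length; // depth in the funnel
  return (dwell + idle) / position;
}

// User A: stares at the paywall for 30s, leaves, comes back.
const userA = [
  { screen: "Quiz", enterMs: 0, exitMs: 5000 },
  { screen: "Paywall", enterMs: 5000, exitMs: 35000 },
  { screen: "Paywall", enterMs: 95000, exitMs: 110000 },
];

// User B: skips through the paywall in 2s and never returns.
const userB = [
  { screen: "Quiz", enterMs: 0, exitMs: 5000 },
  { screen: "Paywall", enterMs: 5000, exitMs: 7000 },
];

console.log(motivationScore(userA) > motivationScore(userB)); // true
```

The long absence before User A's return shows up as a huge idle gap, so hesitation raises the score instead of disappearing from the data.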
The Showdown
| Feature | Complexity | Prediction Accuracy |
|---|---|---|
| Neural Network | 160,000 parameters | 0.52 |
| Motivation Score | 1 line of code | 0.96 |
The neural network had 160,000 parameters. The formula had three inputs. And the formula still won by 44 points of accuracy.
The Lesson
Deep learning is lazy. Founders can’t be.
I expected the model to “figure it out.” But I gave it the wrong data (screens instead of time).
The model didn’t fail. I failed. I assumed that “more data” meant “more signal.” But 44,000 sequences of screen IDs contain zero information about hesitation.
Models don’t miss intent. Data does. And founders pay for that mistake.
This connects to why 99.5% accuracy was useless—high quality on the wrong input is still garbage.