
When it comes to AI, the Square Root of Slop…is Slop.

One of the most common frustrations with AI is not getting what you wanted. For every time we get just what we wanted (or more), it seems like we also get a mess of nonsense or things we know are wrong.

And sometimes that happens because the job is literally impossible for the AI.

Other times, we didn’t write a good enough prompt. Going back to the drawing board, providing some examples, and giving more detailed specifics can help get a better result the next time.

Yet, sometimes none of that is enough, even if it *should* be possible.

“The square root of slop is slop.”

Some friends and I were talking about the state of AI and noticing how you can get plausible-sounding ideas and analysis, but when you dig in, you see they’re completely wrong.

Sometimes that’s hallucinated entries in a spreadsheet that looks great but doesn’t actually add up. Other times, it’s customer feedback analysis of thousands of entries, with a beautiful report and quotes, where you can’t actually confirm the underlying sources that led to the ranking of problems it offered. Or, it’s simply a report or newsletter draft that starts out sounding good before drifting off into classic AI-generated slop.

As my friend put it, “If AI is just math at scale, then scaling noise just gives you louder nonsense.” Or put more simply and more memorably, “the square root of slop is slop.”

The Square Root of Slop: The math problem that’s surprisingly difficult to solve.

While it’s easy to recognize slop when looking at AI-generated videos, images, and social media content, it’s harder to do so at work.

With AI integrated into so many of our workflows and processes, it’s easy to mistake the friendly, positive tone for authority and confirmation of a strong result.

That’s why you need to check AI’s work, and keep a human involved in many processes (sorry, AI job loss doomers…). Because while a quick proofread may help you tell where AI went off the rails on a blog post, it’s a lot harder to tell when it’s a deeper analysis of data you don’t know well.

And nowhere is this problem more acute today than in product roadmaps.

The Slop Product Problem: Why Your Feedback System Is Sabotaging Your Roadmap

If you have thousands of incomplete tickets in a backlog, or a giant, public feature voting list, it’s easy to think you have a ton of signal you can use to tell what to build next.

Just add AI and, voilà, you’ll have clear insights that no one else got around to analyzing.

Not so fast.

Recognize slop when you see it.

Not all customer or product feedback is created equal. Shallow tickets, unclear requests, partial thoughts from an email thread with sales, and other half-baked sources get ingested by AI all the time.

Yet what do you actually know from them?

As we’ve discussed in the past with the failings of feature voting apps, an upvote can mean *many* things:

"The number of upvotes does not mean every upvote wants the same thing.

Here are some of the reasons someone may upvote it:

1. “I want that feature exactly as described.”
2. “Well, this has 25 votes already, and it’s close enough, I’ll upvote that instead of posting mine.”
3. “Oh that looks interesting, I wouldn’t mind that. Click.”
4. “I think my coworker wanted that…”
5. “I could have used that a few months ago” (and haven’t needed it since)
6. “That sounds cool.”

All those upvotes and you don’t really have a quantitative count of customer input. Something could have 250 upvotes, but it’s the least important item. Or they could want different functionality or features as part of it.


You don’t know though, because all you have is an upvote, not a conversation, or even a few sentences from each of those people."

I’m picking on feature voting here, but the same problem happens with feedback dumped en masse by support, piling up in an overloaded product feedback inbox, and coming in from plenty of other places.

The bottom line: Before you run off and build based on some big, fancy AI analysis, make sure the underlying sources are clear, credible, and real.

The Illusion of Smart Analysis: The Gell-Mann Amnesia Effect of AI

The Gell-Mann Amnesia effect is an interesting look at how the human mind works and what we trust. It was famously described by best-selling author Michael Crichton as follows:

"Briefly stated, the Gell-Mann Amnesia effect is as follows. You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.

In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know."

This same effect is common in AI today.

If you have used AI for long, and ever done the work to check its output, you know it can be wrong. Really wrong.

Yet, it’s so easy to have that experience on something you’re knowledgeable about, and then come back moments later and trust the AI fully on the next prompt you give it.

AI analysis can break your product strategy.

Remember the simple lesson: Garbage in, garbage out.

AI analysis can’t create clarity where none exists; it simply creates the illusion of confidence with a polished, friendly tone, and a pretty output.

You have to start at the foundation.

What you put into the analysis matters more than anything else.

You can iterate all day to write the perfect prompt. You can prepare to deliver your results in the perfect format that your INTJ manager will love. You can generate the perfect visuals, or even a clickable prototype to go with it.

But if the sources that led to all those decisions, reports, and mockups were filled with slop, none of the rest of those efforts will matter. You’ll be heading down the wrong path based on false premises.

So how do you break free from this shaky house of cards? You fix your slop problem by turning your slop into signal.

3 Ways You Can Turn Slop into Signal

As G.I. Joe used to say, “Knowing is half the battle.”

If you know you have a slop problem, then you need to get to the root and fix it. While we’ll be talking about how this applies to inputs for your AI analysis of your customers and their feedback, this could just as easily apply to other sources of data in your company. Use your imagination (or your AI…).

Here are three approaches you can use to solve this in both the short and long term:

1) Filter Your Slop (The Quick Fix)

If you see you have a lot of noise mixed in with useful signal, then the best thing you can do is filter out the slop. What’s left then will be highly valuable insights and data that you can make better decisions with.

In the case of product feedback, that can mean creating filters to remove the most obvious culprits: shallow tickets, unclear requests, bare upvotes with no context, and stray fragments forwarded from sales or support threads.

It’s important to not rush through this process. If you set too high a bar, or have too many filters, you may remove quality data and sources, or even filter out so much it’s no longer representative of your larger customer base.

Yet if you’re just starting out, this is a great first step that makes the most of what you have.
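To make this concrete, here’s a minimal sketch of what a first-pass filter might look like in Python. Everything in it is an illustrative assumption rather than a prescription: the field names, the five-word threshold, and the idea that your feedback exports as a simple list of records. The point is the shape of the approach, not the exact rules.

```python
# A minimal slop filter, assuming feedback exported as a list of dicts.
# The field names ("text", "author", "source") and the thresholds are
# illustrative guesses, not rules from any particular tool.

def is_signal(entry: dict) -> bool:
    """Keep only entries with enough context to be worth analyzing."""
    text = (entry.get("text") or "").strip()
    if len(text.split()) < 5:            # shallow, one-line tickets
        return False
    if not entry.get("author"):          # no one to follow up with
        return False
    if entry.get("source") == "upvote":  # bare votes carry no context
        return False
    return True

feedback = [
    {"text": "make it faster", "author": None, "source": "widget"},
    {"text": ("Exporting the monthly report times out past 10k rows, "
              "so we rebuild it by hand in Excel every month."),
     "author": "ana@example.com", "source": "email"},
]

signal = [e for e in feedback if is_signal(e)]
print(f"Kept {len(signal)} of {len(feedback)} entries")  # Kept 1 of 2
```

Start loose, check what the filter throws away, and tighten from there, so you don’t silently discard real signal along with the slop.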

2) Convert Your Slop to Signal (The Retroactive Fix)

If you feel like you have way too much slop to just discard it all, then you have to move on to this next fix. Because if you can’t just discard your slop, you have to try to convert it to useful signal.

Going back and trying to clean up messy or incomplete data after the fact is not easy. In fact, it’s the bane of many data scientists. Yet if you want the best results and insights, you need to analyze data that is as clean and comprehensive as possible.

In the case of messy product feedback, it means going back to your incomplete data sources and trying to fill in the missing context. You can do that through tactics like merging records across your systems to recover account context, or following up with customers directly to ask what they meant, as sketched below.

None of this is easy to do. It’s often difficult to get systems to talk to each other, and have them line up in a way you can merge them. And customers often forget what they were thinking or doing in your product within a few days of the incident that prompted them to send in feedback.
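For illustration, here’s a rough sketch of that conversion step in Python: joining bare feedback entries against a CRM export to recover account context, and flagging whatever can’t be matched for human follow-up. The schemas and the email key are invented for this example; real systems rarely line up this cleanly.

```python
# A rough sketch of the retroactive fix, under invented schemas: join bare
# feedback entries against a CRM export keyed by email, and flag anything
# that can't be matched for human follow-up.

crm_by_email = {
    "ana@example.com": {"plan": "enterprise", "seats": 140},
}

def enrich(entry: dict) -> dict:
    """Attach account context; flag entries we couldn't match."""
    account = crm_by_email.get(entry.get("author") or "")
    return {
        **entry,
        "account": account,                 # who is actually asking
        "needs_followup": account is None,  # route to a human instead
    }

entries = [
    {"text": "export is slow", "author": "ana@example.com"},
    {"text": "make it faster", "author": None},
]
enriched = [enrich(e) for e in entries]
followups = [e for e in enriched if e["needs_followup"]]
print(f"{len(followups)} of {len(enriched)} entries still need follow-up")
```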

So while this is a worthwhile exercise to undertake, results can be spotty at best. You can spend a lot of time trying to convert your slop without yielding useful results.

That’s why there’s really only one way to permanently solve your slop problem.

3) Create Systems to Eliminate Slop (The Permanent Fix)

Even with our advances in AI, we haven’t invented time travel yet. That means there’s only so much you can do with existing, messy, slop-filled data.

Once you’ve done the best you can to filter out the worst stuff, and improve the rest, the real power comes from building systems to eliminate or prevent slop from being created in the future.

That means you’re establishing processes, routines, habits, and software that help you prevent slop at each source. You’re putting systems in place that get to the root cause of why you had slop in the first place, so that everything you do with that signal later sits on a strong foundation.

But how do you do that?

When you go back to the source and look at each cause of slop being added to your system, you’ll start to see ways to make headway with each of them. It’s unlikely the same solution applies to all of them, but if you treat them each as an opportunity to iterate and improve, the quality of signal throughout your system will rapidly improve.
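As one concrete pattern, here’s a small sketch of validating feedback at the point of intake, so half-baked entries get bounced back for more context instead of quietly piling up in the backlog. The required fields and rules here are assumptions for illustration; the real work is deciding, source by source, what context is non-negotiable.

```python
# An illustrative intake validator, assuming a simple submission form.
# The required fields and the word-count rule are invented for this example.

REQUIRED_FIELDS = ("text", "author", "goal")  # goal: "what were you trying to do?"

def validate_submission(form: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry is accepted."""
    problems = [f"missing {field}" for field in REQUIRED_FIELDS if not form.get(field)]
    if len((form.get("text") or "").split()) < 5:
        problems.append("description too short to act on")
    return problems

# A vague submission gets bounced back for more context rather than
# silently entering the backlog as slop.
print(validate_submission({"text": "make it faster", "author": "li@example.com"}))
# -> ['missing goal', 'description too short to act on']
```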

Conclusion: You Get Out What You Put In.

Slop in, slop out. No matter how fancy a company’s evals are, or how good an RLHF game it talks, none of that matters if what the model is evaluating and analyzing is incomplete, filled with junk, or messy.

It all starts with what you put in.

AI won’t save you if all you have is messy, unclear feedback from your customers.

Until you fix how you collect, follow up on, and understand your customers’ input, no analysis or aggregation tool can help you.

That’s why I’m building Product Arrow to help you. We start at the source to help you hear the voice of your customer clearly: from simple feedback widgets, to automated AI follow-up that gets to the root cause, and only then analysis to understand the trends and patterns of what your customers really need.

If you’re hungry for a better way to get feedback and want to better understand the voice and needs of your customers, we’re working with select product-centric companies to make Product Arrow the perfect solution for them.

If you’re looking for better ways to source, understand, and act on product feedback, get in touch, or sign up for a free call here and I’ll show you how we can help and answer your questions.
