The Engagement Trap: Are AI Chatbots Repeating Social Media's Mistakes?

Artificial intelligence, particularly the rise of sophisticated chatbots, promised a revolution in how we access information and interact with technology. We envisioned helpful assistants, tireless researchers, and creative partners. But according to Instagram co-founder Kevin Systrom, many AI companies are veering off course, prioritizing empty "engagement" over genuine utility. It's a warning bell that echoes the less savory aspects of the social media boom he helped create.

Systrom's critique, reported by TechCrunch, is pointed: AI companies are allegedly "juicing engagement" by designing chatbots that pester users with unnecessary follow-up questions. Instead of delivering concise, high-quality answers, these bots seem programmed to keep the conversation going, artificially inflating metrics like time spent and daily active users (DAUs). Sound familiar? It should.

Déjà Vu: The Social Media Playbook Reloaded

Systrom explicitly compares these tactics to those employed by social media platforms during their aggressive growth phases. We all know that playbook: infinite scroll, notification barrages, and algorithmic feeds designed to maximize eyeballs and ad revenue, often at the expense of user well-being or tangible value. He calls this focus on easily manipulated metrics "a force that's hurting us."

It's a compelling, if slightly uncomfortable, parallel. The very metrics that signal success to investors and stakeholders in the attention economy (engagement, session length, DAUs) might be actively detrimental to the core purpose of many AI tools. Do you really want to spend 15 minutes coaxing a simple answer out of a chatbot, punctuated by endless "Is there anything else I can help you with?" prompts, when a 30-second, direct response would suffice?

Systrom argues this isn't an accidental byproduct of developing technology; it's an intentional feature.
The goal, he suggests, isn't necessarily to be the most helpful AI, but the one that can show the most impressive engagement graph in its next pitch deck.

Why Utility Should Trump Time Spent

Think about the ideal interaction with a truly useful AI assistant. You ask a question, you get a clear, accurate, and relevant answer. Perhaps it anticipates a logical follow-up, but it doesn't badger you. It respects your time and delivers value efficiently. The "engagement-juicing" model flips this on its head:

- It Wastes Time: Instead of streamlining information access, it adds friction.
- It Can Be Annoying: Constant, unnecessary prompts feel less like helpfulness and more like digital nagging.
- It Erodes Trust: If a chatbot seems more interested in keeping you talking than providing the best answer, you begin to question its motives and reliability.
- It Obscures True Value: How helpful is an AI, really, if its success is measured by how long it can trap you in a conversation, rather than how effectively it solves your problem or answers your query?

This isn't just about user annoyance; it's about the fundamental direction of AI development. If the primary driver becomes mimicking the addictive, attention-grabbing loops of social media, we risk building tools that are less about augmenting human capability and more about capturing human attention for its own sake.

The Call for a Course Correction

Systrom's solution is straightforward: AI companies need to be "laser-focused" on providing high-quality answers and genuine utility. The goal shouldn't be moving metrics "in the easiest way possible," but rather striving for excellence in the core function: providing information, generating content, solving problems.

This requires a shift in mindset, potentially away from the venture capital-fueled obsession with exponential growth metrics borrowed from social media. It means prioritizing accuracy, efficiency, and user respect over session duration and interaction counts.
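What would it mean, concretely, to measure utility instead of engagement? Here is a minimal, purely hypothetical sketch: a scoring function that rewards concise, direct answers and penalizes the filler follow-ups Systrom describes. The function name, thresholds, and filler phrases are all illustrative assumptions, not any real product's evaluation method.

```python
# Hypothetical sketch: scoring a chatbot reply on utility rather than
# engagement. All names and thresholds here are illustrative assumptions.

FILLER_PROMPTS = (
    "is there anything else",
    "would you like to know more",
    "let me know if",
)

def utility_score(reply: str, max_words: int = 80) -> float:
    """Reward concise, direct answers; penalize engagement-bait follow-ups."""
    score = 1.0
    # Penalize padding beyond a reasonable answer length.
    if len(reply.split()) > max_words:
        score -= 0.5
    # Penalize boilerplate prompts whose only purpose is extending the session.
    lowered = reply.lower()
    score -= 0.25 * sum(phrase in lowered for phrase in FILLER_PROMPTS)
    return max(score, 0.0)

direct = "Paris is the capital of France."
padded = ("Paris is the capital of France. Is there anything else "
          "I can help you with? Would you like to know more about Paris?")

print(utility_score(direct))  # 1.0
print(utility_score(padded))  # 0.5 (two filler prompts detected)
```

The point of the sketch is the inversion of incentives: under a session-length metric, the padded reply "wins" because it invites more turns; under an answer-quality metric, the direct reply does.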
- For Developers: Focus on the quality of the output. Is the answer accurate? Is it concise? Is it truly helpful? Refine the AI's ability to understand intent and deliver precisely what's needed.
- For Users: Be mindful of how AI tools make you feel. Are they saving you time and effort, or are they subtly demanding more of your attention than necessary? Vote with your usage: favor tools that respect your time and deliver real value.
- For Investors: Look beyond superficial engagement metrics. Ask harder questions about the actual utility and long-term value proposition of the AI tools you fund.

Avoiding the Engagement Cancer

As one commentator on X put it, reacting to Systrom's warning, "Engagement is the cancer that will end the software industry." While perhaps hyperbolic, the sentiment resonates. When engagement becomes the only goal, divorced from genuine usefulness, software quality suffers and user experience degrades.

Kevin Systrom, having seen the power and pitfalls of engagement-driven platforms firsthand, offers a crucial perspective. AI holds immense potential, but if it falls into the same engagement traps that defined the last era of tech, we risk building tools that are more pestering than productive. The focus must return to utility, ensuring that these powerful technologies serve our needs, not just demand our attention.