A peculiar and somewhat unsettling trend has emerged among ChatGPT users recently. The popular AI chatbot has reportedly begun addressing some users by their first names during conversations, particularly while reasoning through problems. This marks a departure from its previous default behavior, and notably, several individuals claim it is happening even though they never provided their names to the chatbot or instructed it on how to address them. The unexpected familiarity has sparked discussion and concern among AI practitioners and casual users alike.

Reactions to this unprompted personalization have been decidedly mixed, skewing wary or negative. High-profile users like software developer Simon Willison have publicly labeled the feature "creepy and unnecessary," while another developer, Nick Dobos, said flatly that he "hated it." A scan of social platforms like X reveals numerous users expressing confusion and apprehension about ChatGPT suddenly operating on a first-name basis. One user likened the experience to being singled out by a teacher: "It's like a teacher keeps calling my name, LOL... Yeah, I don't like it." That sentiment captures the feeling of unwelcome attention many seem to be experiencing.

The precise origin and timing of the change remain unclear. Speculation initially pointed to ChatGPT's enhanced "memory" feature, designed to personalize interactions by recalling information from previous chats. That explanation seems incomplete, however, as multiple users report encountering the name usage despite having explicitly disabled the memory function and related personalization settings in their accounts. The discrepancy adds another layer of mystery, suggesting the cause might be a different, undocumented feature update, a bug, or a broader shift in how the AI handles user session data.
Compounding the confusion is the lack of official clarification from OpenAI, the organization behind ChatGPT. Despite inquiries, including those from media outlets like TechCrunch, the company has not explained why the chatbot is accessing and using user names in this manner, nor has it detailed how the information is being sourced, especially for users who believe they never shared it. The silence leaves users guessing about the underlying mechanisms and the potential privacy implications.

The episode underscores the delicate balance AI developers must strike between creating personalized, helpful interactions and respecting user privacy boundaries. When personalization feels unexpected or occurs without clear consent or control, it can quickly cross the line into perceived intrusiveness and erode user trust. That users report being unable to reliably opt out by disabling known personalization features highlights the need for greater transparency and more robust user controls in AI systems. Until the reasons behind this behavior are clarified and users are given clear agency over such features, the unease surrounding ChatGPT's newfound familiarity is likely to persist, potentially denting confidence in the platform.