Tim Berners-Lee, the inventor of the World Wide Web, recently posed a critical question about artificial intelligence (AI): "Who does AI work for?" The question, raised during a panel discussion at South by Southwest, highlights the need for transparency and accountability in AI systems as they become increasingly integrated into daily life, and it asks whose interests those systems ultimately serve: users or their creators.

The Challenge of AI Alignment

AI systems, particularly those developed by large corporations, may prioritize their creators' interests over users', creating potential conflicts of interest. An AI assistant, for example, might be designed to maximize sales rather than surface the best options for the user. Berners-Lee draws a parallel with professions such as medicine and law, where practitioners have a duty to act in their clients' best interests.

Historical Context: Lessons from the Early Web

The World Wide Web developed through collaboration among companies and researchers, who established open standards through bodies such as the World Wide Web Consortium (W3C). AI development, by contrast, is more competitive: there is no equivalent of the W3C setting shared standards, nor a unifying research institution like CERN in nuclear physics. Berners-Lee suggests that AI developers should form a collaborative organization to ensure AI benefits society as a whole.

Trust and Regulation in AI

Trust in AI systems is a significant concern, especially given the use of synthetic data and the need for regulation. The AI industry faces the challenge of balancing innovation with ethical considerations and regulatory oversight. Key questions include: How can AI systems be designed to prioritize user interests? What regulatory frameworks are needed to ensure AI serves the public good?

Future Directions and Solutions

Berners-Lee's work on the Solid project aims to give users control over their data, allowing them to decide how it is used.
By empowering users with data ownership, AI systems can be designed to operate more transparently and in users' best interests. Potential outcomes include greater transparency in AI decision-making, better alignment of AI goals with user needs, and stronger trust in AI systems through user control and accountability.

Berners-Lee's question underscores the need for AI systems to serve users' interests, drawing parallels with other professions and emphasizing the importance of collaboration and regulation. The future of AI depends on addressing these challenges so that AI benefits society as a whole.