A recent all-hands meeting at the General Services Administration (GSA) turned contentious as federal workers voiced strong opposition to the agency's artificial intelligence (AI) plans. Leaked chats obtained by WIRED reveal a growing disconnect between GSA leadership, appointed during the Trump administration, and staff over the direction and implementation of AI within the agency. The meeting, intended to showcase AI's potential benefits, quickly devolved into a grilling session, with employees demanding concrete answers about AI's impact on their jobs, data privacy, and the GSA's overall mission. Staff expressed frustration with what they saw as a lack of transparency and a top-down approach to AI adoption. The sentiment was clear: they wanted more than a demo; they wanted substantive answers.

Staff Concerns and Demands

The primary concern raised by GSA employees was job security. With AI automation looming, workers feared layoffs or displacement, and they questioned whether the agency had a plan to retrain or reskill employees whose roles might be affected. Concerns were also raised about the ethical implications of using AI, particularly data bias and algorithmic accountability; employees demanded assurances that AI systems would be fair, transparent, and would not perpetuate existing inequalities.

Another key point of contention was data privacy. Employees were skeptical about the security of sensitive government data processed by AI systems, and they questioned whether adequate safeguards were in place to prevent breaches and ensure compliance with privacy regulations. The lack of clear answers from the Trump appointee fueled further distrust and resentment.

The Fallout and Future Implications

The heated all-hands meeting highlights the challenges of implementing AI in government agencies.
It underscores the importance of engaging employees in the decision-making process and addressing their concerns proactively. A successful AI strategy requires not only technological expertise but also a commitment to transparency, ethical considerations, and workforce development. The incident at the GSA serves as a cautionary tale for other government agencies considering AI adoption. It demonstrates that simply showcasing the potential benefits of AI is not enough. Agencies must also address the legitimate concerns of their employees and ensure that AI is implemented in a responsible and equitable manner. Failure to do so could lead to resistance, decreased morale, and ultimately, the failure of AI initiatives. Moving forward, the GSA and other agencies must prioritize open communication, employee training, and ethical frameworks to ensure that AI serves the public good and empowers the workforce, rather than replacing it.