Google's new AI model, Gemini 2.0 Flash, is making waves, but not all for good reasons. It's remarkably good at generating and editing images, but users have discovered it can also remove watermarks. This has sparked controversy, especially among stock image companies like Getty Images, which rely on watermarks to protect their work.

Think of it like this: you take a photo and put your name on it. Someone uses Gemini to erase your name and claims the photo as their own. That's essentially what's happening with stock photos, and it matters because removing a watermark without the owner's permission violates U.S. copyright law.

https://x.com/tanayj/status/1901362361476296858

While Gemini's image generation feature is labeled "experimental," it's readily available to developers. The AI is surprisingly good at removing watermarks and even filling in the image content underneath. Other AI tools can do this too, but Gemini appears to be one of the most capable, and it's free, which makes it easy for people to misuse.

Some people might ask, "Why is this a problem if it's just experimental?" Even though it's not officially a finished product, it's still accessible and being used, which raises concerns about how Google will control it going forward. Other AI models, like Anthropic's Claude, refuse to remove watermarks, calling the practice unethical and potentially illegal. This highlights the different approaches companies are taking to AI responsibility.

Another common question is, "Can't watermarks just be added back?" Yes, but by then the damage may already be done. Once an image circulates without a watermark, it becomes difficult to control its spread or enforce copyright. It's like trying to put toothpaste back in the tube.

This situation raises important questions about the future of AI and copyright. As AI gets better at manipulating images, protecting creative work will only become more challenging.
Google hasn't officially commented on the issue yet, but it will likely face pressure to add safeguards to Gemini. That could mean restricting access or training the model to refuse watermark-removal requests. How this plays out could set a precedent for how AI image editing tools are developed and regulated in the future.