AI retouching and fully AI-generated images differ enormously in how hard they are to detect. Purely generated images basically can't escape detection. But retouching is a half-manual, half-AI job, and whether it gets caught is much harder to say.
Recently I built a P-image (photoshopped-image) detector myself; it took about ten minutes to put together using Vibe. To be honest, the results surprised me—I ran both normal images and those controversial ones through it, and the accuracy was quite impressive. Of course, the prerequisite is that you connect your own API; otherwise it won't work.
After several rounds of testing, I found that this thing is quite good at identifying obviously fake images, and it can occasionally catch flaws in ambiguous cases as well. The technical barrier isn't high; the key is that the training data needs to be sufficiently diverse.
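The post doesn't show any code, but a minimal sketch of this kind of "connect your own API" detector might look like the following. Assumptions (not from the original post): it uses OpenAI's vision-capable chat API with gpt-4o purely as an illustrative backend, and the prompt is my own; the author's actual model, prompt, and tooling are unknown.

```python
import base64
import sys

from openai import OpenAI  # assumed backend; the post doesn't say which API was used

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative forensics prompt; the original detector's prompt is not public.
PROMPT = (
    "You are an image-forensics assistant. Inspect this photo and answer:\n"
    "1. Is it fully AI-generated, AI-retouched, or likely unedited?\n"
    "2. List the specific artifacts supporting your verdict "
    "(lighting inconsistencies, warped edges, texture smearing, etc.).\n"
    "Reply with a one-line verdict followed by a short bullet list."
)


def detect(image_path: str) -> str:
    """Send one local image to a vision-capable model and return its verdict."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model would do; this choice is an example
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": PROMPT},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    # Usage: python detect.py photo.jpg
    print(detect(sys.argv[1]))
```

This relies entirely on the hosted model's judgment rather than a locally trained classifier, which fits the "ten minutes to put together, but you must supply your own API" description.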