Drop a video or image. The tool detects fingerprints from ChatGPT, Gemini, Nano Banana, Midjourney, Sora, Runway, Higgsfield, Stable Diffusion, ComfyUI, Flux, Ideogram, Adobe Firefly, Meta AI, Grok, and 50+ other AI tools — then strips them and injects clean iPhone metadata so Instagram's classifier reads the file as a normal phone upload. Nothing else changes: pixel and audio data are byte-for-byte identical to your upload, and resolution, frame rate, and bitrate are untouched. Videos are stream-copied with -c copy, never re-encoded.
The actual file format is read from the first bytes — not the extension. ChatGPT names its JPEGs .png on purpose; this tool catches that.
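The magic-byte check above can be sketched in a few lines. This is an illustrative Python sketch of the idea, not the tool's actual (in-browser) code; the signatures shown are the standard ones from the JPEG, PNG, and ISO BMFF specs.

```python
# Identify the real container from the first bytes, not the extension.
MAGIC = {
    b"\xff\xd8\xff": "jpeg",        # JPEG SOI marker
    b"\x89PNG\r\n\x1a\n": "png",    # PNG signature
}

def sniff(data: bytes) -> str:
    # MP4/MOV: an 'ftyp' box type sits at offset 4, after a 4-byte size field
    if len(data) >= 12 and data[4:8] == b"ftyp":
        return "mp4"
    for sig, kind in MAGIC.items():
        if data.startswith(sig):
            return kind
    return "unknown"

# A ChatGPT download named image.png may actually begin with JPEG bytes:
sniff(b"\xff\xd8\xff\xe0" + b"\x00" * 16)   # → "jpeg"
```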
Every segment, chunk, and atom is enumerated. Known AI fingerprints (OpenAI ICC profile, C2PA manifests, Higgsfield XMP tags, etc.) are matched by hash and content.
Every non-essential segment is removed: JFIF markers, generic ICC profiles, XMP packets, EXIF blocks, C2PA manifests, comment segments, AI provenance tags.
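For JPEGs, the enumerate-and-strip step amounts to walking the marker segments and dropping the metadata-bearing ones. A minimal sketch, assuming a well-formed JPEG — again not the tool's actual code, just the technique; the marker values are from the JPEG spec (APP0–APP15 carry JFIF/EXIF/XMP/ICC, 0xFE is COM):

```python
STRIP = set(range(0xE0, 0xF0)) | {0xFE}   # APP0–APP15 and COM markers

def strip_jpeg_metadata(data: bytes) -> bytes:
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    out, i = bytearray(b"\xff\xd8"), 2
    while i < len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:             # SOS: entropy-coded pixel data follows,
            out += data[i:]            # copy the rest verbatim — no re-encode
            break
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker not in STRIP:        # keep DQT, SOF, DHT, etc.
            out += data[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Because the scan stops at SOS and copies everything after it untouched, the compressed pixel data is never decoded — which is why the output stays pixel-identical.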
Fresh iPhone metadata is then written: Make: Apple, a randomized model (iPhone 13–16 Pro range), a matching iOS version, lens model, and a realistic DateTimeOriginal. The file is renamed to IMG_XXXX.
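The injected fields could look roughly like this. The field names are standard EXIF tags (real iPhones do record the iOS version in the Software tag), but the specific model list, version range, and lens string here are illustrative assumptions, not the tool's actual values:

```python
import random
from datetime import datetime, timedelta

MODELS = ["iPhone 13 Pro", "iPhone 14 Pro", "iPhone 15 Pro", "iPhone 16 Pro"]

def fake_iphone_exif(now: datetime) -> dict:
    # Backdate the "shot" time a little so it reads as an existing photo
    shot = now - timedelta(days=random.randint(1, 30),
                           minutes=random.randint(0, 1440))
    return {
        "Make": "Apple",
        "Model": random.choice(MODELS),
        "Software": f"{random.randint(16, 18)}.{random.randint(0, 6)}",  # iOS version
        "LensModel": "iPhone back triple camera 6.86mm f/1.78",          # illustrative
        "DateTimeOriginal": shot.strftime("%Y:%m:%d %H:%M:%S"),
        "FileName": f"IMG_{random.randint(1000, 9999)}.jpg",
    }
```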
Instagram is deactivating creator, business, and personal accounts at scale — they say it's to clear out bots and "AI spam." A large signal in that classification is the metadata embedded in the file: if your upload says Software: Higgsfield, contains a C2PA manifest, or carries OpenAI's signature ICC profile, the algorithm has already decided.
This tool removes those signals and replaces them with the metadata signature of a normal iPhone upload. It does not edit your image or video — the pixel and audio data are byte-identical, and frame rate and resolution are unchanged. Only the metadata changes.
Everything runs in your browser. Files never leave your device. There are no servers, no logs, no uploads to anyone.
No. Videos are stream-copied — no re-encoding. The output is the same codec, bitrate, frame rate, and resolution as the input. Only the metadata box is rewritten.
JPEG and PNG pixel data is not re-encoded — only the metadata segments are rebuilt. Output is pixel-identical to input.
Yes. Everything runs in your browser. No file is ever sent to any server. You can disconnect your internet after the page loads and the tool still works.
The video engine (FFmpeg) loads on first use — about 30 MB, cached after that. Every subsequent video processes in 1–3 seconds.
OpenAI / ChatGPT (DALL-E, GPT Image, Sora), Google (Gemini, Nano Banana, Imagen, Veo, Vertex AI), Meta AI / Imagine, Microsoft (Bing Image Creator, Designer, Copilot), xAI Grok, Adobe Firefly, Stable Diffusion (A1111, Forge, ComfyUI, SDXL), Flux (Dev / Pro / Schnell), Midjourney, Ideogram, Recraft, Krea, Leonardo, Playground, Runway (Gen-3 / Gen-4), Higgsfield, Pika, Kling, Luma Dream Machine, MiniMax Hailuo, HeyGen, Synthesia, PixVerse, Haiper, Hunyuan, plus C2PA Content Credentials and IPTC trainedAlgorithmicMedia flags. Detection works by ICC fingerprint hashing, structural chunk pattern matching, and byte-level scanning of manifests.
No. SynthID (Google) and similar invisible watermarks are encoded into the pixel data itself, not into metadata. This tool only modifies metadata — pixels are never touched; that's the no-quality-loss guarantee. For Google AI images, stripping metadata helps but is not a complete defense: screenshotting or re-compressing can weaken some pixel watermarks, but SynthID is specifically designed to survive screenshots, crops, and re-compression, so don't count on a screenshot defeating it.
Not yet — coming soon. For now, convert HEIC to JPEG on your phone (Settings → Camera → Formats → Most Compatible) and re-export.