Leica has implemented something in this vein with encrypted capture metadata embedded in the file, but there are a lot of baked-in vulnerabilities when you keep your validation record on the file itself.
Solution looking for a use case, most likely, especially if it requires addition of hardware to existing cameras. There are already lots of ways of fingerprinting camera + lens combinations and tying them to raw files + metadata, and lots of existing systems for automatically capturing originals and tagging derivatives of those originals. Those already have credence with the courts; it’d be an uphill battle to overcome that momentum without a hell of a value add… not sure a blockchain actually is that add.
You're right that existing fingerprinting methods (sensor noise patterns, lens artifacts) work for tying images to specific hardware, and courts do accept those. Where I see gaps:
1. Existing methods require forensic expertise - Sensor fingerprinting needs specialized analysis. Courts accept it, but it's not instantly verifiable by anyone. It's expensive and time-consuming for routine authentication, and difficult to automate.
2. Derivative tagging systems depend on trusted intermediaries - News organizations, stock photo agencies, etc. Works great until you need to verify images outside those systems. Independent and citizen journalists don't have access.
3. The deepfake problem is accelerating - Existing forensic methods struggle with AI-generated content. Detection is always playing catch-up with generation. Courts may need higher standards as manipulation gets easier.
The blockchain value-add I'm proposing:
- Instant binary verification (hash matches or doesn't - no expertise required)
- No trusted intermediary needed (public ledger anyone can check)
- Established at capture (before manipulation is possible)
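The verification flow those bullets describe can be sketched in a few lines. This is a toy model, not an implementation: the ledger is abstracted as an in-memory set of digests standing in for a public append-only chain, and the function names (`register_at_capture`, `verify`) are hypothetical.

```python
import hashlib

# Hypothetical ledger: in practice an append-only public chain;
# here just a set of hex digests for illustration.
ledger = set()

def register_at_capture(image_bytes: bytes) -> str:
    """Camera firmware would hash the raw capture and publish the digest."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    ledger.add(digest)
    return digest

def verify(image_bytes: bytes) -> bool:
    """Anyone can re-hash and check membership: binary result, no expertise needed."""
    return hashlib.sha256(image_bytes).hexdigest() in ledger

original = b"raw sensor data"
register_at_capture(original)
print(verify(original))          # True: hash matches the capture-time record
print(verify(original + b"!"))   # False: any alteration changes the digest
```

The "instant binary" property comes from the hash comparison itself; everything hard about the proposal lives in making the ledger trustworthy and getting the registration into camera firmware.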
Your point about hardware addition is valid though. Secure elements are already in many modern cameras for other purposes (DRM, wireless security), so marginal cost might be lower than it seems. But I agree that retrofitting existing cameras is likely impractical.
Real question: Is the gap between "forensically provable with expertise" and "instantly verifiable by anyone" worth the additional complexity? Maybe the answer is no. Existing systems work well enough for professional contexts where authentication matters. Would be curious if you've seen situations where current methods failed or were inadequate?
Well, I’ve worked in photography and cinematography, and some of that has involved legal and forensic contexts, and I’d say that for most purposes, pre-AI, there’s not really a gap that is about authentication… there’s a need to know if image B was derived from image A, but usually that’s a qualitative decision based on chain of custody (usually whoever has the earliest / rawest / highest-quality original or primary reproduction master wins), not a quantitative thing based on hashing, especially as the hash changes from capture to use (particularly when hashing the image + metadata and not just the image data itself). There’s more need for copy-robust steganography (for example, a watermark that’s invisible to humans but which AI cannot fail to reproduce would be a trillion-dollar technology) than there is a known and exploitable gap around who captured an image and what chain of images derived from it. If you work with raw data from the original camera file and DNG for your various intermediate works (easily done with Lightroom / Darktable etc.), you’re already able to maintain what is effectively a Markov chain through the simple act of keeping all your files backed up. Admittedly, walking back from a low-resolution copy on some Chinese t-shirt to the original camera file isn’t exactly easy, but it’s also relatively rare that it needs to happen.
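The point about the hash changing from capture to use is easy to demonstrate. In the toy sketch below (synthetic byte strings, not a real file format), editing metadata breaks a whole-file hash even though the image data is untouched, while a hash over just the pixel bytes survives:

```python
import hashlib

# Toy stand-ins: real files interleave image data and metadata (EXIF etc.).
pixels = b"\x10\x20\x30" * 1000          # unchanged image data
metadata_v1 = b'{"ts": "2024-01-01"}'    # metadata at capture
metadata_v2 = b'{"ts": "2024-06-01"}'    # metadata after an edit/export

file_v1 = pixels + metadata_v1
file_v2 = pixels + metadata_v2

whole_file_hashes_differ = (
    hashlib.sha256(file_v1).hexdigest() != hashlib.sha256(file_v2).hexdigest()
)
pixel_only_hashes_match = (
    hashlib.sha256(pixels).hexdigest() == hashlib.sha256(pixels).hexdigest()
)
print(whole_file_hashes_differ)   # True: any metadata edit breaks a whole-file hash
print(pixel_only_hashes_match)    # True: hashing only image data survives it
```

Even the pixel-only hash fails the moment the image is resized, recompressed, or color-graded, which is exactly why the commenter points at copy-robust steganography rather than hashing for tracking derivatives.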
I would say there probably is a gap in some areas I don’t know well… medical and forensic imagery, for instance, and law enforcement evidentiary chains. If your system were ubiquitous and free, could a WebP of apparent CSAM that the FBI comes across on the dark net be connected back to a specific camera file, with a specific date and time stamp, establishing quantitatively that the subject and the photographer and the consumer are all tied by a verifiable chain of possession? If so, well, there are societal good (and potential bad) arguments for it, but for it to be really useful it would need to be mandated inclusion in the cameras themselves.
For commercial photography I think you’ve got the problem that this is already relatively addressed. For a post-generative AI world it’s not clear how proof of authorship of what would have to be training data would be discernible from the deepfake content (absent that robust watermark idea, which would already make you rich beyond need). But in certain extremely specific workflows where chain of custody is really and ubiquitously important (medical records, legal evidence, educational materials, museum reprophotography, etc) there may be a market, but it would be very hard to validate without finding narrow experts.
Thanks for the detailed response - this is exactly the kind of domain expertise I need to hear.
You're right that formal institutional workflows (courts, news organizations, professional photography) already handle chain of custody adequately through raw file retention and existing practices. I'm realizing my value proposition isn't for those contexts where authentication has always been critical and processes exist.
Where I see the gap is informal authentication at scale - the billions of images shared daily on social media, used in online discourse, spreading as potential misinformation. Your workflow (keeping raw files, institutional backing, forensic analysis when needed) works great for professional contexts. But:
How does the average person verify an image they see online?
- They don't have access to forensic analysis
- They don't know who has the "earliest/rawest version"
- Trusted institutions are too slow to counter propaganda at internet speed
- Even if institutions could authenticate on demand, would they scale to billions of images?
Blockchain provides automated, scalable verification: platforms could flag images as "no blockchain record found - likely generated/manipulated" without human intervention. Can't generate false positives (hash either matches or doesn't). This doesn't replace institutional workflows - it augments them for contexts where those workflows don't exist.
On the post-AI point: I actually think this is backwards. If we reach a world where we can't even prove "this camera captured this scene," then we have no ground truth at all. Hardware attestation becomes MORE critical, not less. The blockchain record I'm proposing also includes geotags, timestamp, and camera ID - significantly harder to forge a complete fake than just the image itself. Without some method of proving hardware capture, the only option is to stop using images for truth-verification entirely.
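The "harder to forge a complete fake" claim rests on binding all those fields together under a device key. Here is a minimal sketch of that binding, with assumptions stated plainly: the key, field names, and functions are hypothetical, and a real camera would hold the key in a secure element and use asymmetric signatures rather than the HMAC used here for brevity.

```python
import hashlib
import hmac
import json

# Hypothetical per-device key; a real camera would keep this in a secure
# element and sign with a private key whose public half is published.
DEVICE_KEY = b"secret-key-burned-into-secure-element"

def sign_capture(image_bytes: bytes, camera_id: str,
                 timestamp: str, geotag: str) -> dict:
    """Bind image hash + capture metadata into one signed record."""
    record = {
        "image_hash": hashlib.sha256(image_bytes).hexdigest(),
        "camera_id": camera_id,
        "timestamp": timestamp,
        "geotag": geotag,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_record(record: dict) -> bool:
    """Recompute the signature over every field except the signature itself."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

rec = sign_capture(b"raw sensor data", "CAM-001",
                   "2024-03-01T12:00:00Z", "51.5,-0.1")
print(verify_record(rec))        # True: untouched record verifies
tampered = dict(rec, geotag="40.7,-74.0")
print(verify_record(tampered))   # False: altering any field breaks the signature
```

Because the signature covers the whole record, a forger has to compromise the device key itself, not just produce a convincing image.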
On ubiquity: Every standard starts somewhere. HTTPS, GPS in cameras, seatbelts - none were ubiquitous until they were. Even before universal adoption, blockchain authentication can prove a positive ("this image has verifiable provenance") even if it can't yet prove a negative ("this image was generated"). For law enforcement, that's still valuable.
On watermarking: Watermarks can be trained around - that's what GANs do. If you watermark with something requiring a key to decode, you're already halfway to cryptographic signing, just without blockchain's forgery resistance. They're complementary approaches, not competing ones.
On qualitative vs quantitative: As an engineer, quantitative beats qualitative for anything requiring accuracy at scale. Expert judgment works for individual high-stakes cases but doesn't scale to internet-speed misinformation.
You've helped me clarify that my audience isn't professional photographers with institutional backing - it's everyone else who needs to distinguish real from fake at the speed of social media. That's probably a harder problem to solve, but arguably more important given how information spreads today.
Does that reframing make sense, or am I still missing key limitations?
If there’s a technological solution that could, somehow, provide a ubiquitous means of walking back from an image discovered on the internet through the whole chain of custody to either a specific camera or a specific prompt, then yes, there’s definitely a market and a need for that in society, and both the market and the need are going to be substantially greater even after this AI summer collapses. But I’ll be damned if I know how you’d get to the technical solution without first achieving the legal momentum necessary to make it ubiquitous enough to actually be useful for that grand a purpose. It’s the chicken/egg problem, or maybe more accurately the “stuffing-everything-back-into-Pandora’s-box” problem. Humanity has already created sufficient original imagery without a blockchain (or whatever) technical provenance model to allow for essentially infinite remixing of just that existing body of images (which we didn’t know we’d need provenance for) into an enormous corpus of slop we also don’t have provenance for. Yes, it would be good if there already existed a blockchain containing a Markov chain back to every original work of natural intellect, but how do you build that while the models are already regurgitating and chewing the weights they’ve already learned?
I mean, I do think you’ve (also) noticed one of the wicked hard problems; it’s just that I’ve had similar conversations in the photography, cinematography, and VFX worlds going back more than 20 years, long before generative AI was a thing. Now that it is, I think we’re stuck in a world where we need to understand that image + attestation != truth, and never really did.
But if you do figure it out, and there somehow exists a future in which we never let Schrödinger’s stinking cat out of Schrödinger’s stinking bag, I’ll be the first to invest.
You're right that full usefulness requires ubiquity, and that getting there means solving adoption challenges that may be insurmountable. Billions of unauthenticated images already exist, but establishing truth going forward—even if only in certain areas—still has immense value.
Your point that an authenticated image can still be a forgery through staging is correct. However, that's been true for much longer than our current crisis and the results have been far more manageable. Hardware attestation doesn't solve truth, but it's a necessary starting point.
The honest answer: This solves a real problem, but implementation barriers may be insurmountable. I actually first imagined this framework over a year and a half ago and tried shopping it around, hoping a utility patent would motivate implementation. No one was interested in solving a problem with such a big question mark around reaching the end game.
I'm publishing as prior art because if legal/regulatory momentum does emerge, I don't want authentication monopolized. But you're right to be skeptical. The fact you've had these conversations for 20+ years highlights the enormity of the problem. That said, we also haven't had blockchain for that long, and it's remarkably well suited to this application (way more than currency, in my opinion).
If I figure out how to stuff everything back in Pandora's box, I'll let you know where to send the check.
Kinda surprised someone hasn't designed this already.