AI + Photography = ?

 

Unfocussed Mike said, 1703630716

Russ Freeman said

FunPhotographer said

On Modelfolio the camera used to take an image is displayed with the image.

That’s not infallible, as you can edit the metadata, but it’s useful detail to see. There are developments by the camera companies to make metadata more authentic with a thing called ‘content credentials’, I understand?

Probably a few years away before it comes mainstream though.

It's called EXIF data, and it is easy to edit.

If you are thinking of this bullshit, and it hasn't made you spit your coffee onto your keyboard while you laugh and laugh and laugh, then you probably think AI will solve the problem for you and Clarke's Third Law is for you.

CAI is not the worst idea, I think.

It doesn't solve the fundamental problem of modern journalism, the first solution to which seems obvious: to protect any sense of objective reality, news organisations with diametrically opposed views should be routinely validating the authenticity and timeliness of each others' raw footage, so they can robustly disagree on the interpretation of shared facts and a shared timeline. If there was a culture of validating/pooling footage it would be a lot harder for a rogue operation like OANN/RT/NewsMax to insert thinly sourced internet fakery into real news.

But CAI as a principle could be useful within that, if it gains a bit more independence from Adobe than DNG did; the two different device-based signing approaches being trialled by Sony and Leica might help move things along. 

They won't help _here_ though, because here we're posting attempts at art, right? Here it's not that important to worry if a raw file has been faked or came before/after another one.

The basic thing that will help us here is deciding it is unacceptable to post wholly AI-generated work, and collectively policing it at a community level by reporting it. 

Photography generally (I do not mean Purpleport specifically by any means) is the least artistically- and visually-aware art form though; so many photographers appear to have little idea how to assess what we are looking at, its difficulty, heritage, commonality etc. And even if one has spent some time learning this stuff, the artistic education of an average experienced photographer (mine included) is barely on the level of the average beginner watercolourist.

AI is now teaching us a lesson, and the lesson is: we don't assign enough value to the work. Photographers now need to think about art, its meaning, the interaction between art and effort, between constraints and vision etc., and not just jump at each labour-saving device as it comes along.

(Edited to remove unwieldy generic "you")

Edited by Unfocussed Mike

Holly Alexander said, 1703630670

Russ Freeman to be honest Facebook and Instagram don't give a shit haha! They're just happy: the more posts the better!

Russ Freeman (staff) said, 1703631688

Unfocussed Mike said

Russ Freeman said

FunPhotographer said

On Modelfolio the camera used to take an image is displayed with the image.

That’s not infallible, as you can edit the metadata, but it’s useful detail to see. There are developments by the camera companies to make metadata more authentic with a thing called ‘content credentials’, I understand?

Probably a few years away before it comes mainstream though.

It's called EXIF data, and it is easy to edit.

If you are thinking of this bullshit, and it hasn't made you spit your coffee onto your keyboard while you laugh and laugh and laugh, then you probably think AI will solve the problem for you and Clarke's Third Law is for you.

CAI is not the worst idea, I think.

It doesn't solve the fundamental problem of modern journalism, the first solution to which seems obvious: to protect any sense of objective reality, news organisations with diametrically opposed views should be routinely validating the authenticity and timeliness of each others' raw footage, so they can robustly disagree on the interpretation of shared facts and a shared timeline. If there was a culture of validating/pooling footage it would be a lot harder for a rogue operation like OANN/RT/NewsMax to insert thinly sourced internet fakery into real news.

But CAI as a principle could be useful within that, if it gains a bit more independence from Adobe than DNG did; the two different device-based signing approaches being trialled by Sony and Leica might help move things along. 

They won't help _here_ though, because here we're posting attempts at art, right? Here it's not that important to worry if a raw file has been faked or came before/after another one.

The basic thing that will help us here is deciding it is unacceptable to post wholly AI-generated work, and collectively policing it at a community level by reporting it. 

Photography generally (I do not mean Purpleport specifically by any means) is the least artistically- and visually-aware art form though; so many photographers appear to have little idea how to assess what they are looking at, its difficulty, heritage, commonality etc.

AI is just teaching us a lesson, and the lesson is: photographers need to think about art.

It is amongst the worst ideas: the companies behind it are grasping at staying relevant in a field where they created their own irrelevancy, *for marketing*.

If the BBC can label an image as "Certified authentic" simply because it is algorithmically so according to XYZ Corp, then we have a serious issue when someone as stupid as me can alter it to say what I want.

AI, in its current (non-intelligent) incarnation, teaches nothing of value. It can't teach, any more than furniture from Ikea teaches carpentry.
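
To make the earlier point concrete: metadata forgery really is trivial. A rough Python sketch using the third-party piexif library (the file name is made up):

import piexif

# Load whatever EXIF block the JPEG already has (file name is illustrative).
exif_dict = piexif.load("photo.jpg")

# Claim the shot came from any camera we like.
exif_dict["0th"][piexif.ImageIFD.Make] = b"Canon"
exif_dict["0th"][piexif.ImageIFD.Model] = b"Canon EOS 5D Mark II"

# Serialise and write the forged metadata straight back into the file.
piexif.insert(piexif.dump(exif_dict), "photo.jpg")

No special know-how required; exiftool does the same from the command line.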



Unfocussed Mike said, 1703632915

Russ Freeman said

It is amongst the worst ideas: the companies behind it are grasping at staying relevant in a field where they created their own irrelevancy, *for marketing*.

If the BBC can label an image as "Certified authentic" simply because it is algorithmically so according to XYZ Corp, then we have a serious issue when someone as stupid as me can alter it to say what I want.

Right, but I don't think that is what the BBC will do anyway -- CAI can only ever be a layer of what they do, because the photo itself can still be a photograph of a fake scenario, right?

(The BBC's original plan was Project Origin, which is folded into C2PA, which is the standards body that supports CAI. Project Origin is dead serious stuff.)

The point of CAI and the various on-device solutions and implementations from C2PA is that the provenance data won't be editable, only added to.

The BBC and AP, say, will be able to distribute cameras that have some C2PA standard implemented at the firmware/trust-zone level, and the provenance data will be cryptographically signed, such that any alteration will no longer match the cryptographic hash. And future edits become part of the same authenticity chain.
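
As a toy sketch of that signing idea, in Python -- with an HMAC standing in for the asymmetric per-device certificate the real standard would use, and every name below invented for illustration:

import hashlib, hmac, json

DEVICE_KEY = b"secret-in-camera-trust-zone"  # stand-in for a real device certificate

def sign_capture(image_bytes, metadata):
    # Hash the image, bind the metadata to that hash, then sign the bundle.
    payload = json.dumps({
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }, sort_keys=True)
    signature = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": signature}

def verify_capture(image_bytes, manifest):
    # Any change to the pixels or the metadata breaks the signature check.
    expected = hmac.new(DEVICE_KEY, manifest["payload"].encode(), hashlib.sha256).hexdigest()
    claimed = json.loads(manifest["payload"])
    return (hmac.compare_digest(expected, manifest["signature"])
            and claimed["image_sha256"] == hashlib.sha256(image_bytes).hexdigest())

In the real scheme the private key never leaves the camera's secure hardware and anyone can verify against the public certificate; the toy above just shows the shape of it.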

Trust is always layers and affidavits, and if a press photographer's image file can only have come from a particular device in a particular time/sequential relationship with some other baseline photo, for example a shot taken in agreed circumstances with other organisations, that does have some value. The value of a time-series in the future will be really obvious.

But beyond that we still have to train people to understand how trust works, to learn to ignore conspiratorial bleating about the "MSM" or the "lamestream media" or the "establishment media", and to ask every news establishment for verification and proofs.

For example, if you are a random blogger, and you have an image of an event that has no CAI provenance or similar, to go with a sensational interpretation of the content, and your image is up against contradictory images from two opposing news organisations that do have CAI provenance, where the images are from photographers who have a track record of doing verifiable work, then it really is incumbent on you as a random blogger to come up with a reason why yours should be trusted and theirs not. 

But as I say, to get this to work means imagining a situation where media enemies like the BBC and, say, Fox News work to quickly validate each other's raw footage, such that their verification supports the growing story, even if the parties disagree on the meanings of the events depicted.

Edited by Unfocussed Mike

FunPhotographer said, 1703633173

Russ Freeman said

FunPhotographer said

On Modelfolio the camera used to take an image is displayed with the image.

That’s not infallible, as you can edit the metadata, but it’s useful detail to see. There are developments by the camera companies to make metadata more authentic with a thing called ‘content credentials’, I understand?

Probably a few years away before it comes mainstream though.

It's called EXIF data, and it is easy to edit.

If you are thinking of this bullshit, and it hasn't made you spit your coffee onto your keyboard while you laugh and laugh and laugh, then you probably think AI will solve the problem for you and Clarke's Third Law is for you.


This is where I first heard about it.  Worth watching for those like me who want to gain a basic grasp of the problem without too much posturing or talking down to.  The CAI is specifically discussed at length from around 12 minutes in, but the whole clip is worth watching. 


https://www.youtube.com/watch?v=_1L0Ukm-1rw

Russ Freeman (staff) said, 1703637697

FunPhotographer said

...


This is where I first heard about it.  Worth watching for those like me who want to gain a basic grasp of the problem without too much posturing or talking down to.  The CAI is specifically discussed at length from around 12 minutes in, but the whole clip is worth watching. 


https://www.youtube.com/watch?v=_1L0Ukm-1rw

I'm cynical about such things.

At 12:50 the guy says, "you can't easily fake it", which means YOU can't fake it because they think you lack the techie know-how, but people with that know-how can easily fake it.

I'm reminded of DeCSS. I might still have the t-shirt.



Russ Freeman (staff) said, 1703638432

Unfocussed Mike said

...

For example, if you are a random blogger, and you have an image of an event that has no CAI provenance or similar, to go with a sensational interpretation of the content, and your image is up against contradictory images from two opposing news organisations that do have CAI provenance, where the images are from photographers who have a track record of doing verifiable work, then it really is incumbent on you as a random blogger to come up with a reason why yours should be trusted and theirs not. 

...

 

License it to those people you know will report things the way you like.

Make the licence revocable and you have a means to control who can tell the truth over time.


Unfocussed Mike said, 1703639075

Russ Freeman said

Unfocussed Mike said

...

For example, if you are a random blogger, and you have an image of an event that has no CAI provenance or similar, to go with a sensational interpretation of the content, and your image is up against contradictory images from two opposing news organisations that do have CAI provenance, where the images are from photographers who have a track record of doing verifiable work, then it really is incumbent on you as a random blogger to come up with a reason why yours should be trusted and theirs not. 

...

 

License it to those people you know will report things the way you like.

Make the licence revocable and you have a means to control who can tell the truth over time.

License what? It's literally on Github with Apache/MIT licences.

https://github.com/contentauth

I mean: CAI/C2PA might not work. But it's not something that can be revoked. The main challenge it has is if it leans too much on Adobe for contributions and they lose interest.

Russ Freeman (staff) said, 1703644890

Unfocussed Mike said

Russ Freeman said

Unfocussed Mike said

...

For example, if you are a random blogger, and you have an image of an event that has no CAI provenance or similar, to go with a sensational interpretation of the content, and your image is up against contradictory images from two opposing news organisations that do have CAI provenance, where the images are from photographers who have a track record of doing verifiable work, then it really is incumbent on you as a random blogger to come up with a reason why yours should be trusted and theirs not. 

...

 

License it to those people you know will report things the way you like.

Make the licence revocable and you have a means to control who can tell the truth over time.

License what? It's literally on Github with Apache/MIT licences.

https://github.com/contentauth

I mean: CAI/C2PA might not work. But it's not something that can be revoked. The main challenge it has is if it leans too much on Adobe for contributions and they lose interest.

I clearly misunderstood.

If a random blogger can add the needed metadata to a jpg then I don't see what it achieves. 


Unfocussed Mike said, 1703684813

Russ Freeman said

Unfocussed Mike said

Russ Freeman said

Unfocussed Mike said

...

For example, if you are a random blogger, and you have an image of an event that has no CAI provenance or similar, to go with a sensational interpretation of the content, and your image is up against contradictory images from two opposing news organisations that do have CAI provenance, where the images are from photographers who have a track record of doing verifiable work, then it really is incumbent on you as a random blogger to come up with a reason why yours should be trusted and theirs not. 

...

 

License it to those people you know will report things the way you like.

Make the licence revocable and you have a means to control who can tell the truth over time.

License what? It's literally on Github with Apache/MIT licences.

https://github.com/contentauth

I mean: CAI/C2PA might not work. But it's not something that can be revoked. The main challenge it has is if it leans too much on Adobe for contributions and they lose interest.

I clearly misunderstood.

If a random blogger can add the needed metadata to a jpg then I don't see what it achieves. 

It achieves a trust chain that shows what edits were made. And if the random blogger has a camera that supports CAI in hardware, and a raw processor that supports it, it helps prove that the photo (or footage) came from their camera. And it establishes a sense of time, in that the media must exist in a sequence.

But yeah, it’s only one component of the process of trusting and verifying the producers of content. 

What you get from this is “I took it with my camera and I can prove I didn’t subsequently edit it deceptively”.

This is not nothing — it’s more than we have now, because that level of proof at the moment requires trusting that a raw file has not been tampered with, and/or trusting the organisation who converted it from raw when they say so. In principle, CAI adds something.

Add this to the reality of modern news (that most major events are documented by more than one device and more than one angle) and we could have something that helps add some verification to sourcing and defends us from completely fabricated content (AI-generated or deceptively edited).

From that ground level and up, we still have questions about the long term credibility of sources, reporters and media organisations, but we always did!

What CAI could get us back to is the same level of trust that we had when fakery was not so industrially fast and simple. Which is not perfect but it is considerably better than a world where anyone can use the mere existence of AI content generation tools to cast doubt on the truthfulness and reputation of everyone else (“show me a writer who doesn’t…” etc.)

Then there are more community-verification systems to build on top.

All of this is difficult, imperfect and will be hard to establish. But without these “chain of evidence” tools that help even on this basic level, we will soon be in a bad place.

Edited by Unfocussed Mike

Russ Freeman (staff) said, 1703689632

Unfocussed Mike said

...

What you get from this is “I took it with my camera and I can prove I didn’t subsequently edit it deceptively”.

...

If I deceptively edit such a photo and change the metadata to hide that edit, then surely the entire "chain of evidence/trust" is merely theatre for the masses.

Unfocussed Mike said, 1703691340

Russ Freeman said

Unfocussed Mike said

...

What you get from this is “I took it with my camera and I can prove I didn’t subsequently edit it deceptively”.

...

If I deceptively edit such a photo and change the metadata to hide that edit, then surely the entire "chain of evidence/trust" is merely theatre for the masses.

It’s more nuanced than that, and I don’t know why you keep talking about “the masses” as if this is being imposed by some unified conspiracy. It’s really not.

The cryptographic chain is the point. It starts at the source file, and each edit modifies it. The metadata describing the edit is combined with the change in hash of the file at the point of that edit and the hash of the file before the edit. So you can’t take an edit out of the chain once a subsequent edit is made, and if you make an edit that isn’t represented by an entry on the chain, that becomes obvious from use of the verification tools.
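
A toy sketch of that chaining, in Python (my own simplification, not the actual C2PA manifest format):

import hashlib

def link_hash(link):
    return hashlib.sha256(repr(sorted(link.items())).encode()).hexdigest()

def make_link(prev_link, before_bytes, after_bytes, description):
    # Each link binds the edit description to the file's hash before and
    # after the edit, plus a hash of the previous link in the chain.
    return {
        "prev": link_hash(prev_link) if prev_link else None,
        "before": hashlib.sha256(before_bytes).hexdigest(),
        "after": hashlib.sha256(after_bytes).hexdigest(),
        "edit": description,
    }

def verify(chain, final_bytes):
    # Every link must point at its predecessor and pick up exactly where
    # the predecessor left off; the last link must match the actual file.
    for prev, link in zip(chain, chain[1:]):
        if link["prev"] != link_hash(prev) or link["before"] != prev["after"]:
            return False  # an edit was removed, reordered, or never recorded
    return chain[-1]["after"] == hashlib.sha256(final_bytes).hexdigest()

An edit made without appending a link, or a link quietly dropped once later links exist, makes verify() fail -- that is the "becomes obvious from the verification tools" part.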

(It is analogous to but not dependent on existing blockchain technologies, and this stuff — the uneditability of the blockchain — is now fairly well proven, to the extent that corrupt whales have had to fork entire currencies to work around mistakes and frauds that were not made in their favour. Uneditability is not the fundamental issue with cryptocurrencies.)

Digital chain of evidence stuff is difficult but potentially valuable.

In this case, if you start with a raw file from a camera that doesn’t have its own means of verification through a device certificate, then yes, the chain will show that the initial input is untrusted, and it will be up to the user, producer or viewer to decide how much trust they place in the integrity of the source image. But this is not nothing — it’s routine to have questions about the subjective value of a source image anyway. And trust is always layers. (Try proving your identity to someone, sometime, and see how this works; I am only ever who someone else agrees I am.)

If the source file is additionally verifiable then you get some extra value (the proof of which device captured it and basically when).

Though you still have to decide whether the photo was taken with the intention to mislead. Figuring out that stuff is what competitive journalism is about, and it’s our best defence against fascism.

With the chain of evidence stuff you can tell — in principle — whether a secondary participant has deceptively edited the footage, which has been a problem with misinformation in the last five or six years. And after all, that is one major form of deception: taking something that appears to be real and changing it to say something else.

There are additional benefits when footage is being pooled, notarised with an escrow or archive service, or cross-verified by other media organisations. 

None of it is perfect but it’s better than doing nothing.

Edited by Unfocussed Mike

Off Beat Image said, 1703691544

Unfocussed Mike said

...

The cryptographic chain is the point. It starts at the source file, and each edit modifies it. ...

...


Surely you can just re-photograph a high-res printout and the chain starts over?

Russ Freeman (staff) said, 1703694128

Off Beat Image said

...


Surely you can just re-photograph a high-res printout and the chain starts over?

Or just save the edited image and then add the needed metadata and pretend it was a real photo.

Unfocussed Mike said

Russ Freeman said

Unfocussed Mike said

...

What you get from this is “I took it with my camera and I can prove I didn’t subsequently edit it deceptively”.

...

If I deceptively edit such a photo and change the metadata to hide that edit, then surely the entire "chain of evidence/trust" is merely theatre for the masses.

It’s more nuanced than that, and I don’t know why you keep talking about “the masses” as if this is being imposed by some unified conspiracy. It’s really not.

...

 

I say "the masses" because most people (the masses) don't have the first clue about anything tech-related and will just nod in agreement when they see a sticker that says "verified and trusted" (or whatever the slogan will be).

It's theatre because all it achieves is a false sense of security.

I'm interested in knowing how the chain starts, and whether I can start such a chain with a photo I took using my old 5dm2, or whether I can save a JPG using libjpeg and then create a chain, or are there chain-starting gatekeepers, like Canon or Adobe?

Unfocussed Mike said, 1703694422

Off Beat Image said

...


Surely you can just re-photograph a high-res printout and the chain starts over?

You absolutely can!

But you know what you can't do?

Edit the image, print it out, photograph the printout and then successfully claim your image is identical to the source. Because your verification chains will demonstrate that they aren't.
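
In toy terms, continuing the earlier sketch (the byte strings here are invented for illustration):

import hashlib

def chain_root(image_bytes):
    # The first link of a provenance chain commits to the source file's hash.
    return hashlib.sha256(image_bytes).hexdigest()

# A re-photographed print is a new capture of different pixels, so its chain
# roots at a different hash: it can verify as an authentic photo of a print,
# but never as the same asset as the original capture.
assert chain_root(b"original sensor data") != chain_root(b"re-shot print data")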

This means we are away from digital fraud and back to journalistic fraud, which we've had for centuries and for which we have some imperfect mechanisms.

Trust is a web of verifiable relationships.

-- 

Here's a thought experiment for you:

There is a bank. It has two customers, A and B, who steal from each other when they send money to, or receive money from, the bank.

Customer A devises a way to send their money to the bank in secret, and receive it in secret in return.

A now has an advantage over B.

Customer B figures out the same way to send and receive money to/from the bank in secret, and it so happens that both of them knowing how it is done does not break either individual secret.

A no longer has an advantage over B. 

Both know this.

Is it true to say that because neither has an advantage over the other, there's no benefit to the secrecy?

If you say: yes, there is no benefit, you are wrong, obviously.

But you are wrong in an obvious way and at least one subtle way.

What's one subtle way you are wrong?