Banning AI art on PP

 

waist.it said, 1663333856

Russ Freeman said

waist.it said

Russ Freeman said

waist.it said

Gothic Image said

These AIs train on a very large data set from across the 'net, but what would happen if in the future those AIs were available to individuals such that you could train them purely on your own images? 


Yours is a rather better example than mine.

As you know, pretty much everything I have ever taken, plus any edits, all resides on my Linux Media Server - which is also a fully-fledged LAMP server. About 550,000 SOOC shots and around 50,000 edits live there. Many of these are tagged and/or captioned using a local installation of Piwigo, with the corresponding metadata stored in a MySQL (MariaDB) database - mostly so I can find stuff again.
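(By way of illustration only: a minimal sketch of the sort of lookup that makes the archive searchable. The table and column names - piwigo_images, piwigo_tags, piwigo_image_tag - are assumed from a default Piwigo install with the standard piwigo_ prefix, not taken from this thread, and the connection details are placeholders.)

# Sketch: list every image tagged "Chiara" in a default Piwigo (MariaDB) schema.
# Table names and credentials are assumptions - check your own install first.
import pymysql

conn = pymysql.connect(host="localhost", user="piwigo", password="secret", database="piwigo")
try:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT i.id, i.file, i.name
            FROM piwigo_images i
            JOIN piwigo_image_tag it ON it.image_id = i.id
            JOIN piwigo_tags t       ON t.id = it.tag_id
            WHERE t.name = %s
            ORDER BY i.date_available DESC
            """,
            ("Chiara",),
        )
        for image_id, filename, title in cur.fetchall():
            print(image_id, filename, title)
finally:
    conn.close()

Nothing clever - but it is exactly this kind of structured, captioned data that a training pipeline could ingest wholesale.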

However, this also means that most, if not all, of my images and their metadata are already in a form that is easily digested and assimilated by an AI. So a system based, say, on OpenAI could be trained on a pool of my own work in the way you describe. Moreover, it would not merely know my models' names and images, but also my photographic style and preferences, and my editing skills (or lack of them) as well.

Assuming it all works as expected, I could submit a plain English request:-

"Create a picture of Chiara, 28 years old, wind in her hair, posed like Bettie Page, sitting on the bonnet of a Lotus 7, on Miu Wo beach, at sunset."

Would near-zero-effort images generated like this be acceptable for modelling sites?

Not anymore on this site, no.


And I'm guessing that arguments such as:-

  • "Thirty years of massive effort to make it 'zero effort'."

And:-

  • "All the component pieces are mine."

wouldn't cut much proverbial ice?

Of the dozen or so reports so far, the offer of "delete the photo if it's machine-generated, or send us a copy of the unedited original" has in every case resulted in the photo being deleted by the uploader.

If the answer to "did you take this photo?" isn't yes, or the request "Please send us the unedited original in high resolution" cannot be fulfilled, then the image will be deleted.

I'm sure there are complexities, but I have tried to frame the language and rules so that complexities are minimised, and, being the adaptable human I am, I will of course adapt them as new situations arise.


Your site, your rules, and I have zero intention of breaking any of them. :-)

Granted, my earlier effort in this thread to trick you with AI failed. Not sure if it was a lack of artificial intelligence or a lack of mine! lol. Nevertheless, I think you see my point? In the near future, creating a fake AI "original" shouldn't be too hard either. Even faking the EXIF is already a trivial task - a single command copies every tag from a genuine photo onto a fake, and you don't even need AI for that:-

  • exiftool -TagsFromFile source.jpg -all:all target.jpg

How on earth are you or your image screeners going to spot the AI fakes from the genuine ones, without making an inordinate amount of extra work for yourselves?



Russ Freeman (staff) said, 1663335648

waist.it said

How on earth are you or your image screeners going to spot the AI fakes from the genuine ones, without making an inordinate amount of extra work for yourselves?

I am sure there are many options for determining whether someone is lying about how they arrived at an image.

Editing JPG metadata has always been easy; that's nothing new. Back in 1999 I developed an image encoding library in C++ that made it trivial.

Determining whether someone is being honest is always difficult: a challenge comes along and is overcome, then a new challenge arises and is defeated, and so on, again and again. This, too, is nothing new. Around 2000 I created game anticheat software, which was defeated, so I improved it, and the cycle went on. The same thing happened with copy protection and software crackers, and round and round it went.

Maybe we'll create a new site that allows machines to compete for fake eyeballs to gain fake likes for fake photos, so that those who like to text their orders for photos can get their jollies without being called out about it.

Who knows?!

But for now, we'll do our best to weed out those that try to deceive the community into thinking they created something when they didn't.




waist.it said, 1663337346

Russ Freeman said

But for now, we'll do our best to weed out those that try to deceive the community into thinking they created something when they didn't.

Indeed, the game of cat-and-mouse has always been afoot, especially for a chap in your situation. Seems to me, however, that this time simply determining the rules is likely to prove, shall we say, somewhat challenging - let alone determining on which side of said rules the various AI-derived pastiches of a photographer's own work would fall.

A R G E N T U M said, 1663341516

waist.it said

Indeed, the game of cat-and-mouse has always been afoot, especially for a chap in your situation.


The Sweeney T Shirt | We Haven't Had Any Dinner | Revolution Ape

waist.it said, 1663341996

The Sweeney T Shirt | We Haven't Had Any Dinner | Revolution Ape

A R G E N T U M I was watching that very episode just the other night, "The Ringer", with Brian Blessed and Ian Hendry playing the baddies. We may have AI today, but they certainly don't make villains like that any more... :-)

Edited by waist.it

A R G E N T U M said, 1663345483

waist.it said


A R G E N T U M I was watching that very episode just the other night, "The Ringer", with Brian Blessed and Ian Hendry playing the baddies. We may have AI today, but they certainly don't make villains like that any more... :-)



And to think Brian Blessed started off in Z-Cars playing PC Fancy Smith, long before any of us had heard of Cybercrime  :)

Feeling My Age :: TV Theme Of The Week: Z Cars

Oliver Thompson said, 1663357567

I think it's a good decision. You wouldn't submit a photograph to an oil painting museum, so submitting AI-generated images to a photography website shouldn't be allowed either.

Lenswonder said, 1663419659

Looking at Instagram lately with all these AI images - it's awful. It just reminds you of a generic, sugary supermarket cake.

MidgePhoto said, 1663422773

Flickr, always a consciously broad church, has added a category for virtual photography/photographs.

So perhaps that is where to lodge the output of our conversationally-interfaced tools.

fab1 said, 1663423186

Interesting that AI is banned on PP, but the advice on avatars says this if you don't pick an avatar:

"...your avatar image will be automatically chosen by a computer program."

LOL 😁

MidgePhoto said, 1663424774

fab1 said

Interesting that AI is banned on PP, but the advice on avatars says this if you don't pick an avatar:

"...your avatar image will be automatically chosen by a computer program."

LOL 😁


Purplebot promises not to make a _clever_ choice.

HorrifyMeUK said, 1663426970

MidgePhoto said



Purplebot promises not to make a _clever_ choice.


Purple bot may choose it but it doesn’t MAKE it

Tarmoo said, 1663444478

Interesting article suggesting that AI art is damaging artists such as Greg Rutkowski, whose work is getting swamped by online digital copies made by AI generators ... https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/

Edited by Tarmoo

HorrifyMeUK said, 1663485645

Tarmoo said

Interesting article suggesting that AI art is damaging artists such as Greg Rutkowski, whose work is getting swamped by online digital copies made by AI generators ... https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/


I just read this. 

“The UK, which hopes to boost domestic AI development, wants to change laws to give AI developers greater access to copyrighted data. Under these changes, developers would be able to scrape works protected by copyright to train their AI systems for both commercial and noncommercial purposes.”


It's interesting to see how many artists are against this, and how the AI companies clearly don't give a fuck. Talk of training the algorithm on public domain images or forming partnerships with museums or artists is utter bollocks. You only need look at how piss-poor the copyright policy is on YouTube or any social media platform to see where it will end up.

This really is a genuine Jurassic Park moment: they were so concerned with whether or not they COULD build this thing that nobody ever stopped to ask if they SHOULD. I've always embraced digital tech in my career in image making, so I don't exactly consider myself a Luddite, but this AI text-to-image tool really does feel like a huge step too far. I really hope it gets eaten by its own can of worms.

Unfocussed Mike said, 1663533912

HorrifyMeUK said

“The UK, which hopes to boost domestic AI development, wants to change laws to give AI developers greater access to copyrighted data. Under these changes, developers would be able to scrape works protected by copyright to train their AI systems for both commercial and noncommercial purposes.”

I really hope it gets eaten by its own can of worms.

It's appalling, isn't it -- a wholly unjustifiable bit of law that is being drafted so early that most people do not know what it means: 

- Copyright doesn't apply if you're training an AI.

But since the theft has already happened -- the training data set is out there -- there's nothing really to do.

I do think it is possible it will get eaten by its own can of worms, though, because one AI will not necessarily recognise the work of another AI. So, for example, an AI training itself on Jackson Pollock won't necessarily be able to tell that the Jackson Pollock in the blog post is not real.

It's entirely possible that, for example, Midjourney can't even recognise its own work -- though I do wonder if they've devised some way to resolve that problem by embedding adversarial qualities.
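(A toy sketch of the idea in that last sentence - a generator stamping its own output with a signature it can later recognise. As I understand it, the Stable Diffusion reference release already applies an invisible watermark for roughly this purpose; real schemes are far more robust than this least-significant-bit version, which only survives lossless formats, and every name here is made up for illustration.)

# Toy example: hide, then detect, a fixed byte signature in an image's blue-channel LSBs.
import numpy as np
from PIL import Image

MARK = np.frombuffer(b"made-by-my-model", dtype=np.uint8)   # 16-byte signature

def embed_mark(path_in: str, path_out: str) -> None:
    img = np.array(Image.open(path_in).convert("RGB"))
    bits = np.unpackbits(MARK)                      # 128 bits to hide
    blue = img[..., 2].ravel().copy()               # flat copy of the blue channel
    blue[:bits.size] = (blue[:bits.size] & 0xFE) | bits
    img[..., 2] = blue.reshape(img.shape[:2])
    Image.fromarray(img).save(path_out, format="PNG")   # lossless, so the bits survive

def has_mark(path: str) -> bool:
    img = np.array(Image.open(path).convert("RGB"))
    bits = img[..., 2].ravel()[: MARK.size * 8] & 1
    return np.array_equal(np.packbits(bits), MARK)

Whether the big generators do something like this consistently - and whether a rival model would ever respect it - is exactly the open question in that last paragraph.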