AI and plagiarism

 

GPA6 said, 1699376996

-sp●●n- said

GPA6 said


I tried to insert the image using the image tab but it didn't work; I had copied a link from my Dropbox account.

Anyway, the picture was created in a few minutes: a selfie, an AI-generated image, and then a couple of minutes to overlay them. Eighty-plus percent of it is AI. Is the top image acceptable among photographers? 


Edited by GPA6


I cannot wait for it to become mainstream and the mess it will make of dating apps: "but you look nothing like the picture".


Apparently, the vast majority of people would rather their pictures looked good than showed a true likeness. I moderate an FB group that edits images upon request. The idea was to restore pictures of lost loved ones or to salvage memorable moments, but it hasn't worked out like that! I really wouldn't want to be on any form of dating site, haha! You're gonna get more than you bargained for and, in some cases, a lot more. :)

GPA6 said, 1699377242

Gothic Image said

GPA6 said

[snip]


which is driving people away from open discussion.  Why doesn't anyone that's not interested in the original thread just go and find another one? 


I think some of us are trying to understand what the original thread was actually about, hence my comment above.  AI can selectively alter parts of an existing image - don't we know that already?  What's the discussion point here?


Hi Goth, I'm glad you've turned up. First off, how do I share an image here? I tried to link an image from Dropbox but it didn't work. Second, the thread is about regenerating an existing image. See the first three: the first image is genuine, the second two are generated using AI from that image. And so, you might go to great lengths to create a work of art only to find that AI can replicate it without copying it. 

The Ghost said, 1699377408

Gothic Image said

GPA6 said

[snip]


which is driving people away from open discussion.  Why doesn't anyone that's not interested in the original thread just go and find another one? 


I think some of us are trying to understand what the original thread was actually about, hence my comment above.  AI can selectively alter parts of an existing image - don't we know that already?  What's the discussion point here?


Indeed. The ability to take another photographer's idea and pass it off is hardly new; it's just that before, you needed to turn up to a workshop/group shoot with an actual camera.

Models using AI to misrepresent who (and what) they are is a new issue for photographers, but again, you could always use somebody else's selfies etc. to do that before.

-sp●●n- said, 1699377443

Gothic Image said

GPA6 said

[snip]


which is driving people away from open discussion.  Why doesn't anyone that's not interested in the original thread just go and find another one? 


I think some of us are trying to understand what the original thread was actually about, hence my comment above.  AI can selectively alter parts of an existing image - don't we know that already?  What's the discussion point here?


Previously, AI was driven by words ("a sailing ship in a stormy bay at night") and the output was quite random. There was never really a concern about plagiarism, because describing most pictures precisely enough to resemble an existing picture is quite hard, unless the image is a mainstream one. You could ask AI to "paint the Mona Lisa wearing a party hat" and it would, but only because AI knows that iconic image; it has seen it on the internet a million times.

The game changer here is that AI can now very easily be guided by an existing image, even one it has never seen, which means a derivative work can be created from any image the AI user likes, not just iconic ones.
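
To make that concrete, here is a minimal image-guided (img2img) sketch using the open-source Hugging Face diffusers library. The thread doesn't say which tool was actually used, so the model choice, file names, and settings below are illustrative assumptions, not the poster's setup:

```python
# Minimal image-to-image sketch with Hugging Face diffusers.
# Assumptions (not from the thread): Stable Diffusion 1.5 weights,
# a CUDA GPU, and a local file original_photo.jpg as the guide image.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

guide = Image.open("original_photo.jpg").convert("RGB").resize((768, 512))

# strength sets how far the output may drift from the guide image:
# low values stay close to the source, high values let the prompt dominate.
result = pipe(
    prompt="a sailing ship in a stormy bay at night",
    image=guide,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("derived.png")
```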

Gothic Image said, 1699377978

GPA6 said

Gothic Image said

GPA6 said

[snip]


which is driving people away from open discussion.  Why doesn't anyone that's not interested in the original thread just go and find another one? 


I think some of us are trying to understand what the original thread was actually about, hence my comment above.  AI can selectively alter parts of an existing image - don't we know that already?  What's the discussion point here?


Hi Goth, I'm glad you've turned up. First off, how do I share an image here? I tried to link an image from Dropbox but it didn't work. Second, the thread is about regenerating an existing image. See the first three: the first image is genuine, the second two are generated using AI from that image. And so, you might go to great lengths to create a work of art only to find that AI can replicate it without copying it. 


See the recent thread regarding problems linking to Dropbox, although I think the issue is at their end.

I'm still not seeing anything new in the examples shown. Is the point to do with the commands given to the AI? 

The Ghost said, 1699378317

-sp●●n- said

Gothic Image said

GPA6 said

[snip]


which is driving people away from open discussion.  Why doesn't anyone that's not interested in the original thread just go and find another one? 


I think some of us are trying to understand what the original thread was actually about, hence my comment above.  AI can selectively alter parts of an existing image - don't we know that already?  What's the discussion point here?


Previously, AI was driven by words ("a sailing ship in a stormy bay at night") and the output was quite random. There was never really a concern about plagiarism, because describing most pictures precisely enough to resemble an existing picture is quite hard, unless the image is a mainstream one. You could ask AI to "paint the Mona Lisa wearing a party hat" and it would, but only because AI knows that iconic image; it has seen it on the internet a million times.

The game changer here is that AI can now very easily be guided by an existing image, even one it has never seen, which means a derivative work can be created from any image the AI user likes, not just iconic ones.

Keep up, I've been doing this sort of thing for seven months now ;-)

What's new is effectively style transfer, so instead of stealing someone's images, I could borrow their style to impersonate them, even down to the way the B&W processing is handled with added grain to mimic Tri-X.

GPA6 said, 1699379385

The Ghost of Prancy McPrettykins said

-sp●●n- said

Gothic Image said

GPA6 said

[snip]


which is driving people away from open discussion.  Why doesn't anyone that's not interested in the original thread just go and find another one? 


I think some of us are trying to understand what the original thread was actually about, hence my comment above.  AI can selectively alter parts of an existing image - don't we know that already?  What's the discussion point here?


Previously, AI was driven by words ("a sailing ship in a stormy bay at night") and the output was quite random. There was never really a concern about plagiarism, because describing most pictures precisely enough to resemble an existing picture is quite hard, unless the image is a mainstream one. You could ask AI to "paint the Mona Lisa wearing a party hat" and it would, but only because AI knows that iconic image; it has seen it on the internet a million times.

The game changer here is that AI can now very easily be guided by an existing image, even one it has never seen, which means a derivative work can be created from any image the AI user likes, not just iconic ones.

Keep up, I've been doing this sort of thing for seven months now ;-)

What's new is effectively style transfer, so instead of stealing someone's images, I could borrow their style to impersonate them, even down to the way the B&W processing is handled with added grain to mimic Tri-X.


I didn't see this thread, but it's exactly what I have been harping on about for a while. My question is: does this method negate the need for expensive gear? And will it easily put right our own inadequacies? 

GPA6 said, 1699379475

Gothic Image said

GPA6 said

Gothic Image said

GPA6 said

[snip]


which is driving people away from open discussion.  Why doesn't anyone that's not interested in the original thread just go and find another one? 


I think some of us are trying to understand what the original thread was actually about, hence my comment above.  AI can selectively alter parts of an existing image - don't we know that already?  What's the discussion point here?


Hi Goth, I'm glad you've turned up. First off, how do I share an image here? I tried to link an image from Dropbox but it didn't work. Second, the thread is about regenerating an existing image. See the first three: the first image is genuine, the second two are generated using AI from that image. And so, you might go to great lengths to create a work of art only to find that AI can replicate it without copying it. 


See the recent thread regarding problems linking to Dropbox, although I think the issue is at their end.

I'm still not seeing anything new in the examples shown. Is the point to do with the commands given to the AI? 


It's as Ghost said: it's about generating an image from an image. A subject that has come up before, but in a different context. 

-sp●●n- said, 1699379609

The Ghost of Prancy McPrettykins said

Keep up, I've been doing this sort of thing for seven months now ;-)

What's new is effectively style transfer, so instead of stealing someone's images, I could borrow their style to impersonate them, even down to the way the B&W processing is handled with added grain to mimic Tri-X.

Slightly different? In the effort required, yours was multiple steps with various tools and image overlay?

The difference is the ease of doing it. Take these two; here is the original:

and a changed image (which blends with the background, and it took 10 words and 30 seconds):





The Ghost said, 1699379910

-sp●●n- said

The Ghost of Prancy McPrettykins said

Keep up, I've been doing this sort of thing for seven months now ;-)

What's new is effectively style transfer, so instead of stealing someone's images, I could borrow their style to impersonate them, even down to the way the B&W processing is handled with added grain to mimic Tri-X.

Slightly different? In the effort required, yours was multiple steps with various tools and image overlay?

The difference is the ease of doing it. Take these two; here is the original:

and a changed image (which blends with the background, and it took 10 words and 30 seconds):




As I said, seven months is a long time in AI research right now. I could show an example of style transfer, except I would need the permission of a photographer with a much more defined style to show how it's done. That is even less than ten words: you load the 'victim's' image, ask the classifier to work out a prompt, and then dial the influence down so that the underlying composition is lost but the 'style' remains - between 30% and 50% tends to produce a result.
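
For illustration, that workflow can be approximated with open-source pieces. The post doesn't name the tools (Leonardo and SDXL with IP-Adapter only come up later), so the libraries, model, and settings below are assumptions rather than what The Ghost actually runs:

```python
# Rough approximation of the workflow described above:
# 1) a CLIP interrogator guesses a prompt from the source image,
# 2) img2img re-renders it with the source image's influence dialled down.
import torch
from PIL import Image
from clip_interrogator import Config, Interrogator
from diffusers import StableDiffusionImg2ImgPipeline

source = Image.open("style_reference.jpg").convert("RGB")  # illustrative file name

# Step 1: let the classifier work out a prompt for the image.
ci = Interrogator(Config(clip_model_name="ViT-L-14/openai"))
prompt = ci.interrogate(source)

# Step 2: regenerate with reduced image influence. In diffusers the knob is
# `strength` (how much noise is added), so roughly 30-50% image influence
# corresponds to strength around 0.5-0.7: enough to lose the composition
# while keeping the overall look.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
result = pipe(prompt=prompt, image=source.resize((768, 512)),
              strength=0.6, guidance_scale=7.5).images[0]
result.save("style_only.png")
```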

GPA6 said, 1699380008

-sp●●n- said

The Ghost of Prancy McPrettykins said

Keep up, I've been doing this sort of thing for seven months now ;-)

What's new is effectively style transfer, so instead of stealing someone's images, I could borrow their style to impersonate them, even down to the way the B&W processing is handled with added grain to mimic Tri-X.

Slightly different? In the effort required, yours was multiple steps with various tools and image overlay?

The difference is the ease of doing it. Take these two; here is the original:

and a changed image (which blends with the background, and it took 10 words and 30 seconds):





This method is simply AI; what Ghost and I are referring to is blending AI back into the original photo. Your top image makes no sense at all in this example, Spoon?

-sp●●n- said, 1699380542

The example here was an image which was not easy to blend, as it had water; it was just a thrown-together example.

It is early days in terms of ease of use, but there is really good commercial usability in getting the best output: imagine easily selecting a certain hairstyle from one image, a necklace from another, etc., and blending them onto a specific image while changing very little else.
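
As a rough sketch of the "changing very little else" part, masked inpainting with diffusers regenerates only the masked region and leaves the rest of the photo alone. This version is text-guided; pulling the necklace from a second reference image would need an image-prompt adapter on top, which is not shown, and all file and model names are illustrative assumptions:

```python
# Masked inpainting sketch: only the masked area is redrawn, the rest of the
# portrait is untouched. File names and the prompt are illustrative only.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

photo = Image.open("portrait.jpg").convert("RGB").resize((512, 512))
mask = Image.open("neckline_mask.png").convert("L").resize((512, 512))  # white = edit

result = pipe(
    prompt="an ornate silver necklace with a sapphire pendant",
    image=photo,
    mask_image=mask,
    guidance_scale=7.5,
).images[0]
result.save("portrait_with_necklace.png")
```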

GPA6 said, 1699380678

The Ghost of Prancy McPrettykins said

-sp●●n- said

The Ghost of Prancy McPrettykins said

Keep up, I've been doing this sort of thing for seven months now ;-)

What's new is effectively style transfer, so instead of stealing someone's images, I could borrow their style to impersonate them, even down to the way the B&W processing is handled with added grain to mimic Tri-X.

Slightly different? In the effort required, yours was multiple steps with various tools and image overlay?

The difference is the ease of doing it. Take these two; here is the original:

and a changed image (which blends with the background, and it took 10 words and 30 seconds):




As I said, seven months is a long time in AI research right now. I could show an example of style transfer, except I would need the permission of a photographer with a much more defined style to show how it's done. That is even less than ten words: you load the 'victim's' image, ask the classifier to work out a prompt, and then dial the influence down so that the underlying composition is lost but the 'style' remains - between 30% and 50% tends to produce a result.


Which platform are you using for style transfer? Is it one where you can also create a unique model?


-sp●●n- said, 1699380874

Leonardo

The Ghost said, 1699392984

GPA6 said

The Ghost of Prancy McPrettykins said

-sp●●n- said

The Ghost of Prancy McPrettykins said

Keep up, I've been doing this sort of thing for seven months now ;-)

What's new is effectively style transfer, so instead of stealing someone's images, I could borrow their style to impersonate them, even down to the way the B&W processing is handled with added grain to mimic Tri-X.

Slightly different? In the effort required, yours was multiple steps with various tools and image overlay?

As I said, seven months is a long time in AI research right now. I could show an example of style transfer, except I would need the permission of a photographer with a much more defined style to show how it's done. That is even less than ten words: you load the 'victim's' image, ask the classifier to work out a prompt, and then dial the influence down so that the underlying composition is lost but the 'style' remains - between 30% and 50% tends to produce a result.


Which platform are you using for style transfer? Is it one where you can also create a unique model?

Stable Diffusion XL with the IP-Adapter extension.
When you say 'unique model', do you mean the 'nobodies', which are downloadable models created by blending two (or more) people SD recognises and creating a feedback loop?

As in, if we ask SD to generate a whole bunch of images which are 50% Emma Watson and 50% random pornstar (seriously) and then train it back onto its own output, so that when we ask for that blend we get a somewhat consistent result.
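
For reference, the Stable Diffusion XL plus IP-Adapter setup named above looks roughly like this in the diffusers implementation. A UI such as ComfyUI or AUTOMATIC1111 could equally well be in use, and the scale value simply echoes the 30-50% influence mentioned earlier, so treat this as a sketch rather than the exact setup:

```python
# Sketch of SDXL with an IP-Adapter image prompt via diffusers.
# The reference file name, prompt, and scale are illustrative assumptions.
import torch
from diffusers import StableDiffusionXLPipeline
from PIL import Image

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models",
                     weight_name="ip-adapter_sdxl.bin")
pipe.set_ip_adapter_scale(0.4)  # ~40% image influence: the look survives, the composition doesn't

style_ref = Image.open("style_reference.jpg").convert("RGB")

result = pipe(
    prompt="black and white portrait, heavy film grain",
    ip_adapter_image=style_ref,
    num_inference_steps=30,
).images[0]
result.save("style_transfer.png")
```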