AI-generated content: Legality, ethics, and the nature of art

Since it's the new big thing sweeping the globe and will likely have a profound impact on media going forward, I figured I'd open up a thread specifically to discuss the topic of AI art generators like DALL-E, Midjourney, and the like.

It's a polarizing subject, but I think there's merit in debating the various issues surrounding it.

Such issues include "is it ethical for AI art generators to be trained on data scraped together from copyrighted works, and how is it different from humans drawing inspiration from art?" and "what do you see as the future of commercial artistic endeavors (comics, game asset creation, animation, etc.)?"

EDIT: Expanding the topic to all forms of AI-generated art, including creative writing and music composition.


There's also a more general thread for discussion of AI as a whole.

Edited by Mrph1 on Jun 22nd 2024 at 11:56:23 AM

Imca (Veteran)
#301: Dec 15th 2022 at 8:04:00 PM

A solution that sounds like a shitpost but is offered in full sincerity:

Sounds like the next step would be to develop some AI art critics and set them against each other in a feedback loop.

At the end of the day, for as long as our systems are dedicated to a single task, we are going to reach the point where we have to start pairing them with complementary ones to get more useful results.

...

Which is one of many reasons I do actually support this trend at the end of the day. It's not just about creative fields: if we are going to start relying on machines for tasks like medicine... which we should, they are substantially better at it than we are... they need to communicate in ways that humans understand.

Artistic representation and real language are both vital for said interface operations.

Protagonist506 from Oregon Since: Dec, 2013 Relationship Status: Chocolate!
#302: Dec 15th 2022 at 8:45:30 PM

An AI art critic is probably far, far more difficult than an AI artist.

The thing with an AI artist is that they don't actually need to understand what they're drawing per se. They simply need to look at strings of letters and then figure out what arrangement of shapes fits them.

For example, when you tell it to draw a "red-haired woman", the AI doesn't know what a red-haired woman is. It simply knows that that string of characters probably means you're looking for that specific arrangement of shapes. The words you give it don't mean anything, nor does the arrangement of shapes. What's important to the AI, though, is that they're connected to each other in some way.
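
From the outside, that really is the whole interface: a string goes in, pixels come out. As a rough sketch, assuming the Hugging Face diffusers library and a Stable Diffusion checkpoint (the model name here is only an example, not anything specific being discussed in the thread):

```python
# Minimal sketch: prompt string in, image out. Assumes the Hugging Face
# "diffusers" library and a Stable Diffusion checkpoint (example name only).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint, swap in whatever you have
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # needs a GPU; use "cpu" (and drop float16) otherwise

# The model never "knows" what a red-haired woman is; it has only learned which
# arrangements of pixels tend to co-occur with this string of tokens.
image = pipe("a portrait of a red-haired woman").images[0]
image.save("red_haired_woman.png")
```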

By contrast, an AI Art Critic would require actual, meaningful understanding of the concepts that the art is depicting.

Leviticus 19:34
Imca (Veteran)
#303: Dec 15th 2022 at 9:05:40 PM

Yes and no, it does understand what "red", "hair", and "woman" are... at least as well as a pile of math with no real-world connection can understand these concepts... It's honestly better at object recognition than we are at this point, it just lacks the whole "real world experience" angle that we have, and the depth of experience that comes with it... it's an allegory-of-the-cave problem. Even with us, we are actually incapable of coming up with original concepts, we just remix the ones we have... people just don't appreciate how much information you get just from, you know, living life... and thus the absolute wealth of experience that we draw on and remix.

There is no reason one shouldn't be able to make an AI Art Critic; there is nothing about humans that makes us uniquely different from machines... we aren't as special as we like to think we are... we just take offense to this being pointed out...

Now I will agree, however, that it is a more difficult task than making an AI "artist", but it's less because machines don't understand the concepts of what they're making, and more that judging artistic merit is a very, very subjective experience... Just see the gulf of difference you can get even between human critics and human audiences on various movies, games, and other pieces of media.

Just because something follows all the rules that critics judge on, even human ones, doesn't necessarily mean it's good.

Edited by Imca on Dec 15th 2022 at 9:16:22 AM

RedSavant Since: Jan, 2001
#304: Dec 15th 2022 at 11:35:46 PM

I think it's a little early to suggest that AI has any meaningful understanding of concepts. And when it comes to abstract concepts, we can say for sure that AI can only process whether something visually resembles other instances tagged with that abstract concept.

That's a pretty significant difference.

It's been fun.
Smeagol17 (4 Score & 7 Years Ago)
#305: Dec 15th 2022 at 11:39:13 PM

This depends on what you mean by a critic. Something like ChatGPT can, I guess, fairly easily take in pictures and output text close to what human critics get paid for. Whether it would be "real" criticism, on the other hand... (But, as Richard Feynman, IIRC, said (from memory): you should let people who say machines can't think keep talking, as pretty soon they will arrive at the conclusion that humans can't either.)

Edited by Smeagol17 on Dec 16th 2022 at 11:46:50 PM

Imca (Veteran)
#306: Dec 15th 2022 at 11:58:18 PM

[up][up] Not really, there isn't any meaningful difference between the two; if you can tell X is an X and Y is a Y, you understand what X is and what Y is... it's not something that gets deeper than that. We like to think it does, because our ability to actually articulate concepts is fundamentally limited, which is why AI is needed as a technology anyway.

If we could properly articulate our understanding of concepts, a rigid program would work just fine... but at a certain point it just becomes impossible for us to do so, so we go "fuck it" and start teaching the math instead.

And considering the machines have better object recognition than we do, as can quite easily be demonstrated, and as has continually annoyed CAPTCHA designers... we can safely say it understands what the objects are, as much as the person who lives in the cave can understand what the shadows are, at least.

[up] I wish we would just skip to that conclusion already; these last few weeks have been draining, trying to explain to people who keep regurgitating misinformation how the stuff actually works. (It's not a fancy copy-paste algorithm that is cutting images into confetti and stitching them together like a Frankenstein monster of picture corpses; rather, the machine takes the information it's given and creates original output based on what it has come to conclude things are, using a dataset that is heavily limited because the amount of data we need with our primitive training methods is absurd... none of the original data remains in the program after it is made, just like none of the original chemical remains in a homeopathy mixture.) It's both the most use I have got out of my degree since I got it, but also incredibly frustrating, because none of my six years ever covered how to be a teacher. :/

Though by critic I meant something simpler in theory, more complex in practice....

Just figuring out whether the output is "good" or not.

Edited by Imca on Dec 16th 2022 at 12:18:23 PM

RedSavant Since: Jan, 2001
#307: Dec 16th 2022 at 12:22:50 AM

[up]Okay, sure, but how do you define to an AI what tragedy is, for instance? Serious question. And I'm aware that that falls into "philosophical zombie" questions (if you show an AI a tragic story and it produces tears because that's what the heuristic says, is that different from a human crying because the story is sad?), but there is a concrete difference.

It's been fun.
Florien The They who said it from statistically, slightly right behind you. Since: Aug, 2019
The They who said it
#308: Dec 16th 2022 at 12:27:05 AM

[up] Can you say what you think the concrete difference between those things is, for clarity?

RedSavant Since: Jan, 2001
#309: Dec 16th 2022 at 12:39:20 AM

[up] To be transparent, I'm neither a philosopher nor a computer scientist, so it's entirely possible that my input isn't actually useful here and I'm just banging rocks together. I can't explain what the difference is in scientific or elegant terms. But unless we're arguing now that the ability to recognize input and produce output qualifies as "true" artificial intelligence - which I was under the sincere impression was something that computer scientists have been pushing back against - then the difference between a human's emotional reaction to emotional content and an AI saying "this text is tagged as a tragedy, therefore I will produce tears at the end" should be fairly obvious, right?

In (an attempt at) more scientific terms, and again, correct me if I'm misunderstanding, but computers don't have "reading comprehension." They don't form attachments to characters or make suppositions about future events, or speculate about characters' mental states or internal monologues. In essence, an AI relies on metadata (tagging, observed audience reaction, or maybe incidence of "sad" words, who knows) to be told what the proper response is. Is that right?

I mean, I could write a line of code in Google Sheets that says, if cell A1 contains the text "tragedy," then cell C1 should say "A tragic masterpiece. I haven't stopped crying. Truly a love story for the ages," but that doesn't mean the line of code has had an emotional response.
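
That spreadsheet rule, ported to Python just to make it concrete (purely illustrative; nothing here is feeling anything, it's a fixed lookup firing on a tag):

```python
# A hard-coded "review" rule: it reacts to the tag "tragedy" with a canned response.
# Illustrative only; the point is that a rule firing is not an emotional response.
def canned_review(genre_tag: str) -> str:
    if genre_tag == "tragedy":
        return ("A tragic masterpiece. I haven't stopped crying. "
                "Truly a love story for the ages.")
    return "No comment."

print(canned_review("tragedy"))
```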

Edited by RedSavant on Dec 16th 2022 at 12:48:59 PM

It's been fun.
Smeagol17 (4 Score & 7 Years Ago)
#310: Dec 16th 2022 at 12:49:46 AM

[up][up][up][up]That "critic", I guess, is already part of the program (at least in training). It is just not very good yet.

RainehDaze Nero Fangirl (4 Score & 7 Years Ago)
Nero Fangirl
#311: Dec 16th 2022 at 12:57:55 AM

[up][up] The thing that raises all the questions is that state-of-the-art AI doesn't have such clear-cut rules in how it forms links. You can feed something in and see what connections get activated, and what the weights are (but actual deployed networks have too many nodes and connections to usefully visualise), but you can't draw a link from 'sad words' to 'output tragedy'. It's at the point where we train a network on a corpus of [whatever medium we have], tell it what the appropriate association is, and over time it develops a complex internal representation to differentiate genres and output a probability for each of them.

About the only thing separating this from a human response is it doesn't have its own emotional reaction to train it on what constitutes each genre. But in that regard, it's not necessarily meaningfully separated from actual attempts at literary classification.
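
For what it's worth, the training setup described above looks roughly like this as a toy sketch (an assumed bag-of-words genre classifier in PyTorch, nothing like a real deployed network):

```python
# Toy sketch of the "tell it the association, adjust until it fits" loop.
# Pure illustration: a tiny bag-of-words genre classifier, not a real system.
import torch
import torch.nn as nn

GENRES = ["tragedy", "comedy", "romance"]
VOCAB_SIZE = 5000  # made-up size for the example

model = nn.Sequential(
    nn.Linear(VOCAB_SIZE, 128),
    nn.ReLU(),
    nn.Linear(128, len(GENRES)),  # one score per genre
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(text_vector, genre_index):
    """text_vector: word counts for one story; genre_index: the label we assert."""
    scores = model(text_vector)                      # the network's current guess
    loss = loss_fn(scores.unsqueeze(0),              # how wrong that guess was
                   torch.tensor([genre_index]))
    optimizer.zero_grad()
    loss.backward()                                  # trace the blame through the weights
    optimizer.step()                                 # nudge the weights toward the label
    return loss.item()

# One step with stand-in data; repeat over the whole corpus.
training_step(torch.rand(VOCAB_SIZE), GENRES.index("tragedy"))

# After many such steps, softmax over the scores gives a probability per genre:
probs = torch.softmax(model(torch.rand(VOCAB_SIZE)), dim=-1)
```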

Imca (Veteran)
#312: Dec 16th 2022 at 1:31:21 AM

@RedSavant: Okay, to summarize this, and I do mean this with, like... no ill intent, just to be clear before we start: I 100% don't mind the more philosophical side of debates, because it is more open to interpretation than a disregard of facts is.

This is less of a fundamental problem and more a problem of current technology at the moment... computers as they exist lack the depth to "feel" emotion, yes... but this is likely more a result of a limited set of inputs and insufficient complexity than anything else... There is no way to know for sure if something else is really "aware" or not, to be honest... we can't even make that conclusion for other people, although only a narcissist or a racist would ever think that they're the only aware entity on the planet.

Our best guess, from what can be studied, is that awareness arises naturally as the complexity of a system reaches a critical mass, which would mean that, in theory at least... someday machines will be "aware"... Will we ever be able to tell for sure? Honestly, probably not... but given that throughout history humanity has justified untold acts of cruelty by just assuming that only their group was aware, it's a topic that, personally at least, I feel is better to err on the side of caution on... and I have come to the conclusion that if it reaches that point in my life, I will start asking myself the ethical questions of a machine's awareness when it asks me unprompted questions about existence; until that point, it is either not complex enough to treat as an entity... or too alien for me to ever pass my own judgment on... Or both.

At the end of the day there is just... nothing fundamentally special about us that can't in theory be replicated at some point, and that does actually include awareness and the ability to feel. At the moment, though... even by the most optimistic guesses we're 20 years off from having to answer these questions, and really probably more looking to the 2060s to 2070s for when they have to start being answered.

The reason for the push-back is twofold. First and foremost is that humans have a tendency to anthropomorphize everything; we form bonds with our smartphones, assign them names and personalities, but at the end of the day, current programs are decidedly not... that. We don't even have software with the complexity of a lizard yet, let alone anything that can be considered "alive".

There is also a second bit of push-back on the topic of, even if we do make a program that is sapient, whether it will be something that is human, and to be honest, the answer to this one is probably a no. It won't think like we do, it won't act like we do, it won't be "human", but that isn't the same as not being aware or having feelings...

....

There is also substantial push-back on the latter from BIOLOGISTS even, who get fed up with humans' insistence on applying human traits to animals. They aren't making an argument that animals are incapable of thought or feeling... quite the contrary, most are quite involved with saying that we should treat animals better... and I honestly agree with them... it's just that they're animals; their little brains don't work like ours, they have different feelings, and different thought processes...

Much the same way a sapient machine would 40 years from now... it's a question of form rather than possibility.

In (an attempt at) more scientific terms, and again, correct me if I'm misunderstanding, but computers don't have "reading comprehension." They don't form attachments to characters or make suppositions about future events, or speculate about characters' mental states or internal monologues.

Ehhhhhhhhhhh... yes and no. Yes in that currently this is correct, no in that this will not always be correct... Training them for long-term thought is a much tougher ask than training for short-term thought; it's harder to judge, and harder to give feedback on... and just all-around harder. It won't be impossible forever, but at the moment it is indeed out of reach... though so was recognizing that a bird was a bird just a short decade ago.

Personally I don't think they will ever form attachments to characters unless we make a general intelligence... because, well... until that point they won't be an aware entity... but long-term thought and suppositions about future events are both possible and even goals within the field.

Internal monologue is playing out said suppositions about future events as a possibility, so I am also going to throw it into the "yes and no" camp because yes, it will be possible to do the same thing, but no I don't imagine it would play out the same unless an entity is aware, and our current systems are still decades away from that.

In essence, an AI relies on metadata (tagging, observed audience reaction, or maybe incidence of "sad" words, who knows) to be told what the proper response is. Is that right?

This is, however, 100% true. It is also, however, exactly how humans learn, just with a data pool of immeasurable proportion: we learn because we see an object, observe its reactions to other objects, and form connections between it and other concepts in our neural pathways... we learn what a dog is by observing dogs, being told that it is a quadruped, and drawing comparisons between it and other quadrupeds.

This is comparable to how some machines learn ("bottom-up" AI specifically); it's just... you're dealing with such an information gulf that it can't even be put into words... a baby has the entire world at its disposal, a machine only has what you can feed it... there is a reason I keep using the allegory of the cave, because bluntly it is the best way to understand the problem.

[up] Honestly this post is a much more condensed and just as accurate summary of the situation, and why I often say "we don't know" when it comes to how something works...

It's just, well... the philosophical aspect of this stuff is actually an interesting topic for me.


TL;DR: Feelings and awareness are highly suspected to originate from the complexity of a system, so there is nothing that should make them impossible to replicate, but we are decades away from reaching the level of complexity needed.

Edited by Imca on Dec 16th 2022 at 1:43:14 AM

RedSavant Since: Jan, 2001
#313: Dec 16th 2022 at 1:38:57 AM

[up]Well, I think that sums up my general problem with the question currently. I'm, again, not a computer scientist or a philosopher, so I can't (reasonably) tell what arguments are being made as a result of the field as it stands now, or from people extrapolating to what will eventually, theoretically be possible.

If the difference between human intelligence and computer intelligence is one of scale and complexity rather than innate qualities, then sure, I accept that. But that feels like saying that an abacus is no different from a supercomputer (kind of an ironic simile, yes), you know? It's true, but it feels like people are retroactively giving the current generations too much credit based on what theoretical future AI might be capable of.

Which doesn't make the current examples any more impressive or intimidating, mind. But to use the comparison you brought up, if I recite Shakespeare to a turtle it's not really going to get anything out of the experience.

Edited by RedSavant on Dec 16th 2022 at 1:41:12 AM

It's been fun.
RainehDaze Nero Fangirl (4 Score & 7 Years Ago)
Nero Fangirl
#314: Dec 16th 2022 at 1:48:20 AM

The very big distinction between AI as it's used and humans is that with AI we can turn the training off (and we prefer to; a network that you can run on a mobile phone still takes hefty parallel computation to train even a single step... also, more training can make it worse), whereas humans can't experience something without at least minor changes somewhere in the brain. We can cut the feedback.
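
In PyTorch terms, for example, "cutting the feedback" is just a few lines once training is done (the tiny model here is only a stand-in for an already-trained network):

```python
# Rough illustration of turning the training off: after deployment the weights
# are frozen, so new inputs change nothing in the network.
import torch
import torch.nn as nn

# Stand-in for an already-trained network (illustration only).
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))

model.eval()                        # switch off training-time behaviour (dropout, etc.)
for p in model.parameters():
    p.requires_grad = False         # freeze the weights: no more learning

with torch.no_grad():               # run inference without tracking gradients
    output = model(torch.rand(10))  # the network responds, but nothing in it changes
```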

Mind, this isn't necessarily a bad thing, depending on your use case.

Imca (Veteran)
#315: Dec 16th 2022 at 1:53:22 AM

Shakespeare to a turtle is also a rather ironic simile, because you know the law that states that in order for something to be copyrightable, a human has to make it?

Yeah, that law was made because some asshole got the bright idea of teaching animals how to make art and then selling it... I use the term "asshole" because they abused them, and so I am glad the law came down on them. :/

But yeah, that is also a pretty good comparison, because indeed animals are capable of art, but at their level of complexity I also doubt they get anything out of the experience.

indigoJay from The Astral Plane Since: Dec, 2018 Relationship Status: watch?v=dQw4w9WgXcQ
#316: Dec 16th 2022 at 11:33:35 AM

how do you define to an AI what tragedy is, for instance? Serious question. And I'm aware that that falls into "philosophical zombie" questions (if you show an AI a tragic story and it produces tears because that's what the heuristic says, is that different from a human crying because the story is sad?), but there is a concrete difference.

Maybe this only makes sense to me, but I really dislike the idea that vibes-based emotional reactions are what makes human criticism valuable. My brain is wired a little differently, so I relate a lot more to the "producing tears because that's what the heuristic says" definition of tragedy than the nebulous "human" one. I think this has helped me identify why a lot of the criticism of AI art rubs me the wrong way. If people think AI is soulless and subhuman because it works on data instead of feeling, what would they think about me?

Moving back on topic: I sincerely hope that anyone whose art has been stolen takes legal action against the companies responsible.

Edited by indigoJay on Dec 16th 2022 at 2:34:13 PM

There is no war in Ba Sing Se.
Zendervai Since: Oct, 2009
#317: Dec 16th 2022 at 11:52:29 AM

I think there are two things to note regarding all this stuff.

1) If an artist does not want their art used for training data, it should not be. Full stop. I don't give a shit if the algorithm actually stores anything concrete.

2) The current A.I.s are operating on an extremely limited set of inputs, and their ability to make connections is restricted by said inputs. It's ridiculously unlikely for an art-producing AI to gain self-awareness, because it effectively has no framework in which to do so. It being able to do pattern recognition has nothing to do with whether or not it's actually aware of anything. It gets the input, strings together a bunch of data, and then outputs it. The AI isn't actually seeing the image, because it has no way to contextualize said image. Not for any esoteric reason, just literally because it's not actually designed to look at the result unless you plug it back in, and then the AI isn't going to make the connection because it's not designed to make said connection. ...unless of course there's metadata that says it came from the AI, but that's not proper recognition either.

You can't properly test any of these A.I.s because their input and output is so limited. You can't even come to the conclusion that it's a Chinese Room because a bunch of the output tends to be pretty obviously stitched together.

I think part of the problem here is that it's not really AI in the science fiction sense. It's just another iteration of the algorithms that do things like website moderation. A lot of AI researchers are pushing back against calling this AI art because of that. We're not quite there yet.

Edited by Zendervai on Dec 16th 2022 at 2:54:27 PM

DrunkenNordmann from Exile Since: May, 2015
#318: Dec 16th 2022 at 11:56:04 AM

1) If an artist does not want their art used for training data, it should not be. Full stop. I don't give a shit if the algorithm actually stores anything concrete.

And this is basically part of why a lot of artists hate AI art.

Because not only are the people "making" it - and no, typing a prompt into what's basically a generator does NOT make you an artist - telling them it's gonna make them obsolete, they're also stealing their art to "train" said AI.

Edited by DrunkenNordmann on Dec 16th 2022 at 8:56:38 PM

We learn from history that we do not learn from history
Zendervai Since: Oct, 2009
#319: Dec 16th 2022 at 11:57:26 AM

Yeah, there's been a bunch of examples of someone feeding an artist's entire gallery into an AI and telling it to output "new" art that's supposed to be indistinguishable from the original artist's work. And the original artist never consents to this and usually doesn't know it's happening until a bunch of forgeries show up, because the person with the AI doesn't usually say they used an AI.

And that's what doing that is. It's forgery. It's literally copying a style and pretending the new work is the product of the original artist.

Edited by Zendervai on Dec 16th 2022 at 2:58:05 PM

Ultimatum Disasturbator from The Wiggle Room (Old as dirt) Relationship Status: Who needs love when you have waffles?
Disasturbator
#320: Dec 16th 2022 at 12:19:21 PM

Art forgery has been a thing for centuries, but this is actually worse, and I think the people defending this don't realise that about the hill they're dying on.

have a listen and have a link to my discord server
RainehDaze Nero Fangirl (4 Score & 7 Years Ago)
Nero Fangirl
#321: Dec 16th 2022 at 12:23:51 PM

1) If an artist does not want their art used for training data, it should not be. Full stop. I don't give a shit if the algorithm actually stores anything concrete.

Of course, what we've promptly run into is the case of locking the stable after the horses have bolted. Nobody thought to come up with some sort of automated exemption or license to prohibit their art from being analysed by a computer (which is basically what the training step is: take a piece of art and whatever input you want to associate with it, see what your system generates and how different it is from the original, go back and adjust the weights so it gets more accurate in future) until after a bunch of these have already been trained, and it's rather impossible to go back and pull a body of work out of a trained model (you can't unsee something). And if it's not automatically detectable, you run the risk of nobody having the time to read every page note in a domain that uses thousands and thousands of inputs.
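
A toy version of that bracketed training step, just to make it concrete (an assumed PyTorch sketch; real generators are vastly larger and use fancier losses than a plain pixel difference):

```python
# Show the system the prompt, compare what it generates with the original image,
# and nudge the weights to reduce the difference. Illustration only.
import torch
import torch.nn as nn

IMAGE_PIXELS = 64 * 64
EMBED_SIZE = 32

generator = nn.Sequential(           # stand-in "art generator"
    nn.Linear(EMBED_SIZE, 256),
    nn.ReLU(),
    nn.Linear(256, IMAGE_PIXELS),
)
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)

def training_step(prompt_embedding, original_image):
    generated = generator(prompt_embedding)        # what the system produces right now
    loss = nn.functional.mse_loss(generated,       # how different it is
                                  original_image)  # from the original artwork
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                               # adjust the weights for next time
    return loss.item()

# One step with random stand-in data; in practice this repeats over the whole corpus.
training_step(torch.rand(EMBED_SIZE), torch.rand(IMAGE_PIXELS))
```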

And since analysing it like this is perfectly in line with the usual licensing of creative works, there was no reason for anyone to ask beforehand. A lot of image recognition stuff got trained on photos scraped off Flickr and the like, for instance, and I know for a fact that same collection was also used for things like autoencoders. The main difference is that the goal here was to create something generative rather than a classifier.

... although tbh, it seems kind of pointless except for the purpose of trying to prevent forgeries from people who can't do the training themselves (because the ones that can and are planning to do so anyway are likely to ignore your request all the same, it's not like they're respecting legal principles much anyway). Sort of an inbuilt assumption that one's art is too unique to ever be approximated from what they can train it on? I don't think it's really going to achieve anything even if you do establish a wide standard like that. It'll make people feel a bit better, sure, but practically speaking it's not going to matter.

DrunkenNordmann from Exile Since: May, 2015
#322: Dec 16th 2022 at 12:24:15 PM

[up], [up][up]

Honestly, the whole thing's shaping up to be the next big case of assholes trying to make money off other people's work after NFTs (remember that those also often involved stealing - sorry, "scraping" - other people's art off the internet to sell).

Edited by DrunkenNordmann on Dec 16th 2022 at 9:24:53 PM

We learn from history that we do not learn from history
Zendervai Since: Oct, 2009
#324: Dec 16th 2022 at 2:36:14 PM

Generally, art forgery like this is done after the artist is long dead.

DrunkenNordmann from Exile Since: May, 2015
#325: Dec 16th 2022 at 2:39:58 PM

Also, art forgery normally relies on convincing people that an artist made a piece, in order to make a lot of money off the sale.

AI-related art theft is pretty much the opposite - the people doing it essentially appropriate an artist's work for their own gain and try to claim all the credit (again, see how people using those art generators have the gall to call themselves artists).

It honestly reminds me of how people mint NFTs based on other people's art they don't even own and then think the art belongs to them.

We learn from history that we do not learn from history
