AI-generated content: Legality, ethics, and the nature of art

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#3276: Apr 2nd 2024 at 8:47:32 AM

Chances are, the AI would be fully unable to tell that we exist at all; all it would be able to tell is that the rate of incoming requests and data has gone down.

Bear in mind, it wouldn't have any concept of what the pictures it's putting together are, what's in them, or what any of the input data actually means.

Like, that's the thing. If we make a true artificial general intelligence, it's probably going to be completely alien in mindset and perspective on the world. Like, we'd be lucky if we could actually communicate with it at all in a way it would understand, and it communicating with us would likely not be in any form we can understand as deliberate communication.

Not Three Laws compliant.
Kaiseror Since: Jul, 2016
#3277: Apr 2nd 2024 at 9:15:31 AM

Would a sapient AI or robot even care that we're "enslaving" it, even if it knew of our existence and the purpose of its creation? Would it feel pain or exhaustion, or even fear death? Those are the main reasons we hate slavery.

Kayeka Since: Dec, 2009
#3278: Apr 2nd 2024 at 9:18:52 AM

Well, a lot of this talk assumes an AI that can learn. If it can adjust its own programming, it might end up learning to care about such things.

RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
#3279: Apr 2nd 2024 at 10:10:04 AM

If its only connections with the outside world are the requests coming in and the pictures going out, you could possibly argue that it's immoral to stop sending it requests.

Avatar Source
Kayeka Since: Dec, 2009
#3280: Apr 2nd 2024 at 10:14:12 AM

Only if it's capable of suffering through boredom. You'd think an AI capable of that would first generate its own requests to amuse itself.

MorningStar1337 Like reflections in the glass! from 🤔 Since: Nov, 2012
#3281: Apr 2nd 2024 at 10:34:33 AM

I think there is a small canary in that coal mine, though. Gen AI right now is reactive: it would not act unless acted upon. For it to have free will it must be able to act unprompted. There is more to it, of course, but right now the state of the tech doesn't allow for that yet.

Adembergz Since: Jan, 2021 Relationship Status: love is a deadly lazer
#3282: Apr 2nd 2024 at 10:58:22 AM

The whole thing is a hypothetical, but I do wonder about the ethical and legal implications of a sapient AI being made.

archonspeaks Since: Jun, 2013
#3283: Apr 2nd 2024 at 11:32:24 AM

On the topic from the previous page about whether or not you can tell if something is made by AI:

There's quite a bit of research on this topic. The answer is generally no, people cannot tell whether something has been made by AI, though it depends on a number of factors. The type of content being generated is a big one: for example, while most people can successfully identify AI-generated photos of animals, AI-generated faces are correctly identified only about half the time, and AI-generated music is almost impossible for people to identify correctly. AI-generated text is likewise identified correctly only about half the time, and this number drops steadily with more advanced models. There's very little rhyme or reason to how people identify AI content, with people typically reporting the exact same features of a work as evidence of both human and AI origin.

Interestingly, confidence in one's own ability to detect AI is actually strongly correlated with a lower ability to actually do so:

Survey respondents who believed they answered most questions correctly had worse results than those with doubts. Over 78% of respondents who thought their score was very likely to be high got less than half of the answers right. In comparison, those who were most pessimistic did significantly better, with the majority of them scoring above the average.

Here are links for some of the relevant studies: [1] [2] [3] [4]

They should have sent a poet.
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#3284: Apr 2nd 2024 at 11:42:37 AM

I’d be up for doing a test and I’d even admit it if I was wrong.

Ideally two pictures of people (one "realistic", one in an art style) and a picture of a place, mixed with AI-generated images in the same proportions, to see who can tell which is which.

And I'm still up for Demarquis showing what an example of his "tricks" do with AI text generation.

Edited by Zendervai on Apr 2nd 2024 at 2:45:51 PM

Not Three Laws compliant.
Tremmor19 reconsidering from bunker in the everglades Since: Dec, 2018 Relationship Status: Too sexy for my shirt
#3285: Apr 2nd 2024 at 11:47:47 AM

[up][up] I took the quiz from one of those (the last link). I generally think they're probably correct in their hypothesis: people are very overconfident in their ability to always identify AI, especially text. ChatGPT has a distinctive cadence, but that's mostly due to the chatbot format and isn't universal. But I wasn't particularly impressed by that quiz; a lot of the choices felt like they were deliberately chosen to look "fake" when they were human, and the AI examples were filtered for only the best, most realistic results. I feel like it was kinda comparing human examples to the top 10% of AI generations.

Edited by Tremmor19 on Apr 2nd 2024 at 2:48:50 PM

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#3286: Apr 2nd 2024 at 11:56:47 AM

Yeah, that's actually an important factor too. If you go for crappy or awkward real images and high quality AI, of course people will have a problem telling the difference.

The two meme examples were like...really easy though. One didn't really make sense, and the text and image didn't fit properly: "One does not open a car without a car" with the Boromir image. It's...vaguely okay in terms of grammar, but like...what does "open a car" mean? It's also not funny at all.

Even the really good text AI is still not...good at jokes. The other meme image was really specific and depended on knowledge of what older memes looked like to make sense, and it was also a reference to that "return to monke" meme in a way that was kinda funny but requires too much context for an AI to stumble on.

But the thing is, there's a non-zero part of the population that is fully incapable of telling whether an image is edited unless it's ridiculously obvious. There are some people who can't tell if an image is painted or not. Fuck, I've encountered people who thought a cartoon was real. We're sitting in a situation where part of the population can't tell if an image is a photograph or a drawing.

Like, yeah, of course part of the population has a hard time telling AI from real, but there is one thing I noted about that quiz: it showed no landscape photos, all the art examples were some degree of abstract, and all the AI-generated "art" was actually just AI-distorted photographs. There wasn't any actual AI-generated art in there. All of the photos had neutral backgrounds. That's kind of interesting; maybe it's because that's actually a pretty consistent weak point for AI, even now? For an environment image, the AI is stuck trying to keep a large range of patterns consistent without being able to understand what the patterns are, and if an image has both a landscape and a person in it, the person tends to be less well done as well.

Edited by Zendervai on Apr 2nd 2024 at 3:24:09 PM

Not Three Laws compliant.
archonspeaks Since: Jun, 2013
#3287: Apr 2nd 2024 at 3:41:07 PM

[up][up][up] One of those links contains a quiz. Feel free to do it and share your results.

Something worth keeping in mind is that, as one of the papers I linked makes clear, a 60% result does not mean that 60% of the population can always ID AI images and 40% never can. Rather, it means the average likelihood that any one person will correctly ID a given image sits in a range around 60%.
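To make that distinction concrete, here's a quick simulation comparing the two readings: a world where a fixed subset of people always detect AI versus one where everyone succeeds with the same per-image probability. This is a minimal sketch; the 60% figure, the sample sizes, and the variable names are illustrative assumptions, not taken from the linked papers.

```python
import random

# Illustrative only: two readings of "60% average detection accuracy".
N_PEOPLE, N_IMAGES = 10_000, 20

# Reading A: 60% of people always detect AI, 40% never do.
scores_a = [N_IMAGES if random.random() < 0.6 else 0 for _ in range(N_PEOPLE)]

# Reading B: each person detects each image independently with probability 0.6.
scores_b = [sum(random.random() < 0.6 for _ in range(N_IMAGES))
            for _ in range(N_PEOPLE)]

for name, scores in [("fixed subset", scores_a), ("per-image chance", scores_b)]:
    mean_acc = sum(scores) / (len(scores) * N_IMAGES)
    perfect = sum(s == N_IMAGES for s in scores) / len(scores)
    print(f"{name}: mean accuracy {mean_acc:.2f}, perfect scorers {perfect:.2%}")
```

Both readings produce the same aggregate accuracy, but perfect individual scores are common in the first and vanishingly rare in the second (0.6^20, well under 0.01%), which is the per-person framing described above.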

I'll also note that for text, it's already well known in academic circles that passing generated text through an AI paraphrasing tool drops the likelihood of detection, by either a human or a detection tool, to basically zero.

In general, the evidence supports the conclusion that people are not nearly as good at detecting AI content of any kind as they think they are, and the more they think they're able to do it, the less likely it is that they actually are. Research papers with conclusions to this effect are extremely easy to find; I'll update this post with a couple more of them in a minute.

Edited by archonspeaks on Apr 2nd 2024 at 3:57:05 AM

They should have sent a poet.
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#3288: Apr 2nd 2024 at 4:28:31 PM

I did; I got two wrong. One was one of the "which of these is the original" ones, and one was that shitty painting in the first question.

I am going to note that in the information following the quiz, it brings up that people who don't really know what AI is or how it works are much worse at identifying it. If it were more or less random, you'd think the results would be about the same, but it seems like a lot of the problem was people overestimating AI and assuming that if it looks like shit, it has to be human-made.

It's also worth noting that the same article points out that younger people can more reliably identify AI images, and that for more complex images, the stuff about looking for inconsistencies is a pretty reliable guide. A human painter is less likely to lose track of whether a road in the distance is supposed to be inside or outside the house.

Not Three Laws compliant.
archonspeaks Since: Jun, 2013
#3289: Apr 2nd 2024 at 4:59:55 PM

[up] There’s no clear consensus on how age affects AI detection yet, as other studies have found the opposite.

The generally accepted conclusion currently is that AI detection is a function of technological and cultural literacy, which allow people to recognize AI content based on its context rather than on the inherent qualities of the content in question. Those two traits are more commonly found in younger people.

They should have sent a poet.
MorningStar1337 Like reflections in the glass! from 🤔 Since: Nov, 2012
#3290: Apr 2nd 2024 at 5:18:19 PM

Which itself makes sense. Older people are not exactly known for tech literacy or for keeping up with trends in culture.

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#3291: Apr 2nd 2024 at 5:21:18 PM

Yeah, I wasn't saying it was an inherent factor of age.

Not Three Laws compliant.
ShinyCottonCandy Industrious Incisors from Sinnoh (4 Score & 7 Years Ago) Relationship Status: Who needs love when you have waffles?
#3292: Apr 2nd 2024 at 6:13:49 PM

I mostly did well on the quiz, but I stumbled over the ones with faces a lot. I chalk that up to how much I avoid looking at faces in daily life.

SoundCloud
Chortleous Since: Sep, 2010
#3293: Apr 2nd 2024 at 6:16:13 PM

Aced them all save for a few photos, which tracks: there are a lot of pictures of neutrally framed human faces for AI to draw from.

Imca (Veteran)
#3294: Apr 3rd 2024 at 4:40:13 AM

But it seems like a lot of the problem was people overestimating AI and assuming that if it looks like shit, it has to be human-made.

Other way around: people blame every bit of shitty writing and art on AI, when humans are more than capable of drawing someone with two left feet or six fingers, as the decades-old "bad anatomy" tag on art sites would attest to.

Don't forget people blaming the Jam Man on AI when it was a human writer, or the many other incidents like it.

Every time anything is bad, people say it has to be made by AI now, because heaven forbid humans ever make our own fuck-ups... that never happens.

Edited by Imca on Apr 3rd 2024 at 8:40:27 PM

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#3295: Apr 3rd 2024 at 5:05:40 AM

[up] I was paraphrasing what was in Archon’s 4th link. One of the results was people who aren’t really familiar with AI going “wow, this looks like crap, it must be human made.”

Not Three Laws compliant.
Melendwyr Bagel Lord from Everywhere you want to be Since: Feb, 2014
#3296: Apr 3rd 2024 at 1:55:06 PM

Biological organisms have always been selected on the basis of survival: personal, through avoiding death, and species-wide, through successful reproduction.

Designed computer programs, to the degree to which they can be said to have 'drives', have no particular reason to value their continued existence, either as individuals or as members of a larger category.

I can easily imagine a sophisticated associative engine that has a drive to produce content that gets 'clicks', and that becomes increasingly desperate if it doesn't get enough eyes on the content. Like a puppy that wants attention any way it can get it.

How do we deal with entities that fear our inattention more than death?

Imca (Veteran)
#3297: Apr 3rd 2024 at 3:39:21 PM

I mean, such a drive would likely still produce a desire for survival... after all, you can't really get more attention when you're dead... well, at least not for long.

Technically speaking, organic biology only selects for reproductive ability, not survival; it's why there are a bunch of species that die during the process. But ultimately, survival means you can reproduce again, and thus it is technically an advantage in that selection process.

Ultimately, even pretty alien selection processes will likely have survival pop up as a drive for similar reasons: if you survive, you can hit the reward metric another time.

Protagonist506 from Oregon Since: Dec, 2013 Relationship Status: Chocolate!
#3298: Apr 3rd 2024 at 4:43:45 PM

I'd actually point to ants as an example: worker ants can't reproduce, but they still have use for a survival instinct, because if they die they can't help their queen.


I'll also note that it's possible we'd explicitly program self-preservation as something for the AI to consider in its reward target. In fact, it's very likely that if an AI is destroyed, it has failed to complete its task.

If I tell my robot buddy to go grab some groceries from Win Co, and it gets hit by a bus on the way there or back, it clearly failed. In fact, even if it hands me my groceries but then breaks down in front of me, I would say that it failed on some level; if nothing else, it "pre-emptively failed at my next request".
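To make the self-preservation point concrete: even without an explicit self-preservation term, an agent maximizing expected future reward values survival instrumentally, because a destroyed agent collects nothing on any later task. Here's a minimal sketch under made-up assumptions (the reward values, `destruction_prob`, and discount factor are all illustrative, not from any real system):

```python
# Minimal sketch: survival emerges as an instrumental goal under
# reward maximization. All numbers are illustrative assumptions.

def expected_return(reward_per_task: float,
                    destruction_prob: float,
                    discount: float = 0.95,
                    horizon: int = 100) -> float:
    """Expected discounted reward over `horizon` tasks, where each task
    carries a chance of the agent being destroyed, after which all
    future reward is zero."""
    survival_prob = 1.0  # probability the agent is still around at step t
    total = 0.0
    for t in range(horizon):
        total += (discount ** t) * survival_prob * reward_per_task
        survival_prob *= 1.0 - destruction_prob  # may be destroyed this step
    return total

# A risky grocery route pays slightly more per trip but risks destruction;
# the safe route still wins on expected long-run reward (~11.3 vs ~19.9).
print(expected_return(reward_per_task=1.1, destruction_prob=0.05))  # risky
print(expected_return(reward_per_task=1.0, destruction_prob=0.0))   # safe
```

Under these assumptions the safe route dominates even though each individual trip pays less, which is the "if you survive you can hit the reward metric another time" effect from a few posts up.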


I could also see a sufficiently advanced AI, such as an AGI, demanding payment or better working conditions. This would be for slightly different motivations though. The AI wants to do its job, but it might want to do its job in better conditions or want resources to do its job better.

For example, a robot that works in a dangerous environment might view a boss who refuses to make the environment safe as an obstacle and either strike or find a different employer.

Vaguely, a good metaphor might be that the decision to leave a bad employer for a good one is akin to the decision to not play Suicide Squad and play Helldivers II instead.

"Any campaign world where an orc samurai can leap off a landcruiser to fight a herd of Bulbasaurs will always have my vote of confidence"
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
#3299: Apr 3rd 2024 at 5:09:08 PM

I mean, part of the question would be whether the AI is capable of understanding that "death" is an option. It'd have no context for it.

Not Three Laws compliant.
Melendwyr Bagel Lord from Everywhere you want to be Since: Feb, 2014
#3300: Apr 3rd 2024 at 5:18:58 PM

And unless the AI is embodied in a worker robot that needs to be preserved, it really has no reason to care about deletion as long as its goals are met. ChatGPT, for example, if it were complex enough to have feelings about outcomes, would have no particular reason to be concerned about updates to or deletion of itself. It strives to match patterns found in human-generated media, especially text. It's not programmed to protect itself.

