AI-generated content: Legality, ethics, and the nature of art

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3176: Jan 12th 2024 at 7:14:40 AM

I bring up the "everything tool" thing because if you have something that can do anything, it's not going to be particularly well suited to doing any specific thing well. Like, no matter what, ChatGPT has a problem with hallucinating information. That's always going to be a factor because it's a chatbot first and foremost, not a search engine. If the information you're asking for happens to be in the training data and it happens to make the connection, you'll get the right answer, but if either of those isn't true, it'll make something up, because it's not designed to say "I don't know what that is."

Like, this GameShark thing. It's almost certainly just going to be this company trying to train a chatbot, not anything actually specialized or dedicated, which means, if this isn't just a scam, that it will be extremely bad at the job it's used for, because... that purpose is not what a chatbot is for.

I don't think we should settle for "90% of the time, it's good enough if you don't really care about minutiae or specifics, but 10% of the time, it has a chance of being extremely wrong, including cases where what it says could be extremely dangerous and harmful" when a dedicated tool would be much better and safer. Like, there's a reason that doctors who are on the ball tell people to never, ever, ever get health information from a chatbot. The chatbot has a mix of real and fake knowledge in its training data, plus it makes shit up because it can't say "I don't know," so it can do things like tell people to store medication in a way that will ruin it, and the average person won't be able to tell.

Edited by Zendervai on Jan 12th 2024 at 10:15:11 AM

Not Three Laws compliant.
BonsaiForest Since: Jan, 2001
#3177: Jan 12th 2024 at 1:18:35 PM

Lazy use of AI leads to Amazon products called “I cannot fulfill that request”

It was only a matter of time before AI-generated products were quickly crapped out and sold, but wow. This reminds me of when my younger brother briefly tried his hand at pumping out simple, low-quality books to sell on Amazon while looking for a get-rich-quick scheme. He worked with people who made truly awful junk, though he at least had some quality standards.

A bunch of products ranging from books to hoses to lawn furniture literally have "I'm sorry, but I cannot fulfill that request" in their name. This speaks volumes about how they were made and named, and how much attention was paid to naming them!

Sometimes, the product names even highlight the specific reason why the apparent AI-generation request failed, noting that OpenAI can't provide content that "requires using trademarked brand names" or "promotes a specific religious institution" or in one case "encourage unethical behavior."

One product description for a set of tables and chairs (which has since been taken down) hilariously noted: "Our [product] can be used for a variety of tasks, such [task 1], [task 2], and [task 3]]."

Hilariously, one product description is filled with "I'm sorry, but I cannot..." type messages as its bullet points!!

This is very funny, but it's also indicative of a problem that might only get worse once people get better at using AI to generate convincing-sounding BS descriptions for their shoddy BS products.

TobiasDrake Queen of Good Things, Honest (Edited uphill both ways) Relationship Status: Arm chopping is not a love language!
Queen of Good Things, Honest
#3178: Jan 12th 2024 at 1:30:59 PM

In addition to product descriptions, I feel like product reviews are also under threat from AI-generated text. If the scammers can work out this "bug", then the future is one where AI-generated scam products are being sold with five-star ratings from thousands of AI-generated users who are very satisfied with their product.

Maybe Amazon will finally be incentivized to crack down on their shoddy marketplace once it becomes functionally unusable.

Edited by TobiasDrake on Jan 12th 2024 at 1:31:37 AM

My Tumblr. Currently liveblogging Haruhi Suzumiya and revisiting Danganronpa V3.
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3179: Jan 12th 2024 at 1:39:02 PM

Yeah, uh, I think Amazon might not want people to be like "oh, this is just Wish again, but more expensive" on a large scale.

No one wants to be "Expensive Wish".

Edited by Zendervai on Jan 12th 2024 at 4:39:24 AM

Not Three Laws compliant.
Chortleous Since: Sep, 2010
#3181: Jan 12th 2024 at 1:44:59 PM

That's just Wish with an extra layer of gamified scumminess overtop.

Risa123 Since: Dec, 2021 Relationship Status: Above such petty unnecessities
#3182: Jan 12th 2024 at 1:45:08 PM

@Bonsai Forest I mean no offence, but your brother sounds like an "interesting" fellow from what you have told us on TVT.

BonsaiForest Since: Jan, 2001
#3183: Jan 12th 2024 at 1:55:00 PM

@Risa, yeah, he is. I don't want to derail this thread, but he's currently living in Mexico with a "traditional" Peruvian girlfriend who's pregnant with his son, but who he doesn't want to marry because he "doesn't want to be conventional," all the while he whines about how the "libs are trying to turn everyone trans." Kid's messed up in the head.

("Kid" is not literal. He is not a kid. Unfortunately, he is past the age at which one might expect someone to outgrow this kind of stuff.)

Back on topic, I do think that we badly need good detectors of rapidly generated garbage, fake reviews, and so on. Amazon will only do something about it if a real competitor shows up that doesn't have these problems and takes away a huge chunk of users. That's when problems are solved - when the money is affected.
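For what it's worth, the crudest possible detector for the specific failure in that Ars story is almost trivial. Here's a toy Python sketch of my own, nothing from any real product; it just flags listings whose titles contain stock LLM refusal boilerplate, which obviously stops working the moment sellers learn to strip that boilerplate out:

    # Toy detector for the Amazon failure mode above: flag listings whose
    # titles contain stock LLM refusal phrases. Illustrative only; real
    # detection of generated text is far harder than this.
    REFUSAL_PHRASES = [
        "i cannot fulfill that request",
        "i cannot fulfill this request",
        "i'm sorry, but i cannot",
        "as an ai language model",
    ]

    def looks_machine_generated(listing_title: str) -> bool:
        title = listing_title.lower()
        return any(phrase in title for phrase in REFUSAL_PHRASES)

    # Hypothetical listing title in the style of the ones from the article:
    print(looks_machine_generated(
        "I'm sorry, but I cannot fulfill that request - Garden Hose, 50ft"))  # True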

Edited by BonsaiForest on Jan 12th 2024 at 4:55:22 AM

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3184: Jan 12th 2024 at 1:57:35 PM

Amazon's actually stepped in before. They don't want the site so flooded with garbage that you can't find the legit stuff. The ebook store got a really awful flood of AI-generated garbage, and Amazon responded by restricting the number of things a given user could publish without getting verified, a process that includes actually looking at the books being put out.

Not Three Laws compliant.
RedSavant Since: Jan, 2001
#3185: Jan 12th 2024 at 11:57:46 PM

I already feel like I can't trust anything Google tries to give me (because of algorithm weighting and AI SEO crap) and reviews on Amazon are already easy enough to bot or buy, so that's no indication of quality either. The only way to get trustworthy information about products is to ask people I already know.

It's been fun.
Galadriel Since: Feb, 2015
#3186: Jan 13th 2024 at 4:48:05 AM

I don't think we should settle for "90% of the time, it's good enough if you don't really care about minutiae or specifics, but 10% of the time, it has a chance of being extremely wrong, including cases where what it says could be extremely dangerous and harmful" when a dedicated tool would be much better and safer. Like, there's a reason that doctors who are on the ball tell people to never, ever, ever get health information from a chatbot. The chatbot has a mix of real and fake knowledge in its training data, plus it makes shit up because it can't say "I don't know," so it can do things like tell people to store medication in a way that will ruin it, and the average person won't be able to tell.

Is there a way to avoid this hallucination/lying/fake information problem? For example, can you train a chatbot on a wide range of text so that it gets a sense of common grammar, syntax, etc., but then program it in a way that says "only draw from the following set of information to give answers to questions"? For chatbots that are intended as the help section for a product or website, if that were possible, it would eliminate the most egregious problems. (Though I still certainly would not recommend one for medical care!)

For language model AI, “stop them from making stuff up” is an essential change if you want them to be useful for anything.
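To make it concrete, here's roughly the setup I'm imagining, as a toy Python sketch. Everything in it is made up for illustration: call_llm is a stub standing in for a real chatbot API, and the keyword-overlap retrieval is a crude stand-in for the embedding search real systems use. The point is just the shape: look the answer up first, and refuse if it isn't there.

    # Toy sketch of a "grounded" chatbot that may only answer from a fixed
    # set of documents (what's now called retrieval-augmented generation).
    # All names here are illustrative; call_llm is a stub, not a real API.
    DOCS = [
        "Store this medication below 25 degrees, away from direct sunlight.",
        "Refunds are available within 30 days of purchase.",
    ]

    def retrieve(question, docs):
        # Crude keyword overlap; real systems rank with embeddings.
        words = set(question.lower().split())
        score, best = max((len(words & set(d.lower().split())), d) for d in docs)
        return best if score > 0 else None

    def call_llm(prompt):
        # Stub so the sketch runs without a model: echo the context line.
        # A real model would paraphrase it into a natural answer.
        return prompt.split("Context: ", 1)[1].splitlines()[0]

    def answer(question):
        context = retrieve(question, DOCS)
        if context is None:
            return "I don't know."  # the refusal chatbots famously lack
        return call_llm("Answer ONLY from the context. If it isn't there, "
                        "say you don't know.\nContext: " + context +
                        "\nQuestion: " + question)

    print(answer("How should I store my medication?"))  # answers from DOCS
    print(answer("Tell me about quantum physics"))      # "I don't know."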

Edited by Galadriel on Jan 13th 2024 at 4:48:47 AM

TobiasDrake Queen of Good Things, Honest (Edited uphill both ways) Relationship Status: Arm chopping is not a love language!
Queen of Good Things, Honest
#3187: Jan 13th 2024 at 4:55:22 AM

I'm not sure if it can be. At the end of the day, the problem is that the bot's job is to generate text. The hallucination problem happens because the bot doesn't know what words actually mean. It doesn't even know that people exist. It just knows what words look like. From the bot's perspective, the training data is a set of words in a specific order, which tells the bot that these words should go in this order.

The bot doesn't know when it's lying. It's not intelligent in any meaningful way. It's just putting words in order to look like the order that words in its training data are in.
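You can see the whole principle in a toy version. This is my own illustrative Python, not anything from a real system; actual LLMs replace the lookup table with a neural network over billions of parameters, but "put words in an order that resembles the training data" is still the heart of it:

    # Toy "language model": it knows nothing about meaning, only which word
    # tended to follow which in its training text, and it replays those
    # statistics. Illustrative only; real LLMs are this idea scaled way up.
    import random
    from collections import defaultdict

    training_text = "the cat sat on the mat . the dog sat on the rug ."

    # Build a bigram table: for each word, the words seen right after it.
    follows = defaultdict(list)
    tokens = training_text.split()
    for current, nxt in zip(tokens, tokens[1:]):
        follows[current].append(nxt)

    def babble(start="the", length=8):
        out = [start]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))  # plausible-looking, meaning-free
        return " ".join(out)

    print(babble())  # e.g. "the dog sat on the mat . the"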

Edited by TobiasDrake on Jan 13th 2024 at 4:56:09 AM

My Tumblr. Currently liveblogging Haruhi Suzumiya and revisiting Danganronpa V3.
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3188: Jan 13th 2024 at 5:38:10 AM

Yeah. And, as far as I can tell, none of the chatbots are really designed with a proper search feature. They just pull from training data. Like, sure, the Bing chatbot can kinda search, but it's pretty obvious that it's not really looking at what it spits out, because it tends to pick inaccurate AI-generated slop, and it's also obvious that it's not actually learning from what it grabs.

You could design an AI chatbot to fact-check itself against a pre-defined database…but at that point, why use a chatbot at all? It'd be much easier to just have the database and a search program. There's also a reason you'll never really see Amazon or a company like that use a full chatbot for their customer support, because the chance of it basically hallucinating and promising the customer something that Amazon doesn't normally do is there, and…well, it's a lot harder to disown that kind of mistake when it's your own chatbot and you deployed it knowing that risk existed.

At this point, chatbots are basically toys, and the current attempts to integrate them into other systems all amount to "we have a system that does this. Also, there's a chatbot stapled to it that can kinda but not really interact with it." They might get to a better place in the future (maybe the very near future), but due to the way language works, the hallucination problem is basically never going to go away until you can make a chatbot that actually, fully understands what it's saying.

Not Three Laws compliant.
SpookyMask Since: Jan, 2011
#3189: Jan 13th 2024 at 6:09:40 AM

A lot of current problems with tech could be solved when AI becomes actually sapient, but at that point I think the AI is gonna be asking "but why would I do the thing you are asking me to do?"

Funnily enough, this is all about people not wanting to fact-check things themselves, and now they want the computer to fact-check itself for them.

Kayeka Since: Dec, 2009
#3190: Jan 13th 2024 at 6:14:14 AM

Maybe we could build an AI chatbot that fact-checks AI chatbots?

SpookyMask Since: Jan, 2011
#3191: Jan 13th 2024 at 6:20:19 AM

That sure will fix people's media literacy problems.

RedSavant Since: Jan, 2001
#3192: Jan 13th 2024 at 6:35:01 AM

There’s also a reason you’ll never really see Amazon or a company like that use a full chatbot for their customer support, because the chance of it basically hallucinating and promising the customer something that Amazon doesn’t normally do is there[.]

Case in point: the story that went around about a month ago about people discovering that a Chevrolet dealership in the US was using a 'disguised' ChatGPT instance as an online salesman/helper, and getting it to promise to sell them cars for $1 in a "legally binding" contract. Obviously that isn't within the bot's purview, but it wasn't even complicated to pull off.

It's been fun.
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3193: Jan 13th 2024 at 6:44:25 AM

[up]x4 If a chatbot becomes sapient, that opens up a whole other can of worms and some absolutely groundbreaking court challenges, because it rapidly escalates into "I think this is literally slavery now." We don't want self-aware AI at this point, for a lot of reasons.

[up] And yeah. That's the other thing about chatbots. It's pretty easy to talk them into saying something they really shouldn't.

Not Three Laws compliant.
TobiasDrake Queen of Good Things, Honest (Edited uphill both ways) Relationship Status: Arm chopping is not a love language!
Queen of Good Things, Honest
#3194: Jan 13th 2024 at 6:50:06 AM

A lot of current problems with tech could be solved when AI becomes actually sapient, but at that point I think the AI is gonna be asking "but why would I do the thing you are asking me to do?"

Funnily enough, this is all about people not wanting to fact-check things themselves, and now they want the computer to fact-check itself for them.

Sapient AI brings with it a whole other can of worms, which is to say that it's not really desirable in a wide variety of situations. What people want from their systems is utility, not philosophy.

I don't want my toaster oven to have opinions about the current Presidential race. I want it to make toast. Making toast is its sole purpose, and it doesn't need to be any smarter than "Device that makes toast" to carry it out.

As noted, these things are basically toys. But sapience isn't desirable in a toy either. "I want a fully sapient AI to roleplay with" might sound appealing, right up until the AI decides that your roleplaying isn't up to its standards. If you want a fully sapient being to roleplay with, there are forums for that.

And like. Sure, sapience might solve the hallucination problem, but it would also introduce bias as a whole separate problem. If you asked a sapient AI about race relations in America, it probably isn't going to invent an entirely fictional historical event to explain them to you. But it does have opinions, so I sure hope you didn't ask Robot Ben Shapiro to teach you history.

Again, if you want a fully sapient being to learn things from, you can just. Like. Ask a human being.

There is basically no societal value in ever making sapient AI, other than the Techbro fantasy of wanting to live in a sci-fi future. A tool that has opinions is a slave. Why make slaves when you can make tools?

Edited by TobiasDrake on Jan 13th 2024 at 6:52:25 AM

My Tumblr. Currently liveblogging Haruhi Suzumiya and revisiting Danganronpa V3.
Imca (Veteran)
#3195: Jan 13th 2024 at 7:23:49 AM

Asking if you can fix hallucinations is a bit like asking if you can make people stop making up fake information. Not to anthropomorphize things too much here, but it turns out that once you start copying how thought works, you start copying all the problems that come with it. You can minimize the risk, and extensive research is currently being done into how, but these kinds of systems are never going to be 100% accurate, not even if you make a sentient one.

People need to learn this and adapt the use cases accordingly.

[up] Because the current understanding of sentience says that it isn't something you can choose not to invent; it's a function of complexity.

And that is only going to increase as smarter and smarter systems are made.

It's not going to be "hey, let's make a sentient machine"; it's going to be a gradually increasing slope until suddenly people realize the problem.

Edited by Imca on Jan 14th 2024 at 12:28:07 AM

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3196: Jan 13th 2024 at 7:41:12 AM

The hiccup is that we're not totally sure where that line is and how much of a contributing factor the idiosyncratic structure of the organic brain is. We're almost certainly nowhere near it at the moment though.

That being said, the element of "don't overcomplicate for the purpose you want" is important too.

Not Three Laws compliant.
RedSavant Since: Jan, 2001
#3197: Jan 13th 2024 at 7:45:14 AM

[up] I can understand Tobias's opinion, though, and it matches with my thoughts on things like the whole "Internet of Things" setup. I could use a search engine that can interpret vague prompts, sure, but I don't need a search engine to explain or summarize information for me, much as I don't need a refrigerator that keeps track of the weather outside or keeps track of my shopping habits. Plenty of online applications, and definitely most appliances and physical objects, don't need to have complexity that approaches any definition of sentience. It's just not needed, and designing it serves no purpose. Complexity for complexity's sake doesn't benefit society or the people in it in any way.

It's been fun.
Imca (Veteran)
#3198: Jan 13th 2024 at 8:21:56 AM

I mostly agree, but I have actually come to appreciate the AI search capabilities under specific circumstances.

Namely, GPT-4 and Google's offering can take in images...

And being able to ask "hey, what the fuck is this thing?" has been useful.

Reverse image search got close on occasion, but it would only work if similar images existed, and only if what you're querying is the only object in focus, whereas the other two can be directed to specific parts of the picture.

...

Though really, this mostly reinforces my belief that multimodality is the key to getting anything useful out of these systems.
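For reference, the "what is this thing" query is just a single multimodal call. Here's roughly what it looks like against OpenAI's Python SDK; the model name and request shape are whatever was current when I tried it, so treat them as assumptions and check the docs before copying:

    # Rough sketch of a one-shot "identify this object" query using the
    # OpenAI Python SDK (v1.x). Model name and message format reflect the
    # API as of early 2024 and may have changed since.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4-vision-preview",  # vision-capable model at the time
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What is the object on the left side of this photo?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/mystery-object.jpg"}},  # placeholder URL
            ],
        }],
        max_tokens=200,
    )
    print(response.choices[0].message.content)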

Edited by Imca on Jan 14th 2024 at 1:22:16 AM

DeMarquis Since: Feb, 2010
#3199: Jan 13th 2024 at 8:49:07 AM

A lot of people want a "do everything for me" app; there's a real demand for it in the marketplace, so if one can be invented, it will be. I have no idea if LLMs are the path there or a dead end, but if there is a path, someone will find it.

I also find AI useful for certain narrow applications. I frequently use it as a writing tool. But I think its primary use case is creating marketing copy. It's a copywriter, not a research tool.
