AI-generated content: Legality, ethics, and the nature of art

Imca (Veteran)
#3201: Jan 13th 2024 at 12:28:59 PM

The conversation was about it being added to things you run into day to day. While it's very useful for research, most people aren't doing that.

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3202: Jan 13th 2024 at 12:54:40 PM

Like, someone I know proposed putting a chatbot in self-checkouts at the store.

He had a really hard time explaining how that was supposed to work and what benefit it would have; he just fixated on the "we should put chatbots in everything" shit.

Not Three Laws compliant.
Protagonist506 from Oregon Since: Dec, 2013 Relationship Status: Chocolate!
#3203: Jan 13th 2024 at 12:56:30 PM

Yeah, I have to agree putting chatbots in self-checkout doesn't make a ton of sense, mostly because you don't really need to "talk" to a machine like that generally.

"Any campaign world where an orc samurai can leap off a landcruiser to fight a herd of Bulbasaurs will always have my vote of confidence"
RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
Figure of Hourai
#3204: Jan 13th 2024 at 1:12:28 PM

Ah, so on top of the checkout getting mad at me because my bananas don't weigh the right amount, it can backtalk, too.

Avatar Source
MorningStar1337 Like reflections in the glass! from 🤔 Since: Nov, 2012
Like reflections in the glass!
#3205: Feb 20th 2024 at 8:00:40 AM

I hear OpenAI has a new AI-powered video-making tool?

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3206: Feb 20th 2024 at 8:07:40 AM

[up] Yeah, but it's still real janky and they're not letting anyone use it yet. The examples also all show telltale signs of being made in really low resolution and being extremely upscaled.

It's interesting, but I really don't think it's very viable in the long term. (it's hard to imagine an actual commercial use case for a thing that can "make a video that approximately matches what you prompted it to make")

Not Three Laws compliant.
BonsaiForest Since: Jan, 2001
#3207: Feb 20th 2024 at 8:08:32 AM

Yup. The public can't use it, but here's a demonstration:

Tremmor19 reconsidering from bunker in the everglades Since: Dec, 2018 Relationship Status: Too sexy for my shirt
reconsidering
#3208: Feb 22nd 2024 at 1:34:13 PM

It's genuinely impressive and a huge jump in capabilities, though as Zendervai pointed out, the same fundamental limitations are still there: this is cool tech still in search of an actually viable use.

Edited by Tremmor19 on Feb 22nd 2024 at 4:34:41 AM

Chortleous Since: Sep, 2010
#3209: Feb 22nd 2024 at 2:47:33 PM

It has trouble with 3D spaces, especially when it circles and pans around ('Lagos must have a lot of giants oh wait they're normal-sized people on a balcony now I guess'). How much of that is even fixable without incorporating actual 3D?

Moreover, I feel like you're going to run into a pretty hard cap on capabilities when your AI still can't really think or conceptualize like a human brain. Faking it can only go so far.

Edited by Chortleous on Feb 22nd 2024 at 4:49:23 AM

Smeagol17 (4 Score & 7 Years Ago)
#3210: Feb 22nd 2024 at 2:51:31 PM

Eh, the human brain can also be fooled by optical illusions pretty easily. The understanding of the world displayed here is impressive given that it's learned from films alone.

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3211: Feb 22nd 2024 at 4:34:06 PM

[up][up] Especially since OpenAI is not saying how long any of these prompts actually took to generate.

I've actually noticed that the companies working in generative AI used to all talk about how long a prompt would take, and sometimes even how much energy and all that. But they've gone really quiet on that front, even when you'd think they could like, try and rebut the claims about them being environmentally harmful.

Edited by Zendervai on Feb 22nd 2024 at 7:40:54 AM

Not Three Laws compliant.
TairaMai rollin' on dubs from El Paso Tx Since: Jul, 2011 Relationship Status: Mu
rollin' on dubs
#3212: Mar 18th 2024 at 5:47:23 AM

‘Massively concerning’: Under Armour’s ‘AI-powered sports commercial’ sparks controversy

By Lauren Chadwick | euronews.com | Published on 15/03/2024 - 17:19
An “AI-powered” advertisement for the sportswear company Under Armour has become the subject of a fervent debate on the use of artificial intelligence (AI) to repurpose commercials.

Wes Walker, the director behind the one-minute-long spot, called it an “AI-powered sports commercial”. It was posted on social media by Walker, Under Armour, and British boxer Anthony Joshua.

Yet several directors and others in the industry were quick to criticise it as a spot that used previous footage of the British boxer shot two years prior.

The big problem being talked about is that the ad used footage of a real person that was fed to the AI. A commercial director pointed out that he could see footage derived from his own work being used this way in the future, with the "director" claiming it as theirs when in fact the AI based it on his work. Another director says that people in the industry may change their contracts to retain more ownership of their work.

Comments on social media were quick to point out that the people who shot the original footage were not credited.

Edited by TairaMai on Mar 18th 2024 at 6:51:44 AM

All night at the computer, cuz people ain't that great. I keep to myself so I won't be on The First 48
SeptimusHeap from Switzerland (Edited uphill both ways) Relationship Status: Mu
#3213: Mar 18th 2024 at 5:54:36 AM

Seems like we need some kind of "right to one's own image" at some point. Like copyright, but for images that are derived from your face (and body?).

"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." - Richard Feynman
Kayeka Since: Dec, 2009
#3214: Mar 18th 2024 at 6:12:27 AM

But then we get into the whole "what about man-made works that used the footage as a model?" question.

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3215: Mar 18th 2024 at 6:34:07 AM

[up] We have an answer for that already. You have to license it and credit the creators/owners of the original footage. Looking at it, it seems at least part of the impulse behind using the AI here is to pull the "but the AI doesn't save the training data directly, so it's not really pulling from it even if it's really obvious!" bullshit.

Turns out that if your prompt is specific enough, with some of these generators, it'll spit out something very obviously pulled from a real thing. Midjourney's starting to get notorious for this, where if you ask it for "a shot from the middle of [movie]" it'll give you a clear copy of it that's kinda distorted but not enough to lose the obvious connection. Like, if you can identify the precise frame from the movie it's trying to replicate, that's too close.

It's starting to raise some questions about what "it doesn't save the training data directly" actually means, because some of these things can very obviously reconstruct training data, whatever form it's actually being saved in. It's obviously not saving, like, jpegs or whatever, but if it's processing the input and breaking it down into data, it's clearly saving the resulting data (obviously, it wouldn't work if it wasn't). That makes the "well, it doesn't save the training data" argument incredibly weak and kinda moves it into gaslighting territory, because "it's not saving the training data, but it is breaking down the training data and saving the output of that" is borderline a distinction without a difference.

Edited by Zendervai on Mar 18th 2024 at 9:38:49 AM

Not Three Laws compliant.
RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
Figure of Hourai
#3216: Mar 18th 2024 at 7:05:13 AM

Well, 'save the training data' would mean taking the raw pixel data, (presumably) compressing it, and storing it for later retrieval. It's an interesting question whether extreme overfitting counts as saving the data. And it is overfitting if you're reproducing something exactly when you were trying to teach concepts.
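
To make that concrete, here's a toy sketch in Python (and emphatically not how any real image generator works): with far more parameters than training pixels, even a single linear layer can regenerate its one "training image" exactly from a fixed random code. No pixel data is stored as a file anywhere, only weights, yet the output is a pixel-perfect copy, which is exactly the grey area extreme overfitting lands you in.

```python
# Toy memorization demo: the "model" is just a weight matrix W, but because it
# has far more parameters than the image has pixels, it can be made to map a
# fixed random code back onto the training image exactly.
import numpy as np

rng = np.random.default_rng(0)

train_image = rng.random(64)    # stand-in for a flattened 8x8 training image
latent_code = rng.random(256)   # a fixed random "prompt"/code

# One exact solution to W @ latent_code == train_image; the system is
# underdetermined, so the weights have plenty of room to memorize.
W = np.outer(train_image, latent_code) / latent_code.dot(latent_code)

reconstruction = W @ latent_code
print(np.max(np.abs(reconstruction - train_image)))  # ~1e-16: effectively a perfect copy
```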

Avatar Source
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3217: Mar 18th 2024 at 7:27:40 AM

This is one of those things where the way the human brain works is actually directly relevant.

We don't actually store memories directly as discrete...memory blocks, so to speak. As far as we can tell, we store memory as like, a list of "keys" and when a given memory is needed, we take the "keys" and sorta reconstruct the memory around them. It's part of why eyewitness testimony is extremely unreliable, because it's really, really easy to accidentally introduce new "keys" that alter the memory, because the brain isn't really checking against anything and just kinda goes "well, that makes sense, must have missed that before" and just adds it to the list.

The thing, though, is that if I saw, say, Avengers: Infinity War, my memory of it is broken down into a set of these keys, and most of the granular details are missing until they're reconstructed when all of the keys are put back together. We're not bad at this, but errors tend to creep in. (Some stuff, notably, is really hard to process into these keys. Ever wondered why no one seems to remember any details from Thor 2? It's because almost nothing's memorable enough to get a key, so there's very little that can be reconstructed.) But say I then do a painting, claim it's mine, insist it's not taken from anywhere else and that I made it up, and try to sell it, when it's actually a scene from the middle of the movie that's mostly but not completely correct. I could get sued for that, or at least hit with a cease and desist, because I have just committed copyright infringement. It doesn't really matter that my brain doesn't save the movie as a discrete thing; it matters that my output very closely resembles this specific thing.

The AI approach isn't quite the same, but it is effectively creating a bunch of "keys" and then arranging them in different ways to create novel output...but it is fully possible for them to rearrange in such a way as to recreate a bit of the input. This is more common for text AI, but it happens with image AI and, here's the thing: stopping an AI from doing that is extremely difficult. And if an AI is free to access and the infringing image was totally free and the end user never attempts to make money off of it, that might get a pass...but Midjourney isn't free. And some of the subscriptions you can get include permission to commercially use the output. So if Midjourney spits out a copyrighted image by accident and it's not caught, that means it just gave a copyrighted image the company does not own to someone under a license to use it commercially. That's pretty bad from a legal perspective and it seems like, because of what people are asking Midjourney for, it's spitting out more stuff that would fall under this category, and Midjourney is the one that's being sued for copyright infringement.

The only real way to stop it is to keep all of the training data around, check the output against it, and toss anything that's too close...but the output is wonky enough that such a check could easily fail to catch something that infringes copyright in a not-particularly-competent way. And keeping all the training data around to check the output against completely ruins the argument that the training data isn't accessible to the AI after the training period ends.
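
For what it's worth, a check like that doesn't have to live inside the model itself. Here's a minimal sketch of the kind of output filter being described, assuming the third-party Pillow and imagehash packages and a made-up similarity threshold (purely illustrative; nothing here reflects what Midjourney or anyone else actually runs):

```python
# Keep perceptual hashes of the training images around and reject any
# generated image that lands within a small Hamming distance of one of them.
from pathlib import Path

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 8  # made-up cutoff; lower = stricter

def build_training_index(training_dir: str) -> list[imagehash.ImageHash]:
    """Hash every training image once, up front (directory/extension are placeholders)."""
    return [imagehash.phash(Image.open(p)) for p in Path(training_dir).glob("*.png")]

def too_close_to_training(generated: Image.Image,
                          index: list[imagehash.ImageHash]) -> bool:
    """True if the generated image is suspiciously similar to any training image."""
    h = imagehash.phash(generated)
    return any(h - known <= HAMMING_THRESHOLD for known in index)

# Usage sketch:
# index = build_training_index("training_images/")
# if too_close_to_training(candidate_image, index):
#     ...regenerate, or flag for human review...
```

Of course, that runs straight into the problem above: you have to keep (at least hashes of) the training data around, and a distorted-but-recognizable copy can still slip under whatever threshold you pick.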

And yeah, while the AI isn't saving the training data directly, I don't think that "it's processing the training data into elements that it can use and saving those" is a meaningful distinction if it can occasionally spit out near-copies of copyrighted content that would 100% get a human creating it into legal trouble if it got noticed.

Edited by Zendervai on Mar 18th 2024 at 10:33:45 AM

Not Three Laws compliant.
RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
Figure of Hourai
#3218: Mar 18th 2024 at 7:34:26 AM

It's an important distinction for the purposes of passing around the model: if the model itself doesn't contain the training data verbatim, then the model doesn't contain copyrighted material. It could construct it, but that doesn't mean it contains it. Now, if you're providing a service that accesses a model and lets users claim rights over its output, then what you need is some sort of screening for copyrighted output (and to reduce overfitting in your training process).

The ability to reproduce copyrighted material with something doesn't mean the thing contains copyrighted material. That gets you into a weird space where you can argue that if you watch copyrighted material, your memory would technically be infringing on their copyright because it could be used to reproduce it.

Avatar Source
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3219: Mar 18th 2024 at 7:38:06 AM

Yeah, my point is more that saying "the AI doesn't contain copyrighted material" isn't a defense against copyright infringement when it clearly spits out a copy of copyrighted material in a context where it can be used for commercial purposes. That particular defense has been used in a way where it's framed as like, "it can't infringe, because it's not saving any infringing material" but like...well, that argument clearly doesn't work, because that's not how it works for humans.

The truth is that the AI can commit copyright infringement in paid commercial contexts, and most of the companies running these things don't really filter for that and that genuinely is a legal issue that most of the standard defenses won't work on.

They can fix it and deal with it, but it means those lawsuits are not without merit. Especially because the precedent is that in this context, it still counts as copyright infringement even if the end user (who is paying for the commercial license) chooses to never use it. They're still being given something they shouldn't legally get access to in this way under the context they're operating in, and the party providing the infringing content can be held liable.

And, uh, there's only one surefire way to guarantee no copyrighted output. No copyrighted input and being able to prove it, because then it can be argued that it's a legitimate coincidence, which is not available when the situation is "it managed to regenerate material that it was fed in the training data".

Edited by Zendervai on Mar 18th 2024 at 10:55:00 AM

Not Three Laws compliant.
RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
Figure of Hourai
#3220: Mar 18th 2024 at 8:08:03 AM

But the question with that last part is whether you want to guarantee there's no copyrighted output, or accept the liability. Interesting question, really, depending on how good your copyright-filtering tools are...

Which should be 'very good' for an AI company. You're basically inserting an extra layer that's looking for X% similarity with a subset of the training data.

Avatar Source
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3221: Mar 18th 2024 at 8:11:02 AM

Yeah, there's a few different approaches. I just find it interesting because it means a lot of the arguments that companies like Midjourney used are clearly wrong or don't have anything to do with the actual situation.

IMO, I think these companies really, really, really won't want to be in a situation where liability comes up, because they all seem really squirrely about anyone looking too closely into how they work and how they run things.

Edited by Zendervai on Mar 18th 2024 at 11:12:33 AM

Not Three Laws compliant.
archonspeaks Since: Jun, 2013
#3222: Mar 19th 2024 at 11:46:56 AM

The AI approach isn't quite the same, but it is effectively creating a bunch of "keys" and then arranging them in different ways to create novel output...but it is fully possible for them to rearrange in such a way as to recreate a bit of the input. This is more common for text AI, but it happens with image AI and, here's the thing: stopping an AI from doing that is extremely difficult.

This isn’t quite how it works.

Nothing from an AI’s training data is being rearranged or reproduced in any way. The best way to think of it is to imagine the training data being used to draw a map. Say there’s a location that represents dogs: this location would be near locations representing other four-legged animals, locations representing various physical features of a dog, and so on. This process eventually creates a gigantic web showing the connections between everything in the training data. The training data itself is then discarded, leaving only the map. When you ask the AI something, it looks at the map and decides what to say based on the “closeness” of the various words/concepts on its map.
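
If it helps, here’s a toy version of that “map” idea in code. The vectors are made up by hand purely for illustration, where real models learn embeddings with hundreds or thousands of dimensions from the training data itself:

```python
# Hand-made "map": each concept is a point (vector), and the model works off
# how close points are, not off stored copies of the training data.
import numpy as np

embeddings = {
    "dog":        np.array([0.90, 0.80, 0.10]),
    "wolf":       np.array([0.85, 0.75, 0.20]),
    "skyscraper": np.array([0.05, 0.10, 0.95]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Closeness on the map: near 1.0 means closely related, near 0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["dog"], embeddings["wolf"]))        # high: nearby concepts
print(cosine_similarity(embeddings["dog"], embeddings["skyscraper"]))  # low: far apart
```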

For the record, it’s almost impossible for an AI to produce something that is, on its own, an infringement of copyright because of the way this process works. The “perfect recreation of scenes from movies” example from above isn’t really something that can happen, or has happened. Certain outputs can come very close to copyrighted material, but the averaging that happens in the process prevents perfect reproductions.

but Midjourney isn't free

US courts have held that this is an incorrect reading of copyright law. The product Midjourney is offering for sale isn’t the output of the AI; it’s the AI itself. If someone took that output and offered it for further sale when it infringed copyright, then they might be held liable, but there’s no liability for Midjourney in that situation.

To be frank, anyone still pushing the copyright infringement narrative is either uninformed or motivated by extreme ideological bias. All available legal precedent points to these concerns being non-issues, and courts in the US and Europe have dealt several major blows to cases against AI manufacturers. I’m not aware of any legal review that supports the assertion that copyright infringement is in any way a concern for the AI field, and if anything the conclusion reached is typically the opposite.

They should have sent a poet.
Silasw A procrastination in of itself from A handcart to hell (4 Score & 7 Years Ago) Relationship Status: And they all lived happily ever after <3
A procrastination in of itself
#3223: Mar 19th 2024 at 11:59:26 AM

The best way to think of it is to imagine the training data being used to draw a map.

I should point out that maps can be and are subject to copyright, despite geographic information itself not being subject to copyright.

To move beyond analogy nitpicking, a reproduction does not have to be perfect to violate copyright law. Copyright covers adaptation of a work; an AI might not be able to violate copyright the way a scanner would, but it’s perfectly capable of violating it the way I would by making an unlicensed knock-off.

We’ve actually seen AI used for that purpose in the U.K. Someone decided to put on an AI-generated Willy Wonka event, having an AI generate advertising images, a website, and a script for the cast. They did this with no legal authorisation from the relevant copyright holders and didn’t even check the script before sending it out to the actors. A number of parents paid money to attend this event, and the only reason they even got anything at all was because the actors (who, after turning up at the warehouse, realised they were never getting paid) stuck around to try and make something of the mess.

“And the Bunny nails it!” ~ Gabrael “If the UN can get through a day without everyone strangling everyone else so can we.” ~ Cyran
Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#3224: Mar 19th 2024 at 12:45:05 PM

[up][up] Your map analogy is functionally identical to my key analogy. You assumed the "key" meant a specific bit of data that can be directly linked to an existing bit of training data. The "keys" are bits of information that on their own don't contain anything but need to be arranged in specific ways to create output.

A human memory "key" is worthless on its own, it's in the arrangement that the brain can regenerate the rest of the memory without having any of the actual details stored.

And the reason I used the Avengers example wasn't a whim. This is a thing Midjourney actually does. The theory is that it's actually pulling from a trailer or a promotional image, but if you tell it to create a frame from a movie, it will attempt to create a frame from the movie. Note that I never said it was ordered to make a specific frame. Just a frame. The result should logically be something that is kinda reminiscent, but not directly placeable. Instead, when it was tested, it spit out a scene from the trailer. It wasn't quite right, but it was very recognizable as the specific shot.

https://x.com/Rahll/status/1767267822881657285?s=20

An example.

https://static.tvtropes.org/pmwiki/pub/images/img_5726_1.jpeg

This seems weirdly close for something that can’t happen, doesn’t it? It’s got the usual AI jank in it, but if someone tried to sell this as a print, it would get a cease and desist because it’s *really obviously* this particular shot, just done in a kinda shitty manner. And being low quality isn’t a defense against infringement.

https://spectrum.ieee.org/midjourney-copyright

Here’s an article that talks about it. These programs actually are regurgitating training data somehow on a semi-regular basis, and going “well, it doesn’t seem likely” doesn’t change the fact that it’s happening. Even if you can’t find the specific copyrighted data in whatever the program is actually pulling from, the fact that it can clearly regenerate it somehow *is* an issue for these companies.

And going “well, we can’t find it in the program itself” doesn’t mean anything, because the output is going to be what matters.

Edited by Zendervai on Mar 19th 2024 at 3:51:46 PM

Not Three Laws compliant.
DeMarquis Since: Feb, 2010
#3225: Mar 19th 2024 at 12:59:46 PM

But, unless I misunderstand, Midjourney itself isn't legally liable. Whoever tried to sell the print is.

