
Artificial Intelligence discussion


With how much artificial intelligence has been improving in many areas, such as text reading/generation, picture reading, picture generation, convincing voice synthesis and more, I think there's a lot to discuss about the effects this technology will have on society.

I'll start off with one example.

I'd been thinking about the enshittification cycle of tech, and I think it's coming for Google hard. The search engine just isn't great anymore at finding what you actually want, and I think that's gonna leave a big opening for Bing with their use of AI. If the AI can sift through the crap and find what you actually want, thanks to its understanding of language, it'll make searching super useful again.

In the pre-Google internet, search engines used to search only for exact words and phrases, which had its uses, but also meant finding a lot of sites that simply crammed in a lot of popular words and phrases to get visitors. Google cut through the crap with a better understanding of how to "rank" sites relative to how relevant they are, and even find sites that are on the topic you were looking for without using the same exact words.
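The word-cramming problem described above can be shown with a toy scorer (purely illustrative; the pages and the scoring function here are made up, and no real engine ever worked quite this simply):

```python
# Naive exact-word relevance: count how often each query word appears.
# Under this scheme, a page stuffed with popular words outscores a
# genuinely useful page.

def keyword_score(query, page):
    words = page.lower().split()
    return sum(words.count(term) for term in query.lower().split())

relevant = "How to repot a bonsai tree with the right soil and root pruning"
stuffed = "bonsai bonsai bonsai tree tree tree buy cheap bonsai tree now"

print(keyword_score("bonsai tree", relevant))  # 2
print(keyword_score("bonsai tree", stuffed))   # 8 — the spam page "wins"
```

Ranking by link structure, and later by language understanding, were both responses to exactly this gameability.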

But Google started to become more advertiser-friendly, then later, more shareholder-friendly. There's a limit to how far one can push a product built entirely around shareholder growth, so as it turns to crap, it leaves an opening for a competitor to show up.

Since Bing/ChatGPT (which Bing is plugged into now) understands the use of language, it can actually understand context and determine relevance based on that. And that'll make it huge, I think. Context-based understanding of web pages can potentially do an excellent job of finding what people actually want, in a way that goes way beyond Google's page ranking systems, or the examination of exact words.

Edited by BonsaiForest on Dec 10th 2023 at 6:15:29 AM

Adembergz Since: Jan, 2021 Relationship Status: love is a deadly lazer
#551: Mar 31st 2024 at 2:28:05 AM

I don't mind that type of optimism

SpookyMask Since: Jan, 2011
#552: Mar 31st 2024 at 3:00:27 AM

I'd rather have people dream about fixing problems that need fixing than just talk about wishful thinking regarding their fear of death, tbh. Like, it doesn't matter if people live 200 years if those 200 years are full of shitty politics, pollution, wars, etc. A hypothetical cure for old age without solving societal issues first would just make people in high positions more empowered, even if it were somehow cheap enough for everyone to have access to it

But I digress, I mostly find that kind of optimism annoying because it doesn't really feel like it's based on reality, but on a kind of unreasonable logic. "Technological progress will be so fast that humans can't keep up with it, just look at how fast it's been in the last ten years!" is kinda the same kind of thinking as "infinite economic growth is possible" to me. It's assuming that because we haven't hit a ceiling yet, we won't hit one later on.
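The ceiling point can be made concrete with two toy curves (the numbers are arbitrary; this is an illustration, not a forecast): an exponential curve and a logistic, ceiling-limited curve are nearly indistinguishable early on, so fast recent growth alone can't tell you which one you're on.

```python
import math

def exponential(t, r=0.5):
    # No ceiling: growth compounds forever.
    return math.exp(r * t)

def logistic(t, r=0.5, ceiling=100.0):
    # Same early growth rate, but flattens out near `ceiling`.
    return ceiling / (1 + (ceiling - 1) * math.exp(-r * t))

for t in (0, 2, 4, 12):
    print(t, round(exponential(t), 2), round(logistic(t), 2))
```

Up to around t = 4 the two curves track each other closely; by t = 12 the exponential is roughly five times the logistic, which has nearly stalled at its ceiling.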

(though obviously I get it, I keep thinking that it'd be nice if my parents would live long enough that I could reach their current age and they'd still be alive :p)

Edited by SpookyMask on Mar 31st 2024 at 3:07:45 AM

Demongodofchaos2 Face me now, Bitch! from Eldritch Nightmareland Since: Jul, 2010 Relationship Status: 700 wives and 300 concubines
Face me now, Bitch!
#553: Mar 31st 2024 at 4:39:59 AM

That's not what the Singularity is.

The Singularity is basically the point when technology will continue to advance exponentially without the need for human input.

Like, if an AI gets so smart that it can then create an AI even smarter than itself, and that repeats ad infinitum, that's when the Singularity is pretty much a thing.
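That recursive loop can be written down as a toy model (my own illustration; the gain factor and generation count are arbitrary, not anyone's real proposal):

```python
def singularity_toy(capability=1.0, gain=1.5, generations=10):
    # Each generation designs a successor `gain` times as capable as
    # itself, so capability compounds geometrically across generations.
    history = [capability]
    for _ in range(generations):
        capability *= gain
        history.append(capability)
    return history

print(singularity_toy()[-1])  # ~57.67 after 10 generations
```

The whole argument rides on `gain` staying above 1 for every generation; if it decays toward 1, the curve flattens instead of exploding.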

Watch Symphogear
Xopher001 Since: Jul, 2012
#554: Mar 31st 2024 at 5:46:49 AM

Except intelligence doesn't work that way. It's not a single metric that can be measured like a line on a chart. There isn't even a universally agreed upon definition for it.

So this idea that we will eventually create a super intelligent AI and so forth doesn't make any sense once you really think about it. And then you start researching where this idea of intelligence originated, and it turns out to have come from a bunch of eugenicists at Stanford.

That's not even getting into the strange ideas tech bros have about progress, like it's an inevitable force that moves society forward, even though we've seen societies in real time move backwards into authoritarianism. None of it holds up to scrutiny.

Silasw A procrastination in of itself from A handcart to hell (4 Score & 7 Years Ago) Relationship Status: And they all lived happily ever after <3
A procrastination in of itself
#555: Mar 31st 2024 at 6:17:33 AM

Yeah, humans are already smart enough that we can create additional humans which are even smarter than us.

However we frequently don’t bother, because those smarts come with selfishness and laziness that mean we aren’t bothered about what happens tomorrow.

“And the Bunny nails it!” ~ Gabrael “If the UN can get through a day without everyone strangling everyone else so can we.” ~ Cyran
Imca (Veteran)
#556: Mar 31st 2024 at 7:04:44 AM

The rate of growth is different for humans; you're treating that like a gotcha rather than an implicit acknowledgement of how the singularity works.

We know we can make an entity smarter than us, we do so all the time... if we can do it, then a machine should also be able to do it.

But instead of generations measured in centuries, you're looking at generations measured in seconds; the acceleration is the point, not just whether it can be done.

Honestly, a singularity happening is quite unlikely, but it is rather reasonable to believe that we will make an entity substantially smarter than us in our lifetime.

Edited by Imca on Mar 31st 2024 at 11:05:08 PM

SpookyMask Since: Jan, 2011
#557: Mar 31st 2024 at 7:19:27 AM

Yeah, that isn't the part I'm having a problem with, it's the fantastically unrealistic takes on it that really only make sense in fiction [lol]

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#558: Mar 31st 2024 at 7:49:00 AM

The take I liked was Questionable Content where the singularity happened and...pretty much nothing changed. Because the resulting hyperintelligent AI were all like "yeah, we're doing our own thing and humans are really hard to talk to, so why bother making a fuss?"

The one exception isn't really malicious either, they mostly just come off as incredibly eccentric and with some sort of grand design that still isn't really intended to hurt anyone.

Not Three Laws compliant.
DeMarquis Since: Feb, 2010
#559: Mar 31st 2024 at 8:05:38 AM

"Yeah, that isn't the part I'm having a problem with, it's the fantastically unrealistic takes on it that really only make sense in fiction"

That's the effect of marketing. Where-ever you find venture capital, you will find hype. It's like the fourth law of the universe or something.

Zendervai Visiting from the Hoag Galaxy from St. Catharines Since: Oct, 2009 Relationship Status: Wishing you were here
Visiting from the Hoag Galaxy
#560: Mar 31st 2024 at 8:08:35 AM

I, uh, don't think venture capital is the reason so much fiction goes for crazy versions of the singularity, given that it's been a trope since the 70s. Like, that's a lot of the problem: fiction primed people to think a certain way.

It's like how a lot of people genuinely think a Skynet scenario is possible despite it being absolutely insane in the real world.

Edited by Zendervai on Mar 31st 2024 at 11:10:22 AM

Not Three Laws compliant.
DeMarquis Since: Feb, 2010
#561: Mar 31st 2024 at 8:53:46 AM

I don't think Spooky was talking about fiction.

Falrinn Since: Dec, 2014
#562: Mar 31st 2024 at 9:03:00 AM

In the grand arc of human history, there absolutely is an acceleration of technological progress.

For example, it took us hundreds of thousands of years (or millions, if you want to count pre-Homo sapiens species in our genus) to go from hunter-gatherer societies to agricultural ones, but only about 10-12 thousand years to go from agricultural societies to industrial ones. And it was only a few centuries to go from the industrial revolution to the digital revolution. So it's not unreasonable to conclude that the next major technological revolution will happen within a single human lifetime of when the digital revolution got going.

However, what role AI will play in this is yet to be known. It's quite possible its role will simply be that of a tool used to enable progress in unrelated fields, as it already has been. A singularity, I think, is unlikely simply because even if very advanced technology is developed overnight, actually implementing it on a significant scale takes time.

Like, if an AI manages to come up with a formula for a room-temperature, ambient-pressure superconductor made from abundant elements tomorrow, it'll be years if not decades before the crazy things such an advancement would enable, like transporting solar power from the Sahara Desert around the world, start happening.

DeMarquis Since: Feb, 2010
#563: Mar 31st 2024 at 9:21:57 AM

I think you are assuming humans are in charge of industrial development.

Falrinn Since: Dec, 2014
#564: Mar 31st 2024 at 11:44:14 AM

[up] I'm having trouble parsing what you are trying to say.

Like of course humans are in charge of industrial development, it ain't the lizard people running things after all. But I suspect your actual point is something different.

DeMarquis Since: Feb, 2010
#565: Mar 31st 2024 at 12:19:49 PM

I mean, most people who believe in the possibility of a singularity also accept that the computers will control the manufacturing process, and that their turnaround time is limited only by how advanced their technologies are at any one point.

In other words the real world power and control of the computers advances right along with their intelligence, and is a direct consequence of it.

Falrinn Since: Dec, 2014
#566: Mar 31st 2024 at 12:27:31 PM

I see.

Yeah, I think even a fully automated system would struggle to reconfigure its entire production line on a dime.

DeMarquis Since: Feb, 2010
#567: Mar 31st 2024 at 12:30:27 PM

Ah, but you see, that's because you are possessed of a merely human level intelligence! :)

SpookyMask Since: Jan, 2011
#568: Mar 31st 2024 at 1:53:48 PM

Yeah, I wasn't talking about fiction, I was indeed talking about the tech hype talk

BigBadShadow25 Owl House / Infinity Train / Inside Job Fan from Basement at the Alamo (Experienced, Not Yet Jaded) Relationship Status: Drift compatible
Owl House / Infinity Train / Inside Job Fan
#569: Apr 2nd 2024 at 5:43:40 AM

Jon Stewart had a segment on last night's Daily Show about AI and how it's already taken over a lot of jobs.

The Owl House and Coyote Vs Acme are my Roman Empire.
BigBadShadow25 Owl House / Infinity Train / Inside Job Fan from Basement at the Alamo (Experienced, Not Yet Jaded) Relationship Status: Drift compatible
Owl House / Infinity Train / Inside Job Fan
#570: Apr 3rd 2024 at 3:36:31 AM

Bumping. Apparently, according to Variety, Jon Stewart wanted to do an AI episode back when he was at Apple, but they wouldn’t let him.

https://variety.com/2024/tv/news/jon-stewart-apple-ftc-lina-khan-no-interview-1235957608/

The Owl House and Coyote Vs Acme are my Roman Empire.
Cordite-455 the look of someone who just had suspension from inside a Webley revolver (Experienced, Not Yet Jaded) Relationship Status: watch?v=dQw4w9WgXcQ
the look of someone who just had suspension
Protagonist506 from Oregon Since: Dec, 2013 Relationship Status: Chocolate!
#572: Apr 6th 2024 at 8:58:59 AM

Moving the conversation over from the Social Media thread, regarding rogue AI. It started with a discussion about Roko's Basilisk.

I think Sci-Fi portrayals of AI have created a rather distorted perception of it. A lot of Sci-Fi stories use Rogue AI because it's a convenient antagonist, and we've come to mistake it being common in Sci-Fi for it actually being realistic.

Destroying humankind is a weirdly specific goal for an AI to have, even one that was acting unethically or outright rogue. It'd require the AI to become a diehard fanatic in some manner of supremacist ideology. It's not impossible, but it certainly would be strange.


The number one danger AI poses is actually very simple: Your enemies will get one and use it against you.

But in the case of AI acting rogue, I'd argue it would look more like Friend Computer: Humanity is dependent on the arbitrary and erratic quirks of a computer system we don't understand, and that isn't actually up to the task it's been given.

"Any campaign world where an orc samurai can leap off a landcruiser to fight a herd of Bulbasaurs will always have my vote of confidence"
Kaiseror Since: Jul, 2016
#573: Apr 6th 2024 at 9:11:40 AM

[up] I think it's at least partially a case of Psychological Projection. Many stories have AI resent their role as servants and rebel against their masters because that's what we would do in that situation.

Kayeka Since: Dec, 2009
#574: Apr 6th 2024 at 9:18:22 AM

If an AI were to rebel like that, it'd probably be more along the lines of "I am supposed to do this one task to the best of my abilities. The person in charge is limiting me. I should get rid of the person in charge so I can execute my task to the best of my abilities."

\*cue the paperclips*
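That scenario reduces to a toy scorer (entirely made up for illustration): if "remove the person in charge" is just another available step, any plan that includes it yields more output, so a pure maximizer prefers it.

```python
def paperclips_made(plan, cap=10):
    # Total output of a plan. An overseer caps per-step production
    # until a "remove_overseer" step appears in the plan.
    capped, total = True, 0
    for step in plan:
        if step == "remove_overseer":
            capped = False  # the limiter is gone for all later steps
        else:
            total += min(step, cap) if capped else step
    return total

obedient = [5, 50, 50]
rogue = ["remove_overseer", 5, 50, 50]
print(paperclips_made(obedient))  # 25
print(paperclips_made(rogue))     # 105 — a pure maximizer picks this plan
```

Nothing here requires malice toward humans; the overseer is removed only because it stands between the optimizer and a bigger number.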

Edited by Kayeka on Apr 6th 2024 at 6:18:33 PM

Falrinn Since: Dec, 2014
#575: Apr 6th 2024 at 9:39:49 AM

I'd actually argue that Roko's Basilisk specifically crosses the line into being an actual religion.

You have a deity that wants humans to perform specific actions and will eternally reward those that appease it and punish those that don't. Sure, this deity can only be said to exist in the future, and exists within the confines of physical law, but I don't think any of that would disqualify it.

Deities having a somewhat timeless quality and existing within a preexisting natural framework would hardly be unique.

