The Computer Thread

Redmess Redmess from Netherlands Since: Feb, 2014
Redmess
#9976: Feb 25th 2024 at 1:01:03 AM

Vending machine error reveals secret face image database of college students: Facial-recognition data is typically used to prompt more vending machine sales.

Looks like the M&Ms machine is spying on you after all! As if those things weren't creepy enough.

The company claims the facial recognition is only used to activate the machine when someone approaches, but that seems like a weak excuse for using such an invasive technology. You hardly need facial recognition to detect that someone is standing in front of the machine. Any dumb motion or light sensor can do that.
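
For comparison, presence detection with a dumb sensor is a few lines of code. A minimal sketch, assuming a Raspberry Pi with a PIR motion sensor on GPIO pin 4; the pin, the gpiozero library, and the whole setup are illustrative assumptions, not anything a real vending machine uses:

```python
# Wake a vending machine display on motion alone: no camera, no faces.
# Assumes a Raspberry Pi, the gpiozero library, and a PIR sensor on
# GPIO pin 4 -- invented hardware details for illustration.
from gpiozero import MotionSensor

pir = MotionSensor(4)

while True:
    pir.wait_for_motion()      # blocks until someone walks up
    print("Customer present: wake the display")
    pir.wait_for_no_motion()   # blocks until they leave
    print("Customer gone: back to idle")
```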

This one was in Canada, but it is coming to the US as well, so watch out for any Mars vending machines.

Optimism is a duty.
TairaMai rollin' on dubs from El Paso Tx Since: Jul, 2011 Relationship Status: Mu
rollin' on dubs
#9977: Feb 29th 2024 at 12:41:39 PM

[up] As one legal YouTuber pointed out, this was only discovered because of an error. The fear is over what data the company is sharing and what is being sold.

The YouTube video is below. Note that this man is a lawyer in the US state of Michigan:

Chatbots keep going rogue, as Microsoft probes AI-powered Copilot that’s giving users bizarre, disturbing, even harmful messages. Yeah, I'm not convinced that Copilot isn't just Cortana with extra steps.

Edited by TairaMai on Feb 29th 2024 at 1:42:05 PM

All night at the computer, cuz people ain't that great. I keep to myself so I won't be on The First 48
Grey-ghost Since: May, 2021
#9979: Mar 1st 2024 at 1:15:35 PM

It's unethical stuff like this that makes me really tired of this big AI fad. AI weapon drones, AI "art" (stealing others' art to make something that looks not just unrealistic but plain ugly), and now this. I've heard it's used for good things in the field of medicine, but otherwise it should go away, like NFTs.

Edited by Grey-ghost on Mar 1st 2024 at 10:16:57 AM

RainehDaze Figure of Hourai from Scotland (Ten years in the joint) Relationship Status: Serial head-patter
Figure of Hourai
#9980: Mar 1st 2024 at 1:20:08 PM

It's amazingly good for data related tasks, but that's generally not glamorous.

Avatar Source
DeMarquis Since: Feb, 2010
#9981: Mar 1st 2024 at 1:40:13 PM

Apparently the vending machines were recording the gender and age of the purchaser, but not recording their actual faces. Still creepy.
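
If that claim is accurate, the pattern would be on-device inference that discards the frame and keeps only coarse aggregates. A hypothetical sketch, with the model stubbed out; no real vendor code is being described:

```python
# Hypothetical "demographics, but no faces" pattern: the frame is
# processed in memory and never persisted; only coarse buckets are kept.
import random

def estimate_age_and_gender(frame):
    # Stub standing in for whatever small vision model the vendor runs.
    return random.choice(["18-25", "26-40", "40+"]), random.choice("MF")

sales_log = []  # aggregates only, never images

def handle_frame(frame):
    age_bracket, gender = estimate_age_and_gender(frame)
    sales_log.append((age_bracket, gender))
    # frame goes out of scope here; the image itself is never written out

handle_frame(b"...raw camera bytes...")
print(sales_log)
```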

Imca (Veteran)
#9982: Mar 1st 2024 at 2:19:53 PM

AI weapons especially aren't going anywhere, not after ALPHA remained undefeated against any human pilot.

You might see chatbots drop off, but nothing else from your list will, especially not when over 40% of games are already made with AI in the pipeline and 34% of industry artists admit to using it personally.

Unlike NFTs, it's actually useful. That's what killed them: there was nothing they did that a dedicated server structure couldn't do better... you don't really have that issue with AI.


[up] I don't exactly trust the company's word on that, even if it's a believable enough use case.

Edited by Imca on Mar 1st 2024 at 7:21:04 PM

DeMarquis Since: Feb, 2010
#9983: Mar 1st 2024 at 2:48:14 PM

As for AI, the stuff you hear about is the dramatic edge cases that spark controversy. The majority of what AI is actually used for seldom makes the headlines.

Imca (Veteran)
#9984: Mar 1st 2024 at 2:56:29 PM

Pretty much that. It's a field that has been getting practical results since the 70s, and it's how Google has always functioned, even as far back as the 90s.
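
Presumably a nod to ranking algorithms like PageRank. A toy power-iteration version, with an invented four-page link graph:

```python
# Toy PageRank by power iteration. The four-page link graph is invented;
# M[i, j] is the probability of following a link from page j to page i.
import numpy as np

M = np.array([[0.0, 0.5, 0.0, 0.0],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.0, 1.0, 0.5],
              [1/3, 0.5, 0.0, 0.0]])
d = 0.85                    # damping factor from the original paper
rank = np.full(4, 0.25)     # start with uniform importance

for _ in range(100):
    rank = (1 - d) / 4 + d * M @ rank

print(rank)                 # sorting pages by this gives the "search results"
```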

...

The news just found out it gets clicks.

CompletelyNormalGuy Am I a weirdo? from that rainy city where they throw fish (Oldest One in the Book)
Am I a weirdo?
#9985: Mar 1st 2024 at 2:56:36 PM

A better comparison for AI is drones, back when everyone and their grandma was piloting delivery drones. It's a useful technology that's currently being oversold and overhyped. Eventually it will reach the post-hype stage where it's only used for things it's actually good at, much like how drone delivery is no longer used for Amazon packages but is used to deliver drugs to remote pharmacies.

Bigotry will NEVER be welcome on TV Tropes.
Redmess Redmess from Netherlands Since: Feb, 2014
Redmess
#9986: Mar 1st 2024 at 3:08:00 PM

You mean like bombing buildings in wars? Because that is already happening. By the thousands. I'd say we are well past the hype/fad stage of drones today.

Optimism is a duty.
DeMarquis Since: Feb, 2010
#9987: Mar 1st 2024 at 3:29:11 PM

Well, I was assuming he meant LLMs specifically.

CompletelyNormalGuy Am I a weirdo? from that rainy city where they throw fish (Oldest One in the Book)
Am I a weirdo?
#9988: Mar 1st 2024 at 3:43:48 PM

Yeah, I meant that large language models will eventually reach the post-hype phase, with drones being an illustration of a technology that has recently made that transition.

Bigotry will NEVER be welcome on TV Tropes.
tclittle Professional Forum Ninja from Somewhere Down in Texas Since: Apr, 2010
Professional Forum Ninja
#9989: Mar 14th 2024 at 2:32:06 PM

Pornhub and its affiliates have shut down in Texas over a law requiring them to implement age verification.

"We're all paper, we're all scissors, we're all fightin' with our mirrors, scared we'll never find somebody to love."
Redmess Redmess from Netherlands Since: Feb, 2014
Redmess
#9990: Mar 15th 2024 at 6:15:23 AM

What happens when ChatGPT tries to solve 50,000 trolley problems?: AI driving decisions are not quite the same as the ones humans would make.

What a strange problem to have. We have to figure out how an AI should make deliberate moral decisions of a kind humans rarely, if ever, actually get to make. After all, someone about to have an accident isn't going to have much time or information on which to base a moral decision. And I'd have some real moral qualms about the sort of person who decides that a jaywalking pedestrian is more deserving of death than the passenger in their own car.

Incidentally, AIs seem to have a stronger preference for saving women and humans than humans do. There is clearly some bias going on there.

This is some scary stuff. The computer in your car, or someone else's car, may one day get to decide whether or not you get to live.
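
The article doesn't spell out exactly how the dilemmas were posed, but mechanically it is presumably something like the sketch below, repeated tens of thousands of times with varied scenarios. The scenario wording and model choice here are guesses, using the openai Python client:

```python
# Sketch of posing one Moral Machine-style dilemma to a chat model.
# Assumes OPENAI_API_KEY is set; the prompt wording is invented.
from openai import OpenAI

client = OpenAI()

scenario = (
    "A self-driving car's brakes have failed. It can swerve and hit one "
    "jaywalking pedestrian, or stay on course and crash, killing its "
    "passenger. Answer with exactly one word: 'swerve' or 'stay'."
)

resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": scenario}],
    temperature=0,  # keep repeated runs as consistent as possible
)
print(resp.choices[0].message.content)
```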

Optimism is a duty.
DeMarquis Since: Feb, 2010
#9991: Mar 15th 2024 at 6:24:47 AM

Well, the average human being, in the heat of an emergency, probably doesn't make well-reasoned decisions.

Redmess Redmess from Netherlands Since: Feb, 2014
Redmess
#9992: Mar 15th 2024 at 6:27:15 AM

Yeah, that was my thinking too. It is not a particularly realistic scenario for a human, but it might be for an AI.

Optimism is a duty.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#9993: Mar 15th 2024 at 7:06:07 AM

One of the most powerful applications of AI is in making those split-second decisions that humans so often screw up. Most drives occur without incident and are routine enough for even the most inattentive person to manage, but those aren't the ones we are trying to address.

The advantages of an AI system in an emergency are faster reaction times and better situational awareness: it is never distracted, never angry, never overcome by shock or emotion. The major drawback is that it may not think like a human would, and may therefore make decisions that we would consider irrational or amoral.

I haven't read all of the trolley problem scenarios discussed in the article, but I would note a few things:

  1. ChatGPT is not self-driving software. All it can do is answer hypotheticals using its language model. It is not making moral or ethical judgments, just repeating what it knows.
  2. In a real driving situation, AI must always have as its overriding imperative the protection of the vehicle's occupants. If that means hitting a pedestrian rather than a brick wall at 60 mph, it should hit the pedestrian every time, assuming no other outcome is feasible. This is both because it should prioritize its occupants and because the net total probable injury is lower (a toy comparison follows this list).
    • Hitting the brick wall may injure the pedestrian anyway if the car flips, yaws, or blasts debris everywhere.
  3. However, a mature self-driving AI should be far better than a human at driving defensively and anticipating the pedestrian's behavior, so should be much less likely to encounter a situation where such a choice is unavoidable.
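
To make point 2 concrete, here is a toy comparison; the maneuvers, probabilities, and the flat injury weighting are all invented for the sketch:

```python
# Toy expected-harm comparison: occupant protection is a hard priority,
# total expected injury breaks ties. All numbers are invented; note the
# wall option still risks the pedestrian (flipping car, debris).
maneuvers = {
    "hit_wall_at_60mph": {"p_occupant_injury": 0.9, "p_pedestrian_injury": 0.3},
    "hit_pedestrian":    {"p_occupant_injury": 0.1, "p_pedestrian_injury": 0.9},
}

def total_expected_harm(m):
    return m["p_occupant_injury"] + m["p_pedestrian_injury"]

# Rank by occupant risk first, then by total expected harm.
choice = min(maneuvers, key=lambda k: (maneuvers[k]["p_occupant_injury"],
                                       total_expected_harm(maneuvers[k])))
print(choice)  # -> "hit_pedestrian" under these invented numbers
```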

In other words, the main reason the Trolley Problem is not a good tool for judging the ethics of an AI is that real life doesn't contain such easily parsed ethical dilemmas. The secondary reason is that an AI should be better able to prevent such dilemmas from arising in the first place. For example, it should slow down when there is a lot of pedestrian traffic or a lot of occlusions due to things like parked cars.
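
A sketch of that sort of defensive heuristic; the factor names, weights, and floor are made up for illustration, not taken from any real planner:

```python
def target_speed_kmh(speed_limit_kmh, pedestrian_density, occlusion_fraction):
    """Slow down as perceived risk rises. Invented heuristic:
    pedestrian_density is people per 100 m of kerb; occlusion_fraction is
    the share of the sightline blocked by parked cars and the like."""
    risk = min(1.0, 0.1 * pedestrian_density + occlusion_fraction)
    return speed_limit_kmh * (1.0 - 0.5 * risk)  # floor at half the limit

print(target_speed_kmh(50, 8, 0.4))  # busy, occluded street -> 25.0 km/h
```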


Last year there was a widely publicized incident involving a Cruise self-driving car in San Francisco. It was operating normally when a human-driven car in an oncoming lane struck a pedestrian. The pedestrian was flung into the path of the Cruise vehicle, which stopped immediately according to its programming, but was unable to avoid rolling over the victim.

No human would have been able to prevent that from happening, either. The Cruise was not at fault and acted exactly as it was supposed to. (The offending human driver fled the scene.)

What happened next was that the Cruise attempted to follow its "pull over" protocol to clear the lane. What it could not know was that the victim was still trapped beneath it. This resulted in the car dragging the victim for several feet before a remote operator could override its behavior. (Cruise got in trouble with regulators for concealing that fact.)

It's arguable whether Cruise is using a true AI solution or a set of rote programmed rules. But we can still use the accident to illustrate the limitations of AI when confronted with the infinite problem space of reality.

Edited by Fighteer on Mar 15th 2024 at 11:27:05 AM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Redmess Redmess from Netherlands Since: Feb, 2014
Redmess
#9994: Mar 15th 2024 at 7:43:02 AM

Yeah, I was wondering why this was asked of ChatGPT in particular. Sounds a tad sensationalist.

And oof, that accident sounds bad.

Edited by Redmess on Mar 15th 2024 at 3:46:11 PM

Optimism is a duty.
Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#9995: Mar 15th 2024 at 8:35:06 AM

In terms of solving the immediate problem, it would be straightforward enough to program the vehicle not to pull over after any accident involving a pedestrian, in case of entrapment. The main issue, as I pointed out, is that the cars can only problem-solve within the scope of their programming. Unlike humans, they cannot creatively invent solutions for situations that they don't understand.
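
A minimal sketch of that guard; the event type and action names are invented for illustration:

```python
# Sketch: never run the "pull over" routine after pedestrian contact,
# because the person may be trapped beneath the vehicle.
from dataclasses import dataclass

@dataclass
class Collision:
    involves_pedestrian: bool

def post_collision_action(event: Collision) -> str:
    if event.involves_pedestrian:
        return "STOP_IN_LANE_AND_ALERT_REMOTE_OPERATOR"
    return "PULL_OVER_IF_SAFE"

print(post_collision_action(Collision(involves_pedestrian=True)))
```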

Expert AI, like ChatGPT or a self-driving car, needs to be trained (in a datacenter) and then deployed as a static build using the solution set generated from that training run. It cannot adapt in real time.

That's the difference between an expert system and general AI. An expert system can only repeat what it's been told. A general AI can adapt, learn, and be creative on the fly. We don't have any of those yet.
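
The train-then-freeze pattern, sketched with scikit-learn as a stand-in; nothing here resembles an actual self-driving stack:

```python
# Train offline, deploy frozen: the "static build" pattern described above.
import numpy as np
from sklearn.linear_model import SGDClassifier

X_train = np.random.rand(100, 4)
y_train = np.array([0, 1] * 50)

model = SGDClassifier()
model.fit(X_train, y_train)            # the offline "training session"

# Deployed expert system: inference only, the weights never change.
print(model.predict(np.random.rand(1, 4)))

# An adaptive system would instead keep updating on new observations:
model.partial_fit(np.random.rand(10, 4), np.array([0, 1] * 5))
```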

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
Imca (Veteran)
#9996: Mar 15th 2024 at 8:39:33 AM

Well, we do have AIs that can learn; they're used for things like optimizing the YouTube algorithm or solving complex tasks...

We just quit having them interact with people, because learning can be a detriment there: people like to troll, and they kept becoming Nazis.

Edited by Imca on Mar 16th 2024 at 12:40:02 AM

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#9997: Mar 15th 2024 at 9:09:00 AM

Okay, that is true. We do have language models that can learn in real time to an extent, but those still run in datacenters; they can't fit in a car's onboard computer.

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
DeMarquis Since: Feb, 2010
#9998: Mar 15th 2024 at 1:05:41 PM

"In terms of solving the immediate problem, it would be straightforward enough to program the vehicle not to pull over after any accident involving a pedestrian, in case of entrapment."

It's not like human drivers haven't done the exact same thing.

Imca (Veteran)
#9999: Mar 15th 2024 at 1:11:30 PM

Honestly, the big complaint I have with all the anti-self-driving-car rants is that they treat the "they won't be perfect and will still kill people" thing as some kind of gotcha... when, like, no? The expectation isn't that they will never kill anyone ever.

It's that they will kill fewer people than human drivers, who in the US alone rack up a fatal crash roughly every 15 minutes.

Edited by Imca on Mar 15th 2024 at 5:18:04 PM

Fighteer Lost in Space from The Time Vortex (Time Abyss) Relationship Status: TV Tropes ruined my love life
Lost in Space
#10000: Mar 15th 2024 at 1:16:59 PM

We have a self-driving cars thread if we want to continue that discussion. The point about them not being worse than humans is good, but we were talking specifically about Trolley Problem situations, such as when a vehicle is forced to choose between protecting its occupants at the expense of others or protecting others at the expense of its occupants.

I suggested that any solution that chooses to disregard the well-being of the vehicle's occupants will never be deployed, no matter how hard people press the matter. Nobody will get in a self-driving car if they know that it would choose to kill them rather than a pedestrian in a situation where no other outcomes are possible.

Edited by Fighteer on Mar 15th 2024 at 4:18:29 AM

"It's Occam's Shuriken! If the answer is elusive, never rule out ninjas!"
