A.I. secretaries: Pros and cons


MCE Grin and tonic from Elsewhere Since: Jan, 2001
#101: Aug 2nd 2012 at 4:18:11 PM

I think we've strayed from clever programming with voice recognition and an e-calendar to a genius-level, world-threatening program that could become Skynet.

edited 2nd Aug '12 4:21:40 PM by MCE

My latest Trope page: Shapeshifting Failure
breadloaf Since: Oct, 2010
#102: Aug 2nd 2012 at 5:35:59 PM

That's the American public's perception of what AI means so I suppose that's not a surprise.

All I want for myself is a smart shopper AI. Do my groceries and then do shopping requests.

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#103: Aug 3rd 2012 at 12:16:20 AM

"EDIT: Also, is some of your concern based on super-fast trading algorithms? I personally think stocks are traded too quickly in the first place, so that is a partially related topic. I was just suggesting an algorithm to "suggest" changes to a portfolio."
I think it would be a problem in all cases, but super-fast trading would be especially sensitive, yes.

This raises another difficulty: if I am not mistaken, making money by predicting the market is a zero-sum game. If I buy low and sell high and make money, there must have been other people who bought high and sold low and lost money (I am simplifying more than a little here, and I am no financial expert, so please correct me if I'm missing something).

Now, if I invest in a company because I want to get part of its profits as dividends, that's a different issue; but if we only consider the zero-sum part (which, if I am not mistaken, is the one that makes for the bulk of investment gains), making everybody better at predicting the markets will not make anybody get richer.
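A toy numerical sketch of the zero-sum point, under the same simplifications (closed market, no dividends); the traders and prices here are invented for illustration:

```python
# Toy illustration of the zero-sum claim: in a closed market with no
# dividends, trading profits and losses cancel out in aggregate.

SHARES = 10

# Trader A buys 10 shares at 5, then sells them to Trader B at 8.
a_pnl = -5 * SHARES + 8 * SHARES   # A paid 50, received 80 -> +30

# B paid 80; if the price falls back to 5, B's holding is worth 50.
b_pnl = -8 * SHARES + 5 * SHARES   # B's loss mirrors A's gain -> -30

print(a_pnl + b_pnl)  # -> 0: one side's gain is the other side's loss
```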

edited 3rd Aug '12 12:16:50 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
breadloaf Since: Oct, 2010
#104: Aug 3rd 2012 at 1:51:40 AM

You're thinking of betting on stocks, which is not the same as investing in them. Investing is not zero-sum.

Deboss I see the Awesomeness. from Awesomeville Texas Since: Aug, 2009
#105: Aug 3rd 2012 at 10:33:17 AM

"I think we've strayed from clever programming with voice recognition and an e-calendar to a genius-level, world-threatening program that could become Skynet."

I'm cool with either.

Fight smart, not fair.
DeMarquis Since: Feb, 2010
#106: Aug 3rd 2012 at 10:53:25 AM

"I think we've strayed from clever programming with voice recognition and an e-calendar to a genius-level, world-threatening program that could become Skynet."

No, not necessarily. Maybe I'm exaggerating to some degree, but I can't help myself: I'm just vastly amused that here we have this thing the guys at the Singularity Institute wet themselves worrying about, and someone is suggesting using it as a personal secretary. The idea is really entertaining.

Obviously you don't need an AI to do what you're suggesting. But what fun is that? Take the idea seriously for a moment: what could an actual AI do for you that an expert system couldn't?

breadloaf Since: Oct, 2010
#107: Aug 3rd 2012 at 11:04:10 AM

It could expand the range of options for assisting you in your daily life beyond its original situation, and adapt to each person individually in a way an ES could not. An ES is what it is upon purchase, and performs only somewhat better after certain adaptations.

However, is that really worth all the extra trouble? Probably not. It's more likely a matter of government tools where AI grows, or corporate tools and so on: market analysis, key overnight interest rate calculation, economic policy analysis, labour skill pool management, etc.

DeMarquis Since: Feb, 2010
#108: Aug 3rd 2012 at 11:56:56 AM

"An ES is what it is upon purchase, and performs somewhat better after certain adaptations."

Hm. I'm not sure it actually works like that. My understanding is that an AI and an ES are fundamentally different things, such that one isn't going to lead to the other. What you're describing is an AI, not an ES. It sounds like the unit would start with a simple set of superordinate goals:

  • Create a baseline of owner behavior and outcomes
  • Identify the owner's rank order of outcomes to improve, and the owner's criteria defining "improvement"
  • Monitor current owner behavior and provide forecasts based on current choices, with recommendations regarding alternate strategies
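A rough sketch of what that goal loop might look like; every function, threshold, and data item below is a hypothetical placeholder, not a real design:

```python
# Hypothetical sketch of the baseline -> rank -> recommend loop
# described above. All names, data, and the 0.5 "improvement"
# criterion are invented for illustration.

def summarize(log):
    """Baseline: average outcome score per tracked activity."""
    return {k: sum(v) / len(v) for k, v in log.items()}

def recommend(baseline, targets):
    """Suggest the highest-priority activity that is below par."""
    for goal in targets:                 # targets in the owner's rank order
        if baseline.get(goal, 0) < 0.5:  # assumed criterion for "needs work"
            return f"focus on {goal}"
    return "no change suggested"

owner_log = {"sleep": [0.4, 0.3], "exercise": [0.9]}
priorities = ["sleep", "exercise"]       # owner's rank order of outcomes
print(recommend(summarize(owner_log), priorities))  # -> focus on sleep
```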

breadloaf Since: Oct, 2010
#109: Aug 3rd 2012 at 1:20:44 PM

AI isn't magical; as far as current understanding goes, it grows out of a sufficiently good ES, because much of ES design is based on what occurs in nature, just performing worse. Unless you're a person who believes AI needs a "soul", it's nothing more than an ES that starts to exceed expectations dramatically.

An ES, depending on the design, works basically like this:

  • Develop the "feature set", the characteristics that the AI will look at to make its decisions
  • Typically have a weighting system based on the inputs, or some other algorithm to use those characteristics with a variable setting that can be fine-tuned over time
  • Produce an output that can be interpreted as a useful answer for the user
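The three steps above might be sketched roughly like this; the feature names, weights, and threshold are invented for illustration:

```python
# Minimal sketch of the weighted-feature ES described above:
# a fixed feature set, tunable weights, and an interpretable output.

FEATURES = ["urgency", "sender_is_boss", "mentions_deadline"]

# Weights that could be fine-tuned over time (values are made up).
weights = {"urgency": 0.5, "sender_is_boss": 0.3, "mentions_deadline": 0.2}

def score(example):
    """Weighted sum over the feature set."""
    return sum(weights[f] * example.get(f, 0.0) for f in FEATURES)

def decide(example, threshold=0.5):
    """Interpret the raw score as a useful answer for the user."""
    return "flag for attention" if score(example) > threshold else "file for later"

email = {"urgency": 1.0, "sender_is_boss": 1.0, "mentions_deadline": 0.0}
print(decide(email))  # -> flag for attention
```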

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#110: Aug 4th 2012 at 12:06:52 AM

"My understanding is that an AI and an ES are fundamentally different things, such that one isn't going to lead to the other."
This is not clear. There are some researchers who think that an AI might grow out of an expert system (that's what the Cyc project is about, for example), and there are others who think that this is a silly idea.

Myself, I can state with the utmost confidence that I have no clue.

edited 4th Aug '12 12:07:13 AM by Carciofus

But they seem to know where they are going, the ones who walk away from Omelas.
DeMarquis Since: Feb, 2010
#111: Aug 4th 2012 at 9:57:00 AM

@Bread: "it grows out of a sufficiently good ES, as far as current understanding sees it because many of the ES is based on what occurs in nature except it performs worse."

Well, yes and no. I think you are technically correct, in the sense that the term "Artificial Intelligence" actually refers to the whole range of computational science devoted to better understanding intelligence, and "Expert Systems" are a subfield of that. So yes, an ES is a type of AI. But I've been using the term "AI" in its colloquial sense: a system that, given a superordinate goal, autonomously self-improves until the goal is reached. Such a system is intended to search a problem field and find the most efficient path to a solution, regardless of whether any natural system, including humans, uses anything like that path.

My understanding of ESs, on the other hand, is that they are designed to mimic some aspect of human decision making. This most often takes the form of a "profession in a box", i.e. a computer therapist or a computer chess master; Eliza and Deep Blue are the kinds of things I'm thinking of. Neither program is intended to autonomously self-improve, except in very limited ways. No such system could change or determine its own areas of expertise, the way I think a so-called "true AI" should be able to do. To be technically accurate I should call this thing an "AGI", or Artificial General Intelligence, that is, a machine capable of general intelligent action. Here's a resource page, if you are interested in learning more: https://sites.google.com/site/narswang/home/agi-introduction

Of course, I'm not an expert on any of this, so my understanding of things might be wrong.

Now, personal assistants are a profession, so naturally you could design an expert system to simulate what they do. Of course, someone is claiming that they now have an app for that: http://www.imserba.com/forum/speerio-voice-organizer-personal-ai-secretary-v3-1-1-xscale-wm2003-i-corepda-t111033/

edited 4th Aug '12 9:58:08 AM by DeMarquis

breadloaf Since: Oct, 2010
#112: Aug 5th 2012 at 12:38:14 PM

Well, I understand the confusion over semantics, but I wanted to point out that nothing in the definition of an ES prevents it from improving autonomously; that is actually part of its definition. We just don't currently have an ES good enough to improve autonomously in the way the layman's definition of AI would. In essence, our current slew of ESs isn't good enough, but they are not defined to be this sucky.

Carciofus Is that cake frosting? from Alpha Tucanae I Since: May, 2010
#113: Aug 5th 2012 at 12:55:32 PM

"That is actually part of its definition."
Is it? I thought that the early expert systems — you know, MYCIN and so on — had no learning at all, and relied entirely on hardcoded rules.

Still, revising the system's degree of confidence in rules according to their success rates is a very mainstream concept by now, as far as I know.
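That confidence-revision idea can be sketched in a few lines; the rule name and smoothing counts below are invented, not how MYCIN actually worked:

```python
# Sketch of the idea above: track each hardcoded rule's success rate
# and revise its confidence from observed outcomes.

class Rule:
    def __init__(self, name):
        self.name = name
        self.hits = 1   # Laplace-style smoothing so a brand-new rule
        self.tries = 2  # starts at a neutral confidence of 0.5

    def confidence(self):
        return self.hits / self.tries

    def record(self, success):
        """Update the counts after the rule fires and is verified."""
        self.tries += 1
        self.hits += 1 if success else 0

r = Rule("fever_and_rash -> measles")  # made-up rule for illustration
for outcome in [True, True, False, True]:
    r.record(outcome)
print(round(r.confidence(), 2))  # -> 0.67 (4 hits out of 6 tries)
```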

But they seem to know where they are going, the ones who walk away from Omelas.
breadloaf Since: Oct, 2010
#114: Aug 5th 2012 at 1:01:19 PM

Most ESs today have learning in them, and MYCIN had "training" as well. Yes, yes, semantics and all, but learning is learning. The scale of what it can learn is limited by the number of "features" you put into an ES-based AI. The next step would be an AI that can pick up new features, but would anybody in the AI world stop calling that an ES? No, because it'd just be a more awesome ES.

I think there's a serious misunderstanding of what anything means in the AI world. Expert Systems have changed substantially in the past 40 years, from originally being static engines, such as logic engines, to newer trainable systems such as artificial neural networks (even a simple weighted perceptron) or Bayesian filters. We didn't stop calling them Expert Systems; the newer systems require training and can learn from new data, but they are limited by the fact that they cannot gain new features. Still, they're substantially more powerful than before, because they learn automatically as you feed them more data. And what will Expert Systems be like 40 years from now?
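As a concrete example of a trainable system over a fixed feature set, here is a minimal perceptron; the training data (logical AND) and learning rate are chosen just for illustration:

```python
# Minimal perceptron of the kind mentioned above: weights over a fixed
# feature set, improved automatically from training data.

def predict(w, x, bias):
    """Weighted sum plus bias, thresholded to a 0/1 answer."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else 0

def train(samples, epochs=10, lr=0.1):
    """Classic perceptron rule: nudge weights toward each error."""
    w, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, x, bias)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            bias += lr * err
    return w, bias

# Learn logical AND: linearly separable, so the perceptron converges.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, x, b) for x, _ in data])  # -> [0, 0, 0, 1]
```

Note the limitation breadloaf describes: the feature set (here, two inputs) is fixed at design time; training only tunes the weights over it.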

This is why I think there's this misunderstanding: an image that there are these magical "clear lines" of separation between learning and not learning, when there is no such thing.

edited 5th Aug '12 1:05:13 PM by breadloaf

DeMarquis Since: Feb, 2010
#115: Aug 5th 2012 at 1:01:58 PM

The problem with ES-style self-improvement is that they are designed to be limited. A medical diagnosis ES simply can't recognize data that doesn't pertain to medical diagnoses (why would it?). That's a design feature, not a shortcoming. A true AGI wouldn't have that limitation.

Hence my amusement when I thought that such a thing was proposed as a kind of super-palm pilot.

edited 5th Aug '12 1:03:06 PM by DeMarquis

Total posts: 115