What do you think of LessWrong?


DeMarquis (4 Score & 7 Years Ago)
#176: Apr 4th 2011 at 7:26:47 PM

Well, heck. If someone could PM me regarding Ardiente, please do so. Thanks.

I'm done trying to sound smart. "Clear" is the new smart.
Desertopa Not Actually Indie Since: Jan, 2001
Not Actually Indie
#177: Apr 4th 2011 at 7:27:08 PM

I suspect that the real issue separating people with differing opinions on the Amanda Knox case isn't Bayes; it's how seriously people take ideas like "reasonable doubt".

Did you read the link? I am pretty sure you would not be saying that if you had read the link. This is very frustrating to me, you know. When people practiced in the rational interpretation of evidence and the debugging of cognitive biases tend to assign probabilities of guilt at small fractions of a percent, while people practicing uninformed rationalism argue about things like what constitutes reasonable doubt, there is a significant difference going on.

...eventually, we will reach a maximum entropy state where nobody has their own socks or underwear, or knows who to ask to get them back.
rbx5 Rbx5 Since: Jan, 2001
Rbx5
#178: Apr 4th 2011 at 7:43:16 PM

So, what with the whole "omniscient supercomputer from the future/recursive causality" thing...does that mean Less Wrong considers Terminator a dire work of prophecy?

Sorry, just couldn't resist. *evil grin* It's not even that funny, but it tickles me.

I'll turn your neocortex into a flowerpot!
silver2195 Since: Jan, 2001
#179: Apr 4th 2011 at 7:44:33 PM

I did read the link. The average (presumably mean, not median) estimate of guilt by Less Wrong readers is 35%. Taking into account the fact that many of them are spouting hyperbolically low numbers, the difference in opinion isn't really so large.

edited 4th Apr '11 7:49:58 PM by silver2195

Currently taking a break from the site. See my user page for more information.
Uchuujinsan Since: Oct, 2009
#180: Apr 4th 2011 at 8:02:23 PM

After reading (part of) that Amanda Knox article, I remembered what I dislike about Less Wrong. It's not that the texts are difficult to read; it's that they are incomplete. I haven't read a single article that has a conclusive and flawless argument. In the Amanda Knox case, he goes on about "locality", claiming that it "is" a law of the physics of the universe. Well, we don't actually know that. We consider it a law.

Another case was assuming the linearity of inconvenience: that two dust specks in one eye are somehow twice as inconvenient as one dust speck in one eye. The problem wasn't that the assumption was made explicitly; it was an implicit assumption that was simply ignored and never flagged as such.

In the few articles I read, I almost always got the feeling that the line of reasoning for proving F under the assumption of A goes like A -> B -> C ... D -> E -> F, with the magic jump from C to D hidden somewhere in the article.

I have to agree that the general ideas are interesting and thought-provoking, but I feel there is a lack of scepticism about their own opinions. Not enough self-doubt - and it shows.

Pour y voir clair, il suffit souvent de changer la direction de son regard www.xkcd.com/386/
BlackHumor Since: Jan, 2001
#181: Apr 4th 2011 at 8:19:01 PM

In theory, you can construct a brain from scratch, and in principle, you can even construct one with specific memories and personality, but you can't make it identical to a brain whose information you don't have access to. It would be like trying to reproduce the lost plays of Aristophanes based on knowledge of his writing style and the titles.

I can reproduce any lost play of Aristophanes you want, if you give me the length and a Greek font.

The caveats are that it will take longer than the universe has existed, and, more relevantly to this example, that you wouldn't actually be able to pick it out from all the possible plays by Aristophanes. But I can do it. (In fact I can do it much more efficiently than I said up there if I can assume that Aristophanes wrote in proper Greek. It will still be slow, too slow to bother actually trying it, but not having to generate tons of absolute gibberish would help quite a bit.)
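The search-space arithmetic behind this brute-force claim is easy to sketch. The alphabet size, play length, and enumeration rate below are illustrative assumptions, not figures from the post:

```python
import math

# Illustrative assumptions (not from the post): the classical Greek alphabet
# plus a couple of separator characters, and a rough length for a comedy.
ALPHABET_SIZE = 26        # 24 letters + space + line break
PLAY_LENGTH = 100_000     # characters, a rough guess for a full play

# Number of decimal digits in ALPHABET_SIZE ** PLAY_LENGTH, computed via
# logarithms so we never materialize the astronomically large integer.
digits = PLAY_LENGTH * math.log10(ALPHABET_SIZE)

# For scale: the universe is ~4.4e17 seconds old. Even enumerating an
# (assumed) 1e18 candidates per second covers only ~36 digits of the space.
covered_digits = math.log10(4.4e17) + math.log10(1e18)
```

The gap between `digits` (over a hundred thousand) and `covered_digits` (about 36) is the "longer than the universe has existed" caveat in numerical form.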

If all you wanted was to survive, and if an information copy of your brain is truly you, it wouldn't matter if you were rez'd alongside literally every other possible intelligence. So the big problem would be reducing "construct every possible human mind from scratch" to something small enough that it resolves before the machine dies.

If Moore's Law holds infinitely (as all singularitarians seem to assume), the second problem would be trivial to solve. So there shouldn't be any problem, in The Future, with resurrecting any human being on Earth even without any information about them at all. Of course, future people might develop a way to do this with less waste, but my method still holds regardless.

So then, stepping back a level, I've just used only premises Eliezer accepts to prove that cryonics is unnecessary. He would therefore either have to accept my argument, or else meta-argue that it's not possible to prove something with the kind of leaky real-world premises I'm using, and thus kill his own argument, because that's the kind of argument he used to argue for cryonics.*

Pykrete NOT THE BEES from Viridian Forest Since: Sep, 2009
NOT THE BEES
#182: Apr 4th 2011 at 8:21:19 PM

If Moore's Law holds infinitely (as all singularitarians seem to assume)

That's a rather dangerous assumption, seeing how there are physical limits to it and we're already rather close to them.

Oscredwin Cold. from The Frozen East Since: Jan, 2001
Cold.
#183: Apr 4th 2011 at 9:26:56 PM

Plan A: Not die

Plan B: Get vitrified after death and revived at a later date.

Plan C: Be friends with a lot of people the fAGI pays close attention to and be reconstructed to the point they can't tell it apart from me.

Also, I'm going to become a cryonics insurance salesman as a sideline sometime in Mayish. I've formed the second (explicit) chapter of the Bayesian Conspiracy; the first is at Hogwarts. I'm dating a rationalist whom I met through Less Wrong and who is smarter than me. I'm the happiest I've ever been. And now, lurking on a forum I used to post on, I find people bringing me up.

=)

edited 4th Apr '11 9:27:25 PM by Oscredwin

Sex, Drugs, and Rationality
Acatalepsy The Map To Madness Since: Mar, 2010
The Map To Madness
#185: Apr 4th 2011 at 10:30:17 PM

Plan A: Not die

Plan B: Get vitrified after death and revived at a later date.

Plan C: Be friends with a lot of people the fAGI pays close attention to and be reconstructed to the point they can't tell it apart from me.

I like that plan. It's pretty much the one I have. Personally, how do you intend to accomplish number one? It seems pretty important, and in all honesty the most likely to succeed, but actually implementing it is... tricky, to say the least. So far I've mostly just kept an eye on the Methuselah Foundation website, but it's likely that I'll have to actually begin some form of treatment in the next decade, and I don't see anything being tested in the near future as safe for use on humans, and I don't have the medical knowledge myself to determine what the best bet is if nothing is.

Thoughts on this?

Oscredwin Cold. from The Frozen East Since: Jan, 2001
Cold.
#186: Apr 4th 2011 at 10:38:26 PM

Mostly donating to Aubrey de Grey. Taking care of myself (I'm turning 27 in two weeks). If the singularity hits by 2050, I'll be 66; it's not hard to reach that age. And making sure that if I die, my brain is preserved. I think my chances are pretty good.

Sex, Drugs, and Rationality
Acatalepsy The Map To Madness Since: Mar, 2010
The Map To Madness
#187: Apr 4th 2011 at 10:46:57 PM

I tend to assume that the Singularity, at least as Less Wrong thinks of it, is a major longshot. Robin Hanson's version is more likely... but even then, that's not a particularly good scenario. I can't envision a research program that gets us from here to there, and given the nature of such predictions, it's generally best to add a factor of safety of five or so - i.e., if your best estimate is 20 years, assume 100; if your best estimate is 40, then 200; etc.

For me, it seems incredibly... wishful, I suppose... to presume that immortality you can use will be developed within your natural lifespan. All too prone to biases that I need not elaborate on. And if immortality is developed within your life, great! If it's not, and you don't plan on dying, you need a Plan B. The product of the odds of being wrong and the cost of being wrong is simply too high to ignore.

http://img36.imageshack.us/i/smbcimmortal.gif/

edited 4th Apr '11 10:50:30 PM by Acatalepsy

Oscredwin Cold. from The Frozen East Since: Jan, 2001
Cold.
#188: Apr 4th 2011 at 10:58:43 PM

For me, it seems incredibly...wishful, I suppose...to presume that immortality you can use will be developed within your natural lifespan. All too prone to biases that I need not elaborate.
You're right. I think I've compensated for those biases, but who can be sure? That's the reason I'm also signed up for cryonics.

Frankly, on the subject of the singularity, I'm trusting the estimates of many different people who are smarter than me, and a few who aren't but have studied it more than I have. Most of the long-term estimates are like yours, citing caution. But all the reasons for the 100+ year timeline are intuition, and that seems like a mistake to me. There have been three events like this in history (replicators, the Cambrian explosion, humans). I'm also reminded of the prediction, made after the Wright brothers' plane entered manufacturing, that within 50 years airplanes would be useful in war for reconnaissance missions. This was dismissed as wildly optimistic.

Sex, Drugs, and Rationality
Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
DUMB
#189: Apr 4th 2011 at 11:01:00 PM

Yes, well, for every anecdote about an accurate prediction being dismissed as wild, there are plenty about wild predictions being dismissed as wild. The causal inference is improper anyway.

[1] This facsimile operated in part by synAC.
Acatalepsy The Map To Madness Since: Mar, 2010
The Map To Madness
#190: Apr 4th 2011 at 11:15:21 PM

The 100+ year timeline is mostly because what I know about nanotechnology (I'm trying to specialize in it, but my uni's not being cooperative, research positions are rare, etc.) is that there are a lot of really hard engineering problems that don't lend themselves to easy solutions. And no one knows anything about AI development; ironically, the buzz over Watson made me even more cynical about research in this regard. It was touted as a major achievement... when it really wasn't. It really, really wasn't.

My basic assumptions on this are: assume no singularity. Simply put, the concepts are fairly speculative, and while I certainly think it's a possibility, if one happens my predictions are useless anyway, so I might as well ignore it for now.

Second, the human track record on extending lifespans is not so good. We've been stuck with pretty much the same maximum lifespan for a while now; even as the average increases, that's mostly due to the ability to save people who would have died, only for them to die later within their 'natural' lifespan. So far we have almost literally zero useful experience actively prolonging human life. De Grey's work and the awareness he raises are heartening... but even if we imagine that one of the treatments currently being worked on, or discovered by the M-Prize people, pans out, how long does it take to go from mice to men (and women, of course)? Historically that has actually been a relatively short time; if something works, it gets approved pretty quickly. But proving that a long-term treatment has an effect is difficult. And again, we have literally no experience with this sort of thing. Most medical treatments are designed to stop some condition; this one is designed to change the human condition.

And I think we both know that cryonics is a long shot. Better than nothing, to be sure... but a shot you don't want to have to take.

To me, that means that "Plan A: Don't Die" will require some action on my part. Do you disagree?

Tzetze DUMB from a converted church in Venice, Italy Since: Jan, 2001
DUMB
#191: Apr 4th 2011 at 11:17:31 PM

It was touted as a major achievement... when it really wasn't. It really, really wasn't.

This is what AI research has been like since AI research started. Luckily, projects do go places... except when they're touted as amazing and revolutionary and so on, in which case they're almost certainly vaporware. (In the middle of Paradigms of Artificial Intelligence Programming, aw yeah.)

[1] This facsimile operated in part by synAC.
Oscredwin Cold. from The Frozen East Since: Jan, 2001
Cold.
#192: Apr 4th 2011 at 11:29:17 PM

Firstly, the popular media has no clue about anything regarding technology development. (The last three words of that sentence may have been extraneous.) Secondly, even if we can't build an AI from scratch, neuroimaging is a fast-developing science. We're getting a LOT of data on how the brain is set up; this information isn't spreading to the AI people as fast as it might, but if it does, they'll get a jump. Also, this seems like an area where there would be no visible results until FOOM. EY has posts on why this happens; I'll dig them up if you really want.

Sex, Drugs, and Rationality
Acatalepsy The Map To Madness Since: Mar, 2010
The Map To Madness
#193: Apr 5th 2011 at 6:39:39 AM

Actually, it's not true that there will be no visible results until FOOM. There are a ton of intermediate steps between where we are now and where FOOM happens, and no system currently under development, or planned to be in development today, is FOOMable. We still have very little idea how to code skill acquisition. What machines do now, machine learning, is more like a really awesome curve-fitting algorithm; that's not unimportant, but it's very far from the sort of thing humans do routinely, and skill acquisition is absolutely a prerequisite for an AI that can go FOOM.
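The "really awesome curve-fitting algorithm" characterization can be made concrete with the simplest possible learner, an ordinary least-squares line fit. This is a toy sketch for illustration, not anyone's actual system:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (pure stdlib)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# Given data generated by y = 2x + 1, the fit recovers the pattern exactly,
# but it has acquired no skill beyond interpolating this one relationship.
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
```

The point of the toy: the fitter generalizes within the curve family it was handed, which is a far cry from open-ended skill acquisition.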

TheyCallMeTomu Since: Jan, 2001 Relationship Status: Anime is my true love
#194: Apr 5th 2011 at 7:22:55 AM

I'm frankly more concerned about curing brain cancer. I mean, if the data gets corrupted before you die, being frozen isn't going to get it back.

Desertopa Not Actually Indie Since: Jan, 2001
Not Actually Indie
#195: Apr 5th 2011 at 7:34:23 AM

If Moore's Law holds infinitely (as all singularitarians seem to assume) the second problem would be trivial to solve.

I know one member of Less Wrong who thinks this, and everyone else thinks he's being ridiculous.

edit: I went back and rechecked the discussion with him, and he ultimately admitted that he doesn't think Moore's Law can really continue infinitely. Which brings the total count I'm aware of to zero.

I did read the link. The average (presumably mean, not median) estimate of guilt by Less Wrong readers is 35%. Taking into account the fact that many of them are spouting hyperbolically low numbers, the difference in opinion isn't really so large.

Hyperbolically low? There's virtually no reason to suppose that Knox and Sollecito committed the murder, no reason to assign a higher likelihood to their guilt than the prior probability, given the scenario of a group of friends sharing an apartment, that they would kill her. Hearing high probabilities discussed for their likelihood of complicity produces an anchoring effect that tags probabilities like, say, 10% or 5% as "reasonable" adjustments, but if you simply follow the evidence from the source, there's no reason to suppose they even approach a one in a thousand likelihood of complicity. The fact that so few people elsewhere realize this is frankly depressing.

0.35 is the average probability given in the independent analysis, which is admittedly not nearly as good as it could have been; in particular, since it's not a log average, the very low estimates are given unduly little weight. After further discussion, once the members could see the reasons given by those who had assigned extremely low probabilities, most of those who had assigned non-negligible probability to Knox and Sollecito's guilt realized they had been wrong. It's not a persistent controversy.
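The difference between a plain mean and a log (geometric) average is easy to see with made-up numbers. Only the ~35% arithmetic mean echoes the thread; the individual estimates below are invented for illustration:

```python
import math

# Hypothetical guilt estimates: two confident outliers near 0.1%, the rest
# mid-range, chosen so the plain mean lands near the thread's 35% figure.
estimates = [0.001, 0.001, 0.01, 0.05, 0.3, 0.5, 0.6, 0.64, 0.7, 0.7]

# Arithmetic mean: the confident low outliers barely move it.
arith_mean = sum(estimates) / len(estimates)

# Geometric (log) average: mean of the logs, then exponentiate. The
# confident low estimates pull this far below the arithmetic mean.
geo_mean = math.exp(sum(math.log(p) for p in estimates) / len(estimates))
```

With these numbers the arithmetic mean sits around 0.35 while the geometric mean drops below 0.1, which is the sense in which a non-log average gives very low estimates little weight.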

edited 5th Apr '11 7:52:36 AM by Desertopa

...eventually, we will reach a maximum entropy state where nobody has their own socks or underwear, or knows who to ask to get them back.
Oscredwin Cold. from The Frozen East Since: Jan, 2001
Cold.
#196: Apr 5th 2011 at 7:51:39 AM

"Computational power will continue to increase until it reaches the theoretical maximum" is a much more common thought on Less Wrong. Hiccups are expected, especially if quantum computing isn't a real industry by the time integrated circuits hit their physical limits.

"Computational power will increase" is a different idea from "Moore's Law will continue."

Sex, Drugs, and Rationality
Desertopa Not Actually Indie Since: Jan, 2001
Not Actually Indie
#197: Apr 5th 2011 at 7:59:18 AM

There are theoretical limits to how much computing power you can get out of a given amount of matter anyway; some calculations are infeasible even in theory, given the amount of matter in the universe available to be turned into computationally optimized material. But there's not much need to invoke entities with nigh-infinite computing power when so much is changed simply by calling upon computation many orders of magnitude more powerful than what we have, which is far easier to achieve.
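One concrete version of a "theoretical limit per amount of matter" is Bremermann's limit, mc²/h bits per second for a mass m. A back-of-the-envelope sketch (the "ultimate laptop" framing is an illustration, not from the post):

```python
# Physical constants (SI units).
C = 2.998e8      # speed of light, m/s
H = 6.626e-34    # Planck constant, J*s

def bremermann_limit(mass_kg: float) -> float:
    """Bremermann's limit: maximum bits per second computable by mass_kg."""
    return mass_kg * C ** 2 / H

# A one-kilogram computer tops out around 1.4e50 bits per second: dozens of
# orders of magnitude beyond today's hardware, but still finite.
one_kg_rate = bremermann_limit(1.0)
```

Finite but astronomically large, which matches the point above: "many orders of magnitude more powerful" does most of the argumentative work without needing nigh-infinite computing.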

...eventually, we will reach a maximum entropy state where nobody has their own socks or underwear, or knows who to ask to get them back.
TheyCallMeTomu Since: Jan, 2001 Relationship Status: Anime is my true love
#198: Apr 5th 2011 at 8:02:23 AM

Well, obviously, the entire planet is actually one big super-computer.

Is it wrong of me to think of the limitations of Cryonics as the same as Keep Circulating the Tapes?

edited 5th Apr '11 9:21:06 AM by TheyCallMeTomu

Capt.Fargle Since: Dec, 1969
#199: Apr 7th 2011 at 10:21:00 PM

You know, thinking about it, that actually seems like a remarkably apt comparison.

Interesting.

Mr.Cales Since: Oct, 2009
#200: Oct 26th 2012 at 3:42:26 PM

Immortality would be awesome, but yeah... it's unlikely at best. Besides, if it were invented, it'd be out of our price range; you think they'd sell that shit cheap, or even give it away? That's real control: the rich and powerful get to be immortal and we don't. I can't imagine a greater power than that.

Will it break? Yes; nothing lasts forever, not even forever (heat death or the Big Crunch will get it first, depending on which view of the universe holds true). But it'll be a damn long time before immortality reaches the common folk, and that's *after* it gets invented.


Total posts: 210