History: Blog/LessWrong

Reason:
Added example(s)

Added DiffLines:

* IKnowYouKnowIKnow: The heart of acausal trade as a concept is the ability to simulate the decision-making of your counterparty, based on what they are in a position to know, even as they simulate your decision-making based on what you are in a position to know.[[note]]Many Less Wrong regulars (and Yudkowsky in particular) are familiar with the traditional {{Chessmaster}}[=-duel=] version of the trope from works such as Manga/DeathNote.[[/note]]
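A minimal Python sketch of that mutual-simulation loop may help; everything here (the agent names, the depth cutoff, the optimistic base case) is invented for illustration and is not taken from the Less Wrong material:

```python
# Illustrative sketch of the mutual simulation behind acausal trade.
# Each agent decides by running its counterparty's decision procedure,
# which in turn runs this one, so the regress needs a cutoff.

def fairbot(opponent, depth=3):
    """Cooperate if and only if the simulated opponent cooperates.

    `depth` bounds the "I simulate you simulating me..." regress;
    at depth 0 we optimistically assume cooperation, which lets two
    copies of this procedure settle on mutual cooperation.
    """
    if depth == 0:
        return "C"
    # Run the counterparty's decision procedure against our own.
    return "C" if opponent(fairbot, depth - 1) == "C" else "D"

def defectbot(opponent, depth=3):
    """Defects unconditionally, whatever it predicts about us."""
    return "D"

print(fairbot(fairbot))    # C: the two simulators reach cooperation
print(fairbot(defectbot))  # D: an unconditional defector gains nothing
```

The optimistic base case is a deliberate modelling choice: make it pessimistic and the same two agents collapse into mutual defection, which is exactly the regress problem the trope plays with.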

Added: 1140

Changed: 44

Removed: 1096

Reason:
Spelling/grammar fix(es), Alphabetizing example(s)


* DigitalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]], built at some point in the future to help humanity, that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner, it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]], this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff, [[StreisandEffect not that that helped]].
** A lot of discussion has gone into the topic, essentially dismissing it as a nerdy version of PascalsWager. Others have questioned how exactly an AI could torture people who would be long dead, why someone would program such an AI in the first place, why it would waste resources on torturing people, or why it couldn't use positive reinforcement instead.



* {{Neologizer}}: The community has been criticised for making up their own terms for things, often even when they know that the concepts already have names; for example, [[https://www.lesswrong.com/posts/NEeW7eSXThPz7o4Ne/thou-art-physics "requiredism"]] instead of [[https://tvtropes.org/pmwiki/pmwiki.php/Main/SlidingScaleOfFreeWillVsFate "compatibilism"]].
* DigitalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]], built at some point in the future to help humanity, that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner, it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]], this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff.
** A lot of discussion has gone into the topic, essentially dismissing it as a nerdy version of PascalsWager. Others have questioned how exactly an AI could torture people who would be long dead, why someone would program such an AI in the first place, why it would waste resources on torturing people, or why it couldn't use positive reinforcement instead.

to:

* {{Neologizer}}: The community has been criticised for making up their own terms for things, often even when they know that the concepts already have names; for example, [[https://www.lesswrong.com/posts/NEeW7eSXThPz7o4Ne/thou-art-physics "requiredism"]] instead of [[https://tvtropes.org/pmwiki/pmwiki.php/Main/SlidingScaleOfFreeWillVsFate "compatibilism"]].
* DigitalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]], built at some point in the future to help humanity, that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner, it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]], this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff.
** A lot of discussion has gone into the topic, essentially dismissing it as a nerdy version of PascalsWager. Others have questioned how exactly an AI could torture people who would be long dead, why someone would program such an AI in the first place, why it would waste resources on torturing people, or why it couldn't use positive reinforcement instead.
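The Pascal's Wager reading above can be made concrete with a small expected-utility sketch; every probability and utility below is invented purely for illustration and is not from the original discussion:

```python
# Why critics read Roko's Basilisk as a Pascal's Wager: a tiny
# probability attached to an enormous disutility swamps any concrete
# cost, and an equally speculative mirror-image scenario cancels it.
# All numbers are made up for this sketch.

P_BASILISK = 1e-9   # assumed credence that the punishing AI ever arises
P_ANTI = 1e-9       # matching credence in an "anti-basilisk" that
                    # punishes compliance instead
TORTURE = -1e15     # assumed disutility of being punished
COST = -1e4         # concrete cost of complying (funding AI research)

# Wager-style argument: the speculative term dwarfs the concrete one,
# so complying looks "rational" despite the far-fetched premise.
print(COST, P_BASILISK * TORTURE)      # -10,000 vs. -1,000,000

# Many-gods rebuttal: with a symmetric anti-basilisk the speculative
# terms appear on both sides, and only the concrete cost remains.
ev_comply = COST + P_ANTI * TORTURE    # -1,010,000
ev_ignore = P_BASILISK * TORTURE       # -1,000,000
print(ev_comply, ev_ignore)            # ignoring comes out ahead by
                                       # exactly the concrete cost
```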
Reason:
Added example(s)

Added DiffLines:

* {{Neologizer}}: The community has been criticised for making up their own terms for things, often even when they know that the concepts already have names; for example, [[https://www.lesswrong.com/posts/NEeW7eSXThPz7o4Ne/thou-art-physics "requiredism"]] instead of [[https://tvtropes.org/pmwiki/pmwiki.php/Main/SlidingScaleOfFreeWillVsFate "compatibilism"]].
Reason:
None


* InsideAComputerSystem: Even aside from ''Fanfic/FriendshipIsOptimal'', the concept of being part of a computer simulation is discussed surprisingly often, most notably as an aspect of [[https://www.lesswrong.com/tag/acausal-trade acausal trade]] and situations like [[https://www.lesswrong.com/posts/JEonKyNJBcx5LJskE/a-full-explanation-to-newcomb-s-paradox Newcomb's paradox]].

to:

* InsideAComputerSystem: Even aside from ''Fanfic/FriendshipIsOptimal'' and other discussion of BrainUploading, the concept of being part of a computer simulation is discussed surprisingly often, most notably as an aspect of [[https://www.lesswrong.com/tag/acausal-trade acausal trade]] with AIs and situations like [[https://www.lesswrong.com/posts/JEonKyNJBcx5LJskE/a-full-explanation-to-newcomb-s-paradox Newcomb's paradox]] or Roko's Basilisk.
Reason:
None

Added DiffLines:

* InsideAComputerSystem: Even aside from ''Fanfic/FriendshipIsOptimal'', the concept of being part of a computer simulation is discussed surprisingly often, most notably as an aspect of [[https://www.lesswrong.com/tag/acausal-trade acausal trade]] and situations like [[https://www.lesswrong.com/posts/JEonKyNJBcx5LJskE/a-full-explanation-to-newcomb-s-paradox Newcomb's paradox]].
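For readers meeting the paradox here for the first time, a short expected-value sketch with the standard payoffs may help; the predictor accuracy `p` is a parameter introduced here for illustration, not part of the original statement:

```python
# Newcomb's paradox as an expected-value table. Standard payoffs;
# the predictor's accuracy p is an assumed free parameter.

BOX_A = 1_000       # transparent box, always contains $1,000
BOX_B = 1_000_000   # opaque box, filled only if one-boxing was predicted

def expected_value(one_box: bool, p: float) -> float:
    """Expected payoff when the predictor is right with probability p."""
    if one_box:
        # Right (p): box B was filled. Wrong (1 - p): it is empty.
        return p * BOX_B
    # Right (p): box B is empty, so we get only box A.
    # Wrong (1 - p): box B was filled anyway, and we get both.
    return p * BOX_A + (1 - p) * (BOX_A + BOX_B)

for p in (0.5, 0.9, 0.99):
    print(p, expected_value(True, p), expected_value(False, p))
# One-boxing pulls ahead once p > 0.5005; the predictor only needs to
# be barely better than chance at these stakes.
```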
Reason:
None


* MechanicalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]], built at some point in the future to help humanity, that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner, it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]], this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff.

to:

* DigitalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]], built at some point in the future to help humanity, that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner, it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]], this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff.

Added: 1180

Changed: 17

Reason:
None


* MechanicalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]], built at some point in the future to help humanity, that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner, it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]], this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff.
** A lot of discussion has gone into the topic, essentially dismissing it as a nerdy version of PascalsWager. Others have questioned how exactly an AI could torture people who would be long dead, why someone would program such an AI in the first place, why it would waste resources on torturing people, or why it couldn't use positive reinforcement instead.



* TheSingularity: With the twist that it's seen in a (mostly) positive light.



* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal, though it should be noted that their emphasis on ''why'' is kind of skewed compared to others; see LivingForeverIsAwesome. Most Transhumanists are more in it to make themselves and others better.

to:

* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal, though it should be noted that their emphasis on ''why'' is kind of skewed compared to others; see LivingForeverIsAwesome. Most Transhumanists are more in it to make [[ForHappiness themselves and others better]].


** The community itself. Given their focus on cold logic and disdain for public opinion, once in a while they develop ideas that are tactless or simply absurd to an outsider (Roko's basilisk, musings on the utilitarian value of racism, cryonics as an ethical necessity, etc.).
Reason:
Please don't unnecessarily spread infohazards, even debunked ones, as they can lead people to imagining genuine infohazards. TV Tropes may ruin your life, but that doesn't mean it should try. If you MUST mention it, please update the bullet point to reflect the fact that Less Wrong no longer deletes references to it (though obviously does not go out of its way to spread them). As well, I would recommend replacing the Rational Wiki link with the following Less Wrong wiki link, as it discusses things more clearly and concisely. https://wiki.lesswrong.com/wiki/Roko%27s_basilisk


* BrownNote: '''Roko's Basilisk''', to the point that any mention of it on Less Wrong's forums is deleted. Learn about it (at your own risk) [[https://www.youtube.com/watch?v=OzAzb2V7gzU here.]] A rebuttal for the terrified may be found [[http://rationalwiki.org/wiki/Roko%27s_basilisk here.]]
Reason:
None


* DeusEstMachina: Yudkowsky and some other members of Less Wrong from the Singularity Institute for Artificial Intelligence are working on making one. The [[TheSingularity Singularity]] is eagerly awaited.

to:

* DeusEstMachina: Yudkowsky and some other members of Less Wrong from the Machine Intelligence Research Institute are working on making one. The [[TheSingularity Singularity]] is eagerly awaited.
Reason:
None


The mainstream community on ''Less Wrong'' is firmly atheistic. A good number of contributors are computer professionals. Some, like founder EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI" ([[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]), and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).

to:

The mainstream community on ''Less Wrong'' is firmly atheistic. A good number of contributors are computer professionals. Some, like founder Creator/EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI" ([[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]), and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).
Reason:
None


* BrownNote: '''Roko's Basilisk''', to the point that any mention of it on Less Wrong's forums is deleted. Learn about it (at your own risk) [[https://www.youtube.com/watch?v=OzAzb2V7gzU here]]

to:

* BrownNote: '''Roko's Basilisk''', to the point that any mention of it on Less Wrong's forums is deleted. Learn about it (at your own risk) [[https://www.youtube.com/watch?v=OzAzb2V7gzU here.]] A rebuttal for the terrified may be found [[http://rationalwiki.org/wiki/Roko%27s_basilisk here.]]
Reason:
None

Added DiffLines:

* TheHorseshoeEffect: Frequently mentioned and discussed.
Reason:
None


-->--''[[http://lesswrong.com/lw/oj/probability_is_in_the_mind Eliezer Yudkowsky (Less Wrong)]]''

to:

-->-- ''[[http://lesswrong.com/lw/oj/probability_is_in_the_mind Eliezer Yudkowsky (Less Wrong)]]''



LessWrong is the source of much of the popularity of RationalFic.
* Literature/ThreeWorldsCollide is [[http://lesswrong.com/lw/y4/three_worlds_collide_08/ hosted here]].
* FanFic/HarryPotterAndTheMethodsOfRationality is occasionally [[http://lesswrong.com/r/discussion/tag/harry_potter/ discussed here]].
* FanFic/FriendshipIsOptimal [[http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_little_pony_fanfic/ originated there]].

to:

''Less Wrong'' is the source of much of the popularity of RationalFic.
* ''Literature/ThreeWorldsCollide'' is [[http://lesswrong.com/lw/y4/three_worlds_collide_08/ hosted here]].
* ''Fanfic/HarryPotterAndTheMethodsOfRationality'' is occasionally [[http://lesswrong.com/r/discussion/tag/harry_potter/ discussed here]].
* ''Fanfic/FriendshipIsOptimal'' [[http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_little_pony_fanfic/ originated there]].




!! This blog provides examples of:

to:

!!Tropes:
Reason:
None


* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal.

to:

* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal, though it should be noted that their emphasis on ''why'' is kind of skewed compared to others; see LivingForeverIsAwesome. Most Transhumanists are more in it to make themselves and others better.
Reason:
None

Added DiffLines:

* BrownNote: '''Roko's Basilisk''', to the point that any mention of it on Less Wrong's forums is deleted. Learn about it (at your own risk) [[https://www.youtube.com/watch?v=OzAzb2V7gzU here]]
Reason:
Changing the wick from a redirect to the trope.


* LogicFailure: Revealed to be shockingly common for normal human minds, and something for rationalists to avoid.

to:

* LogicalFallacies: Revealed to be shockingly common for normal human minds, and something for rationalists to avoid.
Reason:
None

Added DiffLines:

* FanFic/FriendshipIsOptimal [[http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_little_pony_fanfic/ originated there]].

Added: 268

Changed: 190

Reason:
None


** The community itself. Given their focus on cold logic and disdain for public opinion, once in a while they develop ideas that are tactless or simply absurd to an outsider (Roko's basilisk, musings on the utilitarian value of racism, cryonics as an ethical necessity, etc.).



* HumansAreFlawed: As a result of having been 'designed' slowly and very much imperfectly by the 'idiot god' that is evolution.

to:

* HumansAreFlawed: Explained as a result of having been 'designed' slowly and very much imperfectly by the 'idiot god' that is evolution.



* TalkingYourWayOut: The AI-Box Experiment.

to:

* TalkingYourWayOut: The AI-Box Experiment is a thought experiment intended to show how a superhuman intellect (like a hyper-intelligent AI) could talk its captors into anything, in particular releasing it into the world.
Reason:
None


LessWrong is the source of much of the popularity of RationalFiction.

to:

LessWrong is the source of much of the popularity of RationalFic.
Reason:
None


LessWrong is the source of much of the popularity of RationalFic.

to:

LessWrong is the source of much of the popularity of RationalFiction.
Reason:
None

Added DiffLines:

LessWrong is the source of much of the popularity of RationalFic.

Added: 266

Removed: 266

Reason:
None


* StrawVulcan: {{Averted|Trope}}. Less Wrong community members [[http://lesswrong.com/lw/hp/feeling_rational/ do not consider rationality to *necessarily* be at odds with emotion]]. [[http://lesswrong.com/lw/go/why_truth_and/ Also, Spock is a terrible rationalist.]]



* StrawVulcan: {{Averted|Trope}}. Less Wrong community members [[http://lesswrong.com/lw/hp/feeling_rational/ do not consider rationality to *necessarily* be at odds with emotion]]. [[http://lesswrong.com/lw/go/why_truth_and/ Also, Spock is a terrible rationalist.]]
Reason:
None


* FandomBerserkButton: [[http://rationalwiki.org/wiki/Roko%27s_basilisk Roko's Basilisk]]
Reason:
moved from Main + cleaning

Added DiffLines:

->''"Uncertainty exists in the map, not in the territory."''
-->--''[[http://lesswrong.com/lw/oj/probability_is_in_the_mind Eliezer Yudkowsky (Less Wrong)]]''

''[[http://lesswrong.com/ Less Wrong]]'' is a community blog devoted to rationality. Contributors draw upon many scientific disciplines for their posts, from quantum physics and Bayesian probability to psychology and sociology. The blog focuses on human flaws that lead to misconceptions about the sciences. It's a gold mine for interesting ideas and unusual views on any subject. The clear writing style makes complex ideas easy to understand.
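As a taste of the Bayesian reasoning the blog leans on, here is the canonical base-rate example worked in a few lines (the test statistics are invented for the example, not drawn from any particular post):

```python
# Bayes' rule on the classic medical-test example; all rates invented.
# This is the epigraph's point in miniature: the patient's condition
# is fixed in the territory, but the observer's probability (the map)
# moves with the evidence.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# 1% base rate, 95% sensitivity, 10% false-positive rate:
print(posterior(0.01, 0.95, 0.10))   # ~0.088: even after a positive
                                     # test, the disease stays unlikely
```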

The mainstream community on ''Less Wrong'' is firmly atheistic. A good number of contributors are computer professionals. Some, like founder EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI" ([[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]), and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).

* Literature/ThreeWorldsCollide is [[http://lesswrong.com/lw/y4/three_worlds_collide_08/ hosted here]].
* FanFic/HarryPotterAndTheMethodsOfRationality is occasionally [[http://lesswrong.com/r/discussion/tag/harry_potter/ discussed here]].
----

!! This blog provides examples of:

* AntiAdvice: [[DefiedTrope Called out as fallacious]]; [[http://wiki.lesswrong.com/wiki/Reversed_stupidity_is_not_intelligence reversed stupidity is not intelligence]].
* BackFromTheDead: Some in the Less Wrong community hope to achieve this through [[HumanPopsicle cryonics]].
* BanOnPolitics: It's generally agreed that talking about contemporary politics leads to {{FlameWar}}s and little else. See PhraseCatcher, below.
* BlueAndOrangeMorality: One of the core concepts of Friendly AI is that it's entirely possible to make something as capable as a human being that has completely alien goals. Luckily, there's already an example of [[http://lesswrong.com/lw/kr/an_alien_god/ an 'optimization process' completely unlike a human mind]] right here on Earth that we can use to see how good we are at truly understanding the concept.
-->"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
* ConceptsAreCheap: [[http://lesswrong.com/lw/jb/applause_lights/ Applause Lights]].
* DeusEstMachina: Yudkowsky and some other members of Less Wrong from the Singularity Institute for Artificial Intelligence are working on making one. The [[TheSingularity Singularity]] is eagerly awaited.
* FandomBerserkButton: [[http://rationalwiki.org/wiki/Roko%27s_basilisk Roko's Basilisk]]
* HumansAreFlawed: As a result of having been 'designed' slowly and very much imperfectly by the 'idiot god' that is evolution.
* LivingForeverIsAwesome: Almost everyone on Less Wrong. Hence, the strong Transhumanist bent.
* LogicFailure: Revealed to be shockingly common for normal human minds, and something for rationalists to avoid.
* PhraseCatcher: The FlameBait topic of politics is met with "[[http://wiki.lesswrong.com/wiki/Politics_is_the_Mind-Killer politics is the mind-killer]]".
* TalkingYourWayOut: The AI-Box Experiment.
* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal.
* StrawVulcan: {{Averted|Trope}}. Less Wrong community members [[http://lesswrong.com/lw/hp/feeling_rational/ do not consider rationality to *necessarily* be at odds with emotion]]. [[http://lesswrong.com/lw/go/why_truth_and/ Also, Spock is a terrible rationalist.]]
* WikiWalk: It is fairly easy to go on one due to the links in the articles to other articles. Also, certain lines of thought about similar issues are organized into 'sequences' which make them more conveniently accessible.
----
