Added example(s)
Added DiffLines:
* IKnowYouKnowIKnow: The heart of acausal trade as a concept is the ability to simulate the decision-making of your counterparty based on what they are in a position to know, as they simulate your decision-making based on what you are in a position to know.[[note]]Many Less Wrong regulars (and Yudkowsky in particular) are familiar with the traditional {{Chessmaster}}[=-duel=] version of the trope from works such as ''Manga/DeathNote''.[[/note]]
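The mutual-simulation setup above is the engine behind Newcomb-style problems, which the community discusses at length. As a toy sketch (the payoff numbers are the standard illustrative ones, and the predictor-accuracy parameter is an assumption, not anything from the site):

```python
# Toy Newcomb's problem: a predictor fills the opaque box with $1,000,000
# only if it predicts you will take just that box; the clear box always
# holds $1,000. The predictor has simulated your decision-making.

def expected_value(one_box: bool, predictor_accuracy: float) -> float:
    """Expected payout given your choice and how often the predictor is right."""
    p = predictor_accuracy
    if one_box:
        # With probability p the predictor correctly foresaw one-boxing
        # and filled the opaque box; otherwise it is empty.
        return p * 1_000_000 + (1 - p) * 0
    else:
        # With probability p the predictor correctly foresaw two-boxing
        # and left the opaque box empty; otherwise you get both payouts.
        return p * 1_000 + (1 - p) * (1_000_000 + 1_000)

# Even a merely 90%-accurate predictor makes one-boxing dominate:
assert expected_value(True, 0.9) > expected_value(False, 0.9)
```

The point of the trope is that both parties can run this calculation about each other, which is what makes "trade" without causal contact conceivable.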
Spelling/grammar fix(es), Alphabetizing example(s)
* DigitalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]], built at some point in the future to help humanity, that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner, it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]], this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough users had panic attacks and couldn't sleep at night that its discussion was [[{{Unperson}} forbidden]] by the moderation staff, [[StreisandEffect not that that helped]].
** A lot of discussion has gone into the topic, essentially dismissing it as a nerdy version of PascalsWager. Others have questioned how exactly an AI could torture people who would be long dead, why anyone would program such an AI in the first place, why it would waste resources on torturing people, or why it couldn't use positive reinforcement instead.
Changed line(s) 28,30 from:
* {{Neologizer}}: The community has been criticised for making up their own terms for things, often even when they know that the concepts already have names; for example, [[https://www.lesswrong.com/posts/NEeW7eSXThPz7o4Ne/thou-art-physics "requiredism"]] instead of [[https://tvtropes.org/pmwiki/pmwiki.php/Main/SlidingScaleOfFreeWillVsFate "compatibilism"]].
* DigitalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]] built at some point in the future to help humanity that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]] this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff.
** A lot of discussion has gone into the topic, essentially dismissing it as a nerdy version of PascalsWager. Others have questioned how exactly an AI could torture people who would be long dead, why would someone program such an AI in the first place, why would it waste resources on torturing people, or why it couldn't use positive reinforcement instead.
to:
* {{Neologizer}}: The community has been criticised for making up their own terms for things, often even when they know that the concepts already have names; for example, [[https://www.lesswrong.com/posts/NEeW7eSXThPz7o4Ne/thou-art-physics "requiredism"]] instead of [[https://tvtropes.org/pmwiki/pmwiki.php/Main/SlidingScaleOfFreeWillVsFate "compatibilism"]].
* DigitalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]] built at some point in the future to help humanity that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]] this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff.
** A lot of discussion has gone into the topic, essentially dismissing it as a nerdy version of PascalsWager. Others have questioned how exactly an AI could torture people who would be long dead, why would someone program such an AI in the first place, why would it waste resources on torturing people, or why it couldn't use positive reinforcement instead.
Added example(s)
Added DiffLines:
* {{Neologizer}}: The community has been criticised for making up their own terms for things, often even when they know that the concepts already have names; for example, [[https://www.lesswrong.com/posts/NEeW7eSXThPz7o4Ne/thou-art-physics "requiredism"]] instead of [[https://tvtropes.org/pmwiki/pmwiki.php/Main/SlidingScaleOfFreeWillVsFate "compatibilism"]].
Changed line(s) 25 from:
* InsideAComputerSystem: Even aside from ''Fanfic/FriendshipIsOptimal'', the concept of being part of a computer simulation is discussed surprisingly often, most notably as an aspect of [[https://www.lesswrong.com/tag/acausal-trade acausal trade]] and situations like [[https://www.lesswrong.com/posts/JEonKyNJBcx5LJskE/a-full-explanation-to-newcomb-s-paradox Newcomb's paradox]].
to:
* InsideAComputerSystem: Even aside from ''Fanfic/FriendshipIsOptimal'' and other discussion of BrainUploading, the concept of being part of a computer simulation is discussed surprisingly often, most notably as an aspect of [[https://www.lesswrong.com/tag/acausal-trade acausal trade]] with AIs and situations like [[https://www.lesswrong.com/posts/JEonKyNJBcx5LJskE/a-full-explanation-to-newcomb-s-paradox Newcomb's paradox]] or Roko's Basilisk.
Added DiffLines:
* InsideAComputerSystem: Even aside from ''Fanfic/FriendshipIsOptimal'', the concept of being part of a computer simulation is discussed surprisingly often, most notably as an aspect of [[https://www.lesswrong.com/tag/acausal-trade acausal trade]] and situations like [[https://www.lesswrong.com/posts/JEonKyNJBcx5LJskE/a-full-explanation-to-newcomb-s-paradox Newcomb's paradox]].
Changed line(s) 27 from:
* MechanicalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]] built at some point in the future to help humanity that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]] this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff.
to:
* DigitalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]] built at some point in the future to help humanity that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]] this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff.
* MechanicalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]] built at some point in the future to help humanity that would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since if it came into existence sooner it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]] this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and users who couldn't sleep at night led to its discussion [[{{Unperson}} being forbidden]] by the moderation staff.
** A lot of discussion has gone into the topic, essentially dismissing it as a nerdy version of PascalsWager. Others have questioned how exactly an AI could torture people who would be long dead, why would someone program such an AI in the first place, why would it waste resources on torturing people, or why it couldn't use positive reinforcement instead.
* TheSingularity: With the twist that it's seen in a (mostly) positive light.
Changed line(s) 30 from:
* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal, though it should be noted their emphasis on ''why'' is kind of skewed compared to others; see LivingForeverIsAwesome. Most Transhumanists are more in it to make themselves and others better.
to:
* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal, though it should be noted their emphasis on ''why'' is kind of skewed compared to others; see LivingForeverIsAwesome. Most Transhumanists are more in it to make [[ForHappiness themselves and others better]].
Deleted line(s) 21:
** The community itself. Given their focus on cold logic and disdain for public opinion, once in a while they develop ideas that are tactless or simply absurd to an outsider (Roko's basilisk, musings on utilitarian value of racism, cryonics as ethical necessity etc.).
Please don't unnecessarily spread infohazards, even debunked ones, as they can lead people to imagining genuine infohazards. TV Tropes may ruin your life, but that doesn't mean it should try. If you MUST mention it, please update the bullet point to reflect the fact that Less Wrong no longer deletes references to it (though obviously does not go out of its way to spread them). As well, I would recommend replacing the RationalWiki link with the following Less Wrong wiki link, as it discusses things more clearly and concisely. https://wiki.lesswrong.com/wiki/Roko%27s_basilisk
Deleted line(s) 22:
* BrownNote: '''Roko's Basilisk''', to the point that any mention of it on Less Wrong's forums is deleted. Learn about it (at your own risk) [[https://www.youtube.com/watch?v=OzAzb2V7gzU here.]] A rebuttal for the terrified may be found [[http://rationalwiki.org/wiki/Roko%27s_basilisk here.]]
Changed line(s) 24 from:
* DeusEstMachina: Yudkowsky and some other members of Less Wrong from the Singularity Institute for Artificial Intelligence are working on making one. [[TheSingularity Singularity]] is eagerly awaited.
to:
* DeusEstMachina: Yudkowsky and some other members of Less Wrong from the Machine Intelligence Research Institute are working on making one. [[TheSingularity Singularity]] is eagerly awaited.
Changed line(s) 6,7 from:
The mainstream community on ''Less Wrong'' is firmly atheistic. A good number of contributors are computer professionals. Some, like founder EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI" ([[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]), and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).
to:
The mainstream community on ''Less Wrong'' is firmly atheistic. A good number of contributors are computer professionals. Some, like founder Creator/EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI" ([[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]), and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).
Changed line(s) 22 from:
* BrownNote: '''Roko's Basilisk''', to the point that any mention of it on Less Wrong's forums is deleted. Learn about it (at your own risk) [[https://www.youtube.com/watch?v=OzAzb2V7gzU here]]
to:
* BrownNote: '''Roko's Basilisk''', to the point that any mention of it on Less Wrong's forums is deleted. Learn about it (at your own risk) [[https://www.youtube.com/watch?v=OzAzb2V7gzU here.]] A rebuttal for the terrified may be found [[http://rationalwiki.org/wiki/Roko%27s_basilisk here.]]
Added DiffLines:
* TheHorseshoeEffect: Frequently mentioned and discussed.
Changed line(s) 2,3 from:
-->--''[[http://lesswrong.com/lw/oj/probability_is_in_the_mind Eliezer Yudkowsky (Less Wrong)]]''
to:
Changed line(s) 8,11 from:
LessWrong is the source of much of the popularity of RationalFic.
* Literature/ThreeWorldsCollide is [[http://lesswrong.com/lw/y4/three_worlds_collide_08/ hosted here]].
* FanFic/HarryPotterAndTheMethodsOfRationality is occasionally [[http://lesswrong.com/r/discussion/tag/harry_potter/ discussed here]].
* FanFic/FriendshipIsOptimal [[http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_little_pony_fanfic/ originated there]].
to:
Changed line(s) 13,15 from:
!! This blog provides examples of:
to:
Changed line(s) 31 from:
* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal.
to:
* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal, though it should be noted their emphasis on ''why'' is kind of skewed compared to others; see LivingForeverIsAwesome. Most Transhumanists are more in it to make themselves and others better.
Added DiffLines:
* BrownNote: '''Roko's Basilisk''', to the point that any mention of it on Less Wrong's forums is deleted. Learn about it (at your own risk) [[https://www.youtube.com/watch?v=OzAzb2V7gzU here]]
Changing the wick from a redirect to the trope.
Changed line(s) 26 from:
* LogicFailure: Revealed to be shockingly common for normal human minds, and something for rationalists to avoid.
to:
* LogicalFallacies: Revealed to be shockingly common for normal human minds, and something for rationalists to avoid.
Added DiffLines:
* FanFic/FriendshipIsOptimal [[http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_little_pony_fanfic/ originated there]].
** The community itself. Given their focus on cold logic and disdain for public opinion, once in a while they develop ideas that are tactless or simply absurd to an outsider (Roko's basilisk, musings on utilitarian value of racism, cryonics as ethical necessity etc.).
Changed line(s) 22 from:
* HumansAreFlawed: As a result of having been 'designed' slowly and very much imperfectly by the 'idiot god' that is evolution.
to:
* HumansAreFlawed: Explained as a result of having been 'designed' slowly and very much imperfectly by the 'idiot god' that is evolution.
Changed line(s) 27 from:
* TalkingYourWayOut: The AI-Box Experiment.
to:
* TalkingYourWayOut: The AI-Box Experiment is a thought experiment intended to show how a superhuman intellect (like a hyper-intelligent AI) could talk its captors into anything, in particular releasing it into the world.
Changed line(s) 8 from:
LessWrong is the source of much of the popularity of RationalFiction.
to:
LessWrong is the source of much of the popularity of RationalFic.
Changed line(s) 8 from:
LessWrong is the source of much of the popularity of RationalFic.
to:
LessWrong is the source of much of the popularity of RationalFiction.
Added DiffLines:
LessWrong is the source of much of the popularity of RationalFic.
* StrawVulcan: {{Averted|Trope}}. Less Wrong community members [[http://lesswrong.com/lw/hp/feeling_rational/ do not consider rationality to *necessarily* be at odds with emotion]]. [[http://lesswrong.com/lw/go/why_truth_and/ Also, Spock is a terrible rationalist.]]
Deleted line(s) 27:
* StrawVulcan: {{Averted|Trope}}. Less Wrong community members [[http://lesswrong.com/lw/hp/feeling_rational/ do not consider rationality to *necessarily* be at odds with emotion]]. [[http://lesswrong.com/lw/go/why_truth_and/ Also, Spock is a terrible rationalist.]]
Deleted line(s) 21:
* FandomBerserkButton: [[http://rationalwiki.org/wiki/Roko%27s_basilisk Roko's Basilisk]]
moved from Main + cleaning
Added DiffLines:
->''"Uncertainty exists in the map, not in the territory."''
-->--''[[http://lesswrong.com/lw/oj/probability_is_in_the_mind Eliezer Yudkowsky (Less Wrong)]]''
''[[http://lesswrong.com/ Less Wrong]]'' is a community blog devoted to rationality. Contributors draw upon many scientific disciplines for their posts, from quantum physics and Bayesian probability to psychology and sociology. The blog focuses on human flaws that lead to misconceptions about the sciences. It's a gold mine for interesting ideas and unusual views on any subject. The clear writing style makes complex ideas easy to understand.
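The page quote's "map, not territory" framing is the Bayesian view the blog keeps returning to: probability measures a reasoner's uncertainty, not a property of the world. A minimal sketch of the update rule, with all numbers invented for illustration:

```python
# Minimal Bayesian update: P(hypothesis | evidence) via Bayes' rule.
# The scenario and every number here are made up for illustration.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior probability of hypothesis H after observing evidence E."""
    numerator = prior * p_e_given_h
    # Total probability of seeing the evidence at all.
    evidence = numerator + (1 - prior) * p_e_given_not_h
    return numerator / evidence

# A 1% prior with a test that is 95% sensitive and 90% specific:
posterior = bayes_update(prior=0.01, p_e_given_h=0.95, p_e_given_not_h=0.10)
# The posterior (~0.088) stays low despite the "positive" evidence --
# the uncertainty lives in the reasoner's map, not in the territory.
```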
The mainstream community on ''Less Wrong'' is firmly atheistic. A good number of contributors are computer professionals. Some, like founder EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI" ([[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]), and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).
* Literature/ThreeWorldsCollide is [[http://lesswrong.com/lw/y4/three_worlds_collide_08/ hosted here]].
* FanFic/HarryPotterAndTheMethodsOfRationality is occasionally [[http://lesswrong.com/r/discussion/tag/harry_potter/ discussed here]].
----
!! This blog provides examples of:
* AntiAdvice: [[DefiedTrope Called out as fallacious]]; [[http://wiki.lesswrong.com/wiki/Reversed_stupidity_is_not_intelligence reversed stupidity is not intelligence]].
* BackFromTheDead: Some in the Less Wrong community hope to achieve this through [[HumanPopsicle cryonics]].
* BanOnPolitics: It's generally agreed that talking about contemporary politics leads to {{FlameWar}}s and little else. See PhraseCatcher, below.
* BlueAndOrangeMorality: One of the core concepts of Friendly AI is that it's entirely possible to make something as capable as a human being that has completely alien goals. Luckily, there's already an example of [[http://lesswrong.com/lw/kr/an_alien_god/ an 'optimization process' completely unlike a human mind]] right here on Earth that we can use to see how good we are at truly understanding the concept.
-->"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
* ConceptsAreCheap: [[http://lesswrong.com/lw/jb/applause_lights/ Applause Lights]].
* DeusEstMachina: Yudkowsky and some other members of Less Wrong from the Singularity Institute for Artificial Intelligence are working on making one. [[TheSingularity Singularity]] is eagerly awaited.
* FandomBerserkButton: [[http://rationalwiki.org/wiki/Roko%27s_basilisk Roko's Basilisk]]
* HumansAreFlawed: As a result of having been 'designed' slowly and very much imperfectly by the 'idiot god' that is evolution.
* LivingForeverIsAwesome: Almost everyone on Less Wrong. Hence, the strong Transhumanist bent.
* LogicFailure: Revealed to be shockingly common for normal human minds, and something for rationalists to avoid.
* PhraseCatcher: The FlameBait topic of politics is met with "[[http://wiki.lesswrong.com/wiki/Politics_is_the_Mind-Killer politics is the mind-killer]]".
* TalkingYourWayOut: The AI-Box Experiment.
* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal.
* StrawVulcan: {{Averted|Trope}}. Less Wrong community members [[http://lesswrong.com/lw/hp/feeling_rational/ do not consider rationality to *necessarily* be at odds with emotion]]. [[http://lesswrong.com/lw/go/why_truth_and/ Also, Spock is a terrible rationalist.]]
* WikiWalk: It is fairly easy to go on one due to the links in the articles to other articles. Also, certain lines of thought about similar issues are organized into 'sequences' which make them more conveniently accessible.
----