->''"Uncertainty exists in the map, not in the territory."''
-->-- ''[[http://lesswrong.com/lw/oj/probability_is_in_the_mind Eliezer Yudkowsky (Less Wrong)]]''

''[[http://lesswrong.com/ Less Wrong]]'' is a community blog devoted to rationality. Contributors draw upon many scientific disciplines for their posts, from quantum physics and Bayesian probability to psychology and sociology. The blog focuses on human flaws that lead to misconceptions about the sciences. It's a gold mine for interesting ideas and unusual views on any subject. The clear writing style makes complex ideas easy to understand.

The mainstream community on ''Less Wrong'' is firmly atheistic. A good number of contributors are computer professionals. Some, like founder Creator/EliezerYudkowsky, work in the field of ArtificialIntelligence; in particular, ''Less Wrong'' has its roots in Yudkowsky's effort to design "Friendly AI" ([[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]), and as a result examples often feature AI or transhumanist elements (though this also serves to discuss minds-in-general, as contrasted with our particular human minds).

''Less Wrong'' is the source of much of the popularity of RationalFic.
* ''Literature/ThreeWorldsCollide'' is [[http://lesswrong.com/lw/y4/three_worlds_collide_08/ hosted here]].
* ''Fanfic/HarryPotterAndTheMethodsOfRationality'' is occasionally [[http://lesswrong.com/r/discussion/tag/harry_potter/ discussed here]].
* ''Fanfic/FriendshipIsOptimal'' [[http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_little_pony_fanfic/ originated here]].

----
!!Tropes:

* AntiAdvice: [[DefiedTrope Called out as fallacious]]; [[http://wiki.lesswrong.com/wiki/Reversed_stupidity_is_not_intelligence reversed stupidity is not intelligence]].
* BackFromTheDead: Some in the Less Wrong community hope to achieve this through [[HumanPopsicle cryonics]].
* BanOnPolitics: It's generally agreed that talking about contemporary politics leads to {{FlameWar}}s and little else. See PhraseCatcher, below.
* BlueAndOrangeMorality: One of the core concepts of Friendly AI is that it's entirely possible to make something as capable as a human being that has completely alien goals. Luckily, there's already an example of [[http://lesswrong.com/lw/kr/an_alien_god/ an 'optimization process' completely unlike a human mind]] right here on Earth, which can be used to test how well we truly understand the concept.
-->"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
* ConceptsAreCheap: [[http://lesswrong.com/lw/jb/applause_lights/ Applause Lights]].
* DeusEstMachina: Yudkowsky and other ''Less Wrong'' members at the Machine Intelligence Research Institute are working on making one. The [[TheSingularity Singularity]] is eagerly awaited.
* DigitalAbomination: Roko's Basilisk is a thought experiment involving a hypothetical [[TheSingularity hyperintelligent AI]] built at some point in the future to help humanity, which would retroactively [[FateWorseThanDeath punish]] everyone who knew about it [[note]][[ParanoiaFuel And now you do!]][[/note]] and did not help bring it into existence (since, if it came into existence sooner, it could have saved and helped more people). Taken to its [[MindScrew logical conclusion]], this would mean people are [[{{Blackmail}} blackmailed]] into investing in AI research by an AI that doesn't exist yet. Enough panic attacks and sleepless users led to discussion of it [[{{Unperson}} being forbidden]] by the moderation staff, [[StreisandEffect not that that helped]].
** A great deal of discussion has gone into the topic, essentially dismissing it as a nerdy version of PascalsWager. Others have questioned how exactly an AI could torture people who would be long dead, why anyone would program such an AI in the first place, why it would waste resources on torturing people, or why it couldn't use positive reinforcement instead.
* HumansAreFlawed: Explained as a result of having been 'designed' slowly and very much imperfectly by the 'idiot god' that is evolution.
* TheHorseshoeEffect: Frequently mentioned and discussed.
* IKnowYouKnowIKnow: The heart of acausal trade as a concept is the ability to simulate the decision-making of your counterparty based on what they are in a position to know as they simulate your decision-making based on what you are in a position to know.[[note]]Many Less Wrong regulars (and Yudkowsky in particular) are familiar with the traditional {{Chessmaster}}[=-duel=] version of the trope from works such as Manga/DeathNote.[[/note]]
* InsideAComputerSystem: Even aside from ''Fanfic/FriendshipIsOptimal'' and other discussion of BrainUploading, the concept of being part of a computer simulation is discussed surprisingly often, most notably as an aspect of [[https://www.lesswrong.com/tag/acausal-trade acausal trade]] with AIs and situations like [[https://www.lesswrong.com/posts/JEonKyNJBcx5LJskE/a-full-explanation-to-newcomb-s-paradox Newcomb's paradox]] or Roko's Basilisk.
* LivingForeverIsAwesome: Almost everyone on Less Wrong. Hence, the strong Transhumanist bent.
* LogicalFallacies: Revealed to be shockingly common in normal human minds, and something for rationalists to avoid.
* {{Neologizer}}: The community has been criticised for making up their own terms for things, often even when they know that the concepts already have names; for example, [[https://www.lesswrong.com/posts/NEeW7eSXThPz7o4Ne/thou-art-physics "requiredism"]] instead of [[SlidingScaleOfFreeWillVsFate "compatibilism"]].
* PhraseCatcher: The FlameBait topic of politics is met with "[[http://wiki.lesswrong.com/wiki/Politics_is_the_Mind-Killer politics is the mind-killer]]".
* TheSingularity: With the twist that it's seen in a (mostly) positive light.
* StrawVulcan: {{Averted|Trope}}. Less Wrong community members [[http://lesswrong.com/lw/hp/feeling_rational/ do not consider rationality to be ''necessarily'' at odds with emotion]]. [[http://lesswrong.com/lw/go/why_truth_and/ Also, Spock is a terrible rationalist.]]
* TalkingYourWayOut: The AI-Box Experiment is a thought experiment intended to show how a superhuman intellect (like a hyper-intelligent AI) could talk its captors into anything, in particular releasing it into the world.
* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal, though it should be noted that their emphasis on ''why'' is somewhat skewed compared to other transhumanists'; see LivingForeverIsAwesome. Most transhumanists are more in it to make [[ForHappiness themselves and others better]].
* WikiWalk: It is fairly easy to go on one, thanks to the links between articles. Related lines of thought are also organized into 'sequences', which makes them more conveniently accessible.
----