History Main / LessWrong

8th Aug '13 2:51:15 AM justanid
Is there an issue? Send a Message
moved to Blog namespace
->''"Uncertainty exists in the map, not in the territory."''
-->--''[[http://lesswrong.com/lw/oj/probability_is_in_the_mind Eliezer Yudkowsky (Less Wrong)]]''

[[http://lesswrong.com/ Less Wrong]] is a community blog devoted to rationality. Contributors draw upon many scientific disciplines for their posts, from quantum physics and Bayesian probability to psychology and sociology. The blog focuses on human flaws that lead to misconceptions about the sciences. It's a gold mine for interesting ideas and unusual views on any subject. The clear writing style makes complex ideas easy to understand.

The mainstream community on Less Wrong is firmly atheistic. A good number of contributors are computer professionals. Some, like founder EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI" ([[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]), and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).

* ThreeWorldsCollide is [[http://lesswrong.com/lw/y4/three_worlds_collide_08/ hosted here]].
* HarryPotterAndTheMethodsOfRationality is occasionally [[http://lesswrong.com/r/discussion/tag/harry_potter/ discussed here]].
----
!! ''Less Wrong'' contains examples of:
* AntiAdvice: [[DefiedTrope Called out as fallacious]]; [[http://wiki.lesswrong.com/wiki/Reversed_stupidity_is_not_intelligence reversed stupidity is not intelligence]].
* BackFromTheDead: Some in the Less Wrong community hope to achieve this through [[HumanPopsicle cryonics]].
* BanOnPolitics: It's generally agreed that talking about contemporary politics leads to {{FlameWar}}s and little else. See PhraseCatcher, below.
* BlueAndOrangeMorality: One of the core concepts of Friendly AI is that it's entirely possible to make something as capable as a human being that has completely alien goals. Luckily, there's already an example of [[http://lesswrong.com/lw/kr/an_alien_god/ an 'optimization process' completely unlike a human mind]] right here on Earth that we can use to see how good we are at truly understanding the concept.
-->"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
* ConceptsAreCheap: [[http://lesswrong.com/lw/jb/applause_lights/ Applause Lights]].
* DeusEstMachina: Yudkowsky and some other members of Less Wrong from the Singularity Institute for Artificial Intelligence are working on making one. [[TheSingularity Singularity]] is eagerly awaited.
* FandomBerserkButton: [[http://rationalwiki.org/wiki/Roko%27s_basilisk Roko's Basilisk]]
* HumansAreFlawed: As a result of having been 'designed' slowly and very much imperfectly by the 'idiot god' that is evolution.
* LivingForeverIsAwesome: Almost everyone on Less Wrong. Hence, the strong Transhumanist bent.
* LogicFailure: Revealed to be shockingly common for normal human minds, and something for rationalists to avoid.
* PhraseCatcher: The FlameBait topic of politics is met with "[[http://wiki.lesswrong.com/wiki/Politics_is_the_Mind-Killer politics is the mind-killer]]".
* TalkingYourWayOut: The AI-Box Experiment.
* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal.
* StrawVulcan: {{Averted|Trope}}. Less Wrong community members [[http://lesswrong.com/lw/hp/feeling_rational/ do not consider rationality to *necessarily* be at odds with emotion]]. [[http://lesswrong.com/lw/go/why_truth_and/ Also, Spock is a terrible rationalist.]]
* WikiWalk: It is fairly easy to go on one due to the links in the articles to other articles. Also, certain lines of thought about similar issues are organized into 'sequences' which make them more conveniently accessible.
----
to:
[[redirect:Blog/LessWrong]]
4th Jul '13 3:42:01 AM Ronfar
Added DiffLines:
* BanOnPolitics: It's generally agreed that talking about contemporary politics leads to {{FlameWar}}s and little else. See PhraseCatcher, below.
5th Jun '13 8:15:24 PM billybobfred
Added DiffLines:
* AntiAdvice: [[DefiedTrope Called out as fallacious]]; [[http://wiki.lesswrong.com/wiki/Reversed_stupidity_is_not_intelligence reversed stupidity is not intelligence]].
20th May '13 11:05:48 PM Rabukurafuto
A Hollywood Atheist is never a real person, regardless of behaviour. The trope is for fictional examples, not Less Wrong members.
* HollywoodAtheist: Most often {{averted|Trope}}, but there may be some who act like the {{Jerkass}} variety. Religion is rarely a topic of discussion, as [[http://lesswrong.com/lw/1e/raising_the_sanity_waterline/ the social baseline is strongly atheist]] (though a spirited early-2011 thread [[http://lesswrong.com/lw/3uj/theists_are_wrong_is_theism/ briefly brought it up]]).
14th Mar '13 6:03:36 PM jojabar
Added DiffLines:
* FandomBerserkButton: [[http://rationalwiki.org/wiki/Roko%27s_basilisk Roko's Basilisk]]
20th Jan '13 10:39:31 AM nitpickeryandsuch
* StrawVulcan: {{Averted|Trope}}. Less Wrong community members [[http://lesswrong.com/lw/hp/feeling_rational/ do not consider rationality to *necessarily* be at odds with emotion]]. [[http://lesswrong.com/lw/go/why_truth_and/ Also, Spock is a terrible rationalist]].
to:
* StrawVulcan: {{Averted|Trope}}. Less Wrong community members [[http://lesswrong.com/lw/hp/feeling_rational/ do not consider rationality to *necessarily* be at odds with emotion]]. [[http://lesswrong.com/lw/go/why_truth_and/ Also, Spock is a terrible rationalist.]]
30th Jul '12 3:19:47 PM Ronfar
The mainstream community on Less Wrong is firmly atheistic. A good number of contributors are computer professionals. Some, like founder EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI"[[hottip:*:[[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]]], and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).
to:
The mainstream community on Less Wrong is firmly atheistic. A good number of contributors are computer professionals. Some, like founder EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI" ([[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]), and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).
30th Jul '12 10:55:17 AM cdombroski
kill the smrt quotes
The mainstream community on Less Wrong is firmly atheistic. A good number of contributors are computer professionals. Some, like founder EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design “Friendly AI”[[hottip:*:[[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]]], and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).
to:
The mainstream community on Less Wrong is firmly atheistic. A good number of contributors are computer professionals. Some, like founder EliezerYudkowsky, work in the field of ArtificialIntelligence; particularly, Less Wrong has roots in Yudkowsky's effort to design "Friendly AI"[[hottip:*:[[AIIsACrapshoot AI That Is]] [[AvertedTrope Not A Crapshoot]]]], and as a result often uses AI or transhumanist elements in examples (though this is also so as to speak of minds-in-general, as contrasted with our particular human minds).
6th May '12 1:40:00 PM Ekuran
* Transhumanism: A philosophy, rather than the TVTropes use of the term.
to:
* [[{{Transhuman}} Transhumanism]]: Their philosophy and goal.
11th Apr '12 3:11:01 PM grendelkhan
mention the Blue and Orange morality that is a non-human set of metaethics.
Added DiffLines:
* BlueAndOrangeMorality: One of the core concepts of Friendly AI is that it's entirely possible to make something as capable as a human being that has completely alien goals. Luckily, there's already an example of [[http://lesswrong.com/lw/kr/an_alien_god/ an 'optimization process' completely unlike a human mind]] right here on Earth that we can use to see how good we are at truly understanding the concept. -->"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
This list shows the last 10 events of 47.