Okay, so Friendship is Optimal is done... what other interesting rational fics are out there?
"Here to welcome our new golden-eyed overlords," said Addy promptly.

Harry Potter and the Natural 20 isn't exactly rationalist, but the main character's extreme munchkin mentality brings it close in many ways.
Darkness cannot drive out darkness; only light can do that. Hate cannot drive out hate; only love can do that.

Are you guys aware of any Teen Wolf fanfics?
You mean like fanfics of White Fang or The Call of the Wild or Balto?
I mean rational Teen Wolf fanfics.
Yes, yes, but by "Teen Wolf" do you mean
- adolescent wolves or
- teenager werewolves?
OH! There's an actual franchise named that? Talk about originality...
So, Teen Wolf and Teen Wolf. Huh.
Well, I don't know about that, but it shouldn't be too hard to find intelligent fanfiction of Werewolf: The Forsaken that deals with teenagers...
Well, I found nothing on the former, but the Teen Wolf series fanfic recs page has this to offer. Seems intelligent and engaging.
edited 5th Dec '12 1:34:05 PM by TheHandle
I mean the Teen Wolf television series... They essentially use "Teen" as a politer version of "hormone-addled idiots", because the main character is just a moron. I wanted to smack him through all of season 1; he got better in season 2, but not completely.
We're talking werewolves. Even in Luminosity, they weren't the sharpest tools in the shed.
How about one where he plays basketball?
Of course, don't you know anything about ALCHEMY?! - Twin clones of Ivan the Great

I have a question; it may well have been discussed somewhere on Less Wrong, but I have not read about it so far.
Obviously, I am not an AI researcher in the least, and there is a very high chance that this is complete nonsense; but still, I'd be curious to hear what you think about this.
As far as I can tell, the whole "Friendly AI" problem and paradigm is based on the idea that A.I.s, if they ever are developed, will be self-improving utility maximizers, working tirelessly to bring the whole universe to some optimal state. This makes badly-designed utility functions potentially catastrophic: the classical example is the paperclip-maximizer, but really, attempting to maximize almost any unbounded utility function is very likely to bring about horrific consequences in the hands of a sufficiently capable agent.
However, it seems to me that the assumption that the utility function of an intelligent agent is unbounded does not necessarily hold (strictly speaking, it actually cannot hold — brains and computers are finite state machines, and they cannot represent arbitrarily big numbers in their memory — but that is not the main point, I think).
An intelligent agent with a bounded utility function would strive to reach its maximum utility, and, once it's there, would be in a state of homeostasis — it would simply react to external stimuli in such a way as to keep its own utility as close to the maximum as possible. It's a bit like what I do when I'm on vacation — if I feel like reading or going for a walk or whatever, I do so, if I am hungry I eat, if I am sleepy I sleep, and so on, but I do not even try to pursue some galaxy-spanning master plan. Why would I?
In a way, even talking about "utility maximization" makes, I think, a whole bunch of assumptions about what a "rational" agent should be and how it should behave, assumptions that do not fully apply to human beings and would not necessarily have to apply to human-equivalent A.I.s. Maybe it's because I've been reading a bit about cybernetics lately, but it seems to me that an agent might act in ways we could perceive as goal-oriented without having an explicitly defined utility function to begin with. It would simply be a very complex feedback system, reacting to external stimuli in ways that preserve some internal aspect of the system, and maybe having some sort of partial internal representation of aspects of the external world.
This seems to me to be closer to the real behaviour of human beings and other animals: I mean, obviously I like some situations better than others, in the sense that some situations cause me distress and others make me happy, but I do not attribute specific values to events and start scheming in order to maximize some sort of total utility. I just, well, do whatever sounds right at the moment.
I guess that one could say that this just implies that I am not a rational agent in the Less Wrong sense, and neither are most other human beings or animals. Fair enough. But still, developing an AI with capabilities similar to mine — or to those of your average dog, for that matter — would be a major breakthrough compared to the current state of the art: so why are we assuming that AI development will go through self-improving utility maximizers, when at the moment we know of no entities whatsoever that fit that description?
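The contrast described above can be sketched in a few lines of Python. This is purely a toy illustration: the "resources" and "temperature" variables, the set-point, and the update rules are all invented for the sake of the example. The point is just that an unbounded maximizer always has a reason to act, while a homeostatic agent acts only to nudge a monitored variable back toward its comfort zone, and then idles.

```python
def maximizer_step(resources):
    """An unbounded utility maximizer: more is always better,
    so it always takes the action that acquires more."""
    return resources + 1  # never satisfied, never stops

def homeostat_step(temperature, set_point=37.0, tolerance=0.5):
    """A homeostatic agent: it acts only when a monitored variable
    drifts outside a comfort band, and only enough to correct it."""
    error = temperature - set_point
    if abs(error) <= tolerance:
        return temperature            # at equilibrium: do nothing
    return temperature - 0.5 * error  # simple negative feedback

# The maximizer's "utility" grows without bound...
r = 0
for _ in range(10):
    r = maximizer_step(r)

# ...while the homeostat converges to its set-point and then idles.
t = 42.0
for _ in range(10):
    t = homeostat_step(t)
```

Nothing in the second function refers to a global utility being maximized; it is just negative feedback, which is roughly the cybernetics picture mentioned above.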
edited 11th Dec '12 5:01:00 AM by Carciofus
But they seem to know where they are going, the ones who walk away from Omelas.

Does anyone know rational ways to relax?
As opposed to irrational ones? What does that even mean?
Shinigan (Naruto fanfic)

If you have to think, you are doing it wrong.
I'm a (socialist) professional writer serializing a WWII alternate history webnovel.

Never mind.
edited 12th Apr '13 4:20:27 PM by Myrmidon
Kill all math nerds

Perhaps one might experiment with various activities, estimate their effectiveness as a form of relaxation (total precision is not possible, but it isn't really necessary either: a simple scale from one to five would be more than enough) and see what works best for them?
I mean, putting aside whatever rationality is, it certainly is not reasonable to simply assume that all people are equally relaxed (or stressed) by the same activities.
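The experiment proposed above is simple enough to mock up. A minimal sketch in Python, where the activities and all the one-to-five ratings are invented for illustration:

```python
from statistics import mean

# Hypothetical relaxation log: each activity mapped to the 1-5
# ratings recorded after trying it (all values made up).
ratings = {
    "walking": [4, 5, 4],
    "reading": [3, 4, 3],
    "video games": [2, 3, 2],
}

# Average each activity's ratings and pick the most relaxing one.
averages = {activity: mean(scores) for activity, scores in ratings.items()}
best = max(averages, key=averages.get)
```

With a few weeks of entries, the averages should separate enough to make the differences between activities obvious, which is the whole point of the exercise.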
Yes, as opposed to irrational relaxation, such as thinking all problems will be solved by a bearded invisible giant in the sky, which is pretty much the only way I was taught to relax.
By the ordinary definitions, that's not so much a means of relaxation as a worldview.
Most people recommend finding a quiet place, getting comfortable, clearing your mind and focusing on taking deep breaths in and out if you really need to relax. The important thing is to think of nothing except your breathing.
Of course if your stress builds up again right away that means you should seriously consider altering your circumstances or altering your worldview to one less stress conducive.
Well, on similar lines, I've heard that congratulating oneself for not believing in bearded invisible giants in the sky can be very relaxing.
But kidding aside, I would suggest walking. No music through earbuds, no haste, no nothing. Just walk, look around you, listen, and think about whatever comes to your mind.
edited 15th Apr '13 10:21:39 PM by Carciofus
I've found riding around slowly on a bike to be very helpful. Works best if the weather is warm enough that you don't have to be entirely covered up.
Wrong thread.
edited 22nd May '13 4:51:24 PM by Myrmidon
Sometimes I wish that a certain book and/or series had a rational fanfiction made from it, but I'm far too irrational to accomplish it myself.
More than that, they can be downright rude. If they feel you're "being an idiot" they'll never miss a chance to tell you. They'll also never miss the opportunity to make smartass comments whenever you leave yourself open to them. Some fora I enter preparing for a fight, but only in Less Wrong do I enter with the mindset for a freaking chess match.
edited 19th Nov '12 9:15:51 PM by TheHandle