The philosophy thread general discussion

demarquis Since: Feb, 2010
#2276: Apr 1st 2014 at 6:15:06 PM

Oh no, I've been arguing against an objective morality since I joined the thread. I don't even think math is objective, let alone morality.

"The point is that if that nobody actually does." I'm not sure that's true. If you could construct a box that would stimulate the pleasure centers of the brain, and offer to let someone be locked into the box for the rest of their life, I'm not sure that no one would take it, esp. if you provided for their physical needs.

"Everyone is dead, replaced by happiness engines. Anything and everything anyone ever cared about except happiness, is gone." So? How is that relevent? Are you giving a privilaged position to the values of survival and existence? Why?

Are you actually trying to formulate a practical morality? One or more sets of coherent moral values that one or more entities could utilize in the real world in a sustainable fashion?

supermerlin100 Since: Sep, 2011
#2277: Apr 2nd 2014 at 2:43:19 PM

Look, this isn't supposed to convince a perfect philosopher of total emptiness (a rock). Human values don't boil down to just happiness. We can clearly get confused about what our values are, and jump to the wrong conclusion. The problem is that plans that maximize our simplified values don't generally maximize morality. So we still lose. The happiness engine that takes our place might win, but we don't.

Greenmantle V from Greater Wessex, Britannia Since: Feb, 2010 Relationship Status: Hiding
#2278: Apr 2nd 2014 at 2:58:48 PM

[up] How can one maximise morality?

Keep Rolling On
supermerlin100 Since: Sep, 2011
#2279: Apr 2nd 2014 at 6:15:09 PM

Okay, the grammar there wasn't good, but it should still be obvious what I mean, given all the talk of utility functions.

demarquis Since: Feb, 2010
#2280: Apr 2nd 2014 at 8:22:00 PM

"Human values don't just boil down to just happiness. We can clearly get confused about what are values are, and jump to the wrong conclusion. The problem is that plans that maximize our simplified values, don't general maximize morality."

Maybe they do, and maybe they don't, but you claimed that "multiple sets of moral preferences, come from mistaken generalizations" (post 2270). Calling something objectively wrong is a very strong claim, and I'm still trying to figure out what you meant by that. If someone holds a very simple, single value (if you regard "happiness" as too unrealistic, how about "Group Loyalty"?), you claim this undermines "general morality." How do you define this general morality? Is it the greatest good for the greatest number? Does this exclude anyone who isn't a utilitarian?

You can't program a computer (for example) to work toward a goal unless you can objectively define what that goal is. What's your goal?
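
To make that concrete, here's a minimal sketch of the point (purely illustrative; the function names and the world-state format are invented for the example). The optimizer itself is goal-agnostic, so someone has to hand it an explicit, computable goal before it can do anything at all:

    # Minimal sketch: an optimizer needs its goal spelled out as an
    # explicit, computable function. All names here are hypothetical.

    def happiness_only(world):
        # A "simplified" goal: score a world state by total happiness alone.
        return sum(person["happiness"] for person in world)

    def best_plan(plans, evaluate):
        # The search procedure is indifferent to what it optimizes;
        # it just climbs whatever objective function you pass in.
        return max(plans, key=lambda plan: evaluate(plan["resulting_world"]))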

edited 2nd Apr '14 8:22:56 PM by demarquis

supermerlin100 Since: Sep, 2011
#2281: Apr 3rd 2014 at 2:48:28 PM

First of all, I don't claim to know the details of morality. After all, I'm blaming the disagreements on the subject on the confusion caused by our values being a huge mess we can't even directly read.

Without getting into how they add up, here's a short list of things that seem to me to have value in their own right:

Happiness, sense of self, boredom (the desire to do new things), sympathy, the desire for our emotions to be about something, friendship, romantic love, family, self-respect, fairness, striving, reasonable challenge, sense of duty, honesty, self-determination, preservation of people's lives, avoidance of suffering, etc., etc. I know I'm missing some here.

A Godly happiness maximizer is hardly better than a Godly paperclip maximizer.
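
As a toy illustration of why collapsing a list like that into one proxy loses something (every number and value name below is made up for the example): the plan that maximizes the proxy need not score well on the values the proxy dropped.

    # Toy illustration, with invented weights: maximizing a simplified
    # proxy ("happiness") diverges from scoring well on the full list.

    VALUES = ["happiness", "friendship", "fairness", "honesty", "self_determination"]

    def full_utility(outcome):
        # Each recognized value contributes in its own right.
        return sum(outcome.get(v, 0) for v in VALUES)

    def proxy_utility(outcome):
        # The simplified stand-in: happiness and nothing else.
        return outcome.get("happiness", 0)

    wirehead = {"happiness": 100}        # every other value zeroed out
    ordinary = {v: 30 for v in VALUES}   # decent on every value

    assert proxy_utility(wirehead) > proxy_utility(ordinary)  # proxy prefers wireheading
    assert full_utility(ordinary) > full_utility(wirehead)    # the full list doesn't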

ImperialSunlight A Practical Observer from Tolaria West Since: Apr, 2013 Relationship Status: I'm just high on the world
#2282: Apr 4th 2014 at 7:40:50 PM

[up]

Beyond the fact that most of those things fit into, or are desirable for the sake of, happiness, they are also things that humans happen to desire, and so we see them as having value. Yet there is no reason to consider those things more valuable than any others, beyond human bias.

''The eternal question of reality, it still stands today.''
CassidyTheDevil Since: Jan, 2013
#2283: Apr 5th 2014 at 12:15:08 AM

I think avoidance of suffering is pretty much a universal for anything that has a brain.

higurashimerlin Since: Aug, 2012
#2284: Apr 5th 2014 at 5:25:10 AM

[up]For anything that can suffer.

When life gives you lemons, burn life's house down with the lemons.
supermerlin100 Since: Sep, 2011
#2285: Apr 5th 2014 at 1:53:41 PM

[up][up][up] I already explained why it is a mistake to lump them all under happiness.

There's no reason for a perfect philosopher of total emptiness to care about them or even itself.

But who cares? The notion of error only makes sense if there is a criterion. The perfect philosopher doesn't have any; it's a rock. We do, even if, for the purposes of planning, we are confused about what that criterion is.

You could try to empty yourself, but why?

A Yazt isn't doing anything illogical by maximizing paperclips. One wouldn't expect a Yazt that knew its source code, and every clippy argument, including arguments about what counts as a valid clippy argument, to do anything else.

demarquis Since: Feb, 2010
#2286: Apr 5th 2014 at 4:22:47 PM

So you're trying to figure out what the underlying basis of human morality is? In other words, it's a descriptive approach, not a prescriptive one? Most humans appear to be using some sort of algorythm that roughly operates along the lines of "advance your own interests by advancing those of your in-group."
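
If it helps, here's a crude sketch of that heuristic as it might be written down (the weights and names are hypothetical stand-ins, not a serious model):

    def act_value(actor, effects, in_group):
        # Score an action by the rough heuristic above: own interests at
        # full weight, in-group interests nearly as high, outsiders
        # heavily discounted. All weights are invented for illustration.
        total = 0.0
        for person, benefit in effects.items():
            if person == actor:
                total += benefit
            elif person in in_group:
                total += 0.8 * benefit
            else:
                total += 0.1 * benefit
        return total

    # Helping a fellow group member at small personal cost scores well;
    # the same help aimed at a stranger doesn't:
    print(act_value("me", {"me": -1, "friend": 3}, {"friend"}))    # positive (about 1.4)
    print(act_value("me", {"me": -1, "stranger": 3}, {"friend"}))  # negative (about -0.7)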

BestOf FABRICATI DIEM, PVNC! from Finland Since: Oct, 2010 Relationship Status: Falling within your bell curve
#2287: Apr 5th 2014 at 4:40:43 PM

If Dawkins is right, our morality would tend to be based on principles that result in the preservation and replication of our genes.

If you are useful to your genes by having a faculty in your mind that makes you sacrifice yourself for the benefit of your group (on the basis that the group will usually include a disproportionate number of carriers of your genes), then you will sacrifice yourself for your group. Even though you die, others with the same genes have a greater chance of survival because of your sacrifice.

If, on the contrary, your genes are better off if you are willing to kill other members of your group, then that's what you'll most likely do.

Whichever version of you is the most effective at preserving and replicating its genes will eventually win out, and we all come from a rather long line of generations that have come through this process of natural selection. Therefore we are most likely to behave in the way determined by the genes that were best at getting replicated and preserved. It seems that for the most part we're all better off if most of us aren't very selfish. This is what Dawkins means when he talks about the Selfish Gene - the gene that is best at replicating and preserving itself will prevail, even if it means that the individuals carrying it will sometimes suffer.
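
That sacrifice logic has a standard formalization, Hamilton's rule: a gene for altruism can spread when r × B > C, where r is the relatedness between actor and recipient, B the benefit to the recipient, and C the cost to the actor. A toy check (the benefit and cost numbers are invented):

    def hamilton_favors(r, benefit, cost):
        # Hamilton's rule: kin-directed altruism can be selected for when
        # relatedness * benefit to recipient exceeds cost to the actor.
        return r * benefit > cost

    print(hamilton_favors(0.5, 3.0, 1.0))    # full sibling (r = 0.5): True
    print(hamilton_favors(0.125, 3.0, 1.0))  # first cousin (r = 0.125): False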

Similar principles apply to memes as well. When you take both of those together, you get a descriptive account of how we came to have our morality. The development of that morality, though, is an ongoing process - so we can't just sit back and say that our instincts are always correct.

I would add, on a more personal note, that even if our intellectual efforts at developing a morality arrive at conclusions that are not the best possible option for the preservation of our genes, there's no reason to say that just because it's in a sense unnatural not to want to optimise the survival of our genes, it's also wrong. Prescriptive is not the same as descriptive.

edited 5th Apr '14 4:41:25 PM by BestOf

Quod gratis asseritur, gratis negatur.
demarquis Since: Feb, 2010
#2288: Apr 5th 2014 at 4:43:48 PM

The problem with biological explanations of social-level behavior is that they don't really explain why the institutions we have today were the ones that led to better survival in our ancestors. Which moral principles will be more successful under what conditions? Right now, socio-biology isn't specific enough to generate testable hypotheses, so it doesn't help us understand very much...

BestOf FABRICATI DIEM, PVNC! from Finland Since: Oct, 2010 Relationship Status: Falling within your bell curve
#2289: Apr 5th 2014 at 4:50:41 PM

Meme theory is trying to cover that - well, that's one of the things it's trying to cover. It's an emerging theory, though, so don't expect huge things from it just yet.

I do think that memes are one of Dawkins' best ideas. It's not unlikely that that will be his most enduring legacy, after we're all gone.

But maybe I shouldn't assume that everyone's familiar with Dawkins here - this thread is about philosophy, not biology. If someone reading this has no idea what I mean by memes I won't mind writing a post that explains the concept; but I won't write it if no one asks. If you think it's just the Internet phenomenon where someone sees a joke or phrase and repeats it, you don't have a full picture of what a meme is.

Quod gratis asseritur, gratis negatur.
higurashimerlin Since: Aug, 2012
#2290: Apr 5th 2014 at 4:53:19 PM

Evolution explains why we might want something, but humans didn't even know about genes until recent history, let alone care about them for their own sake.

We might have evolved to be moral because it helped us pass on our genes, but we care about morality for its own sake.

When life gives you lemons, burn life's house down with the lemons.
demarquis Since: Feb, 2010
#2291: Apr 5th 2014 at 5:21:14 PM

Well, technically, that means we must have evolved to care about morality for morality's sake, since there is no other source for an ancient behavior. But again, that doesn't tell us very much about Supermerlin's issue: what specific principles underlie the human approach to morality? Another way of putting this (the way Super himself did) would be: how would you program a computer (more precisely, a population of AI programs running on a computer) to reproduce the development of human morality over time? What algorythm would they be using?
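
One minimal version of such a simulation might look like this (every number here - payoffs, mutation rate, population size - is an invented placeholder, and a serious model would need in-group assortment, reputation, and much more): agents carry a heritable propensity to help, play a simple helping game, and reproduce in proportion to payoff.

    import random

    # Toy sketch: a population of agents, each with one heritable
    # "helping propensity", reproducing in proportion to payoff.
    # With purely random pairing, helping is all cost and no return,
    # so it decays over the generations - one reason in-group
    # structure matters in more realistic models.

    POP, GENERATIONS, MUTATION = 100, 200, 0.05

    def payoff(my_propensity, partner_propensity):
        # Being helped pays 3; helping costs 1.
        i_help = random.random() < my_propensity
        partner_helps = random.random() < partner_propensity
        return (3 if partner_helps else 0) - (1 if i_help else 0)

    population = [random.random() for _ in range(POP)]
    for _ in range(GENERATIONS):
        scores = []
        for agent in population:
            partner = random.choice(population)
            scores.append(payoff(agent, partner) + 2)  # shift into positive range
        # Fitness-proportional reproduction with small mutations.
        population = [
            min(1.0, max(0.0, parent + random.gauss(0, MUTATION)))
            for parent in random.choices(population, weights=scores, k=POP)
        ]

    print(f"mean helping propensity: {sum(population) / POP:.2f}")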

BestOf FABRICATI DIEM, PVNC! from Finland Since: Oct, 2010 Relationship Status: Falling within your bell curve
#2292: Apr 5th 2014 at 5:35:27 PM

It's algorithm.

I suppose one trait that seems to be a built-in feature of most humans is empathy. Most people are inherently capable of imagining things from someone else's point of view. Another, perhaps equally inherent, aspect of us is that we generally have something resembling the "Golden Rule" within us: we generally won't do to others what we wouldn't want done to ourselves, and usually we'll behave in ways that we would want others to adopt as well.

It's not hard to imagine how something like that could have developed through natural selection, or how genes carrying those traits would've been better suited for survival than ones that didn't include those traits.

So I suppose empathy is one likely starting point. An instinct for equality is another: most of us feel that fundamentally everyone is equal, and to make us think otherwise takes a lot of training. This notion of equality has actually been observed in some other species, so it's not even unique to humans. Most of us just can't get rid of the feeling that everyone should get the same reward or punishment for the same behaviour, and if this rule isn't observed we feel discomfort.

edited 5th Apr '14 5:36:38 PM by BestOf

Quod gratis asseritur, gratis negatur.
demarquis Since: Feb, 2010
#2293: Apr 5th 2014 at 5:48:28 PM

I think that, historically, most of what you are discussing applied primarily within one's in-groups. How one categorizes and weighs all the various people one meets, and the relationships one has with them, would be critical in determining which set of moral standards is in effect in any given situation.

higurashimerlin Since: Aug, 2012
#2294: Apr 5th 2014 at 6:11:14 PM

I think supermerlin is talking about meta-ethics.

When life gives you lemons, burn life's house down with the lemons.
Greenmantle V from Greater Wessex, Britannia Since: Feb, 2010 Relationship Status: Hiding
#2297: Apr 8th 2014 at 12:11:49 AM

What do we know about consciousness?

Keep Rolling On
Euodiachloris Since: Oct, 2010
#2298: Apr 8th 2014 at 1:46:34 AM

...That there's a tonne we don't know? And that... (at a rough guesstimate) a good 96% (if not more) of our thought processes don't bother having anything to do with it. <_<

Once upon a time it was thought that most of our thinking was conscious. Or, at least guided by it in some way. Um. No. If anything... the unconscious wags that which perceives itself as the dog.

edited 8th Apr '14 1:50:45 AM by Euodiachloris

Elfive Since: May, 2009
#2299: Apr 8th 2014 at 1:55:05 AM

You're more like the captain of your body than the pilot. You plot the general course but then the crew does all the heavy lifting and sometimes takes matters into their own hands. And occasionally mutiny.

demarquis Since: Feb, 2010
#2300: Apr 8th 2014 at 7:22:33 AM

I'm afraid even that's not a consensus position among neurological researchers, Elfive. There's precious little evidence that your consciousness "decides" anything of importance.

Anyway, "Consciousness". The first problem is that there is no standard, objective definition of it. We cant even tell if it's one single thing or a set of independent functions. See here.

Have a book on the topic. But be warned: "Understanding consciousness is the major unsolved problem in biology."

Another article on the various components of consciousness. It covers: ALERTNESS | AWARENESS | ATTENTION | MEMORY | THE MOTIVATIONAL SYSTEM | COGNITION AND THE FOCUSING OF AWARENESS

(Sorry for the all caps, I just copy/pasted directly from the article)

