Forum
A place to discuss topics/games with other webDiplomacy players.
Page 565 of 1419
dontbcruel (175 D)
14 Apr 10 UTC
Ancient Game Going
Really have wanted to mix up the map. Come play.

http://webdiplomacy.net/board.php?gameID=26697
0 replies
Open
Maniac (189 D(B))
13 Apr 10 UTC
Spot the liar...
Michael Howard or David Cameron (See inside)
19 replies
Open
aneumann (405 D)
14 Apr 10 UTC
Do ghost ratings take into account a difference between press and no press?
I am just wondering if ghost ratings take into account different types of games. I have only played gunboat or public press games on here, simply because of time constraints/internet access issues. However, it occurred to me that gunboat and regular games are so totally different that maybe there should be two separate rating systems.
4 replies
Open
`ZaZaMaRaNDaBo` (1922 D)
14 Apr 10 UTC
gameID=26686
A new live gunboat game for you addicts out there.
9 replies
Open
Stukus (2126 D)
13 Apr 10 UTC
Your Ideal Woman, Man, &c.
How would you describe your perfect partner?
44 replies
Open
Jimbozig (0 DX)
14 Apr 10 UTC
new live gunboat
2 replies
Open
TAWZ (0 DX)
14 Apr 10 UTC
LIVE GAME
Anybody up for ???

gameID=26678
0 replies
Open
Jimbozig (0 DX)
14 Apr 10 UTC
live gunboat
in a little over an hour: gameID=26672
0 replies
Open
Thucydides (864 D(B))
13 Apr 10 UTC
What if God killed himself?
?
167 replies
Open
Steve_God (0 DX)
14 Apr 10 UTC
What Happens If A Game Doesn't Get Filled?
First time creating a game: I gave it 5 days for people to join, and at the moment it looks like it might not quite be full by the start time.

What would happen once the start time is reached?
8 replies
Open
Vitus (100 D)
06 Apr 10 UTC
Goondip Chaos Games
Chaos is a variant of Diplomacy in which each SC of the original Diplomacy map is occupied by a different player, who must use clever diplomacy with loads of other players to create a situation in which they are the one to advance.
26 replies
Open
doofman (201 D)
14 Apr 10 UTC
Live (5min) Gunboat in an hour
1 hour
http://www.webdiplomacy.net/board.php?gameID=26658
17 replies
Open
TAWZ (0 DX)
14 Apr 10 UTC
War is hell...LIVE
gameID=26670

Let's have some fun here
2 replies
Open
doofman (201 D)
14 Apr 10 UTC
Live (5min) Gunboat in 30mins
http://www.webdiplomacy.net/board.php?gameID=26666
2 replies
Open
shadowlurker (108 D)
14 Apr 10 UTC
12 hour game
0 replies
Open
phantom420 (100 D)
14 Apr 10 UTC
JOIN "The Big Cheese" NOW!!!!
We need ppl to play, so join up!
1 reply
Open
Jimbozig (0 DX)
14 Apr 10 UTC
live game
does anyone want to play one right now? gunboat, preferably.
9 replies
Open
melI980 (0 DX)
13 Apr 10 UTC
The Return of MEL, from 1980...
You forgot to ban doofman, that's another one of my accounts.
14 replies
Open
obiwanobiwan (248 D)
12 Apr 10 UTC
Today On The Philosopher's Corner: Kant's Categorical vs. Mill's Action-and-Utility
I LOVE Philosophy. And I LOVE my Philosophy of Ethics class at the college I go to: the class is active and actually inquisitive, the material is great, and the professor, Dr. Zhu, is hilarious, engaging, and sharp. (THICK accent.) In our class, every session there's a showdown: it's obiwanobiwan vs. The Cop (we share the same real name). I champion Mill and ACT Utilitarianism. The Cop, being used to strictness and cold law, LOVES Kant. Categorical vs. Utility?
obiwanobiwan (248 D)
12 Apr 10 UTC
Three things to clarify my position before I begin:

-The Cop uses another name in class so we don't get confused, since we have the same name and all
-I know I champion Nietzsche here a lot, but he's not often needed in my debates, so it IS Mill, with backup from Hume, Hobbes, and Locke in different doses. Obviously there's some contradiction going on there (Hobbes vs. Locke, for starters), but I haven't had an issue yet... in any case, it IS Mill I champion (and yes, I know Bentham and JAMES Mill came before, but J.S. is the general big kahuna on this and fits my needs, so it's him... I'll use others, not limiting myself to just Mill or just the English, but Mill and ACT Utilitarianism are my keynote, bringing me to my third point...)
-I champion ACT Utilitarianism, not RULE. I LOVE the first 3/4 of Mill's "Utilitarianism," where he has a perfectly good idea with just enough grounding to avoid dogmatism, allowing for freedom of action in service of a very basic and open-to-occasion-and-interpretation principle... and then he attempts to ground it into dogma. I HATE that. So my arguments SHOULD center mainly on Act Utilitarianism, and when I say Utility or Utilitarianism, I mean Mill's version of Act Utility, not Rule. If I slip on that... well, I'm sure the sharp folks here who seem to catch me at least once in every debate ;) will recognize the mis-phrasing, but just to clarify WHICH and WHOSE idea of Utility I mean and which VERSION... John Stuart Mill's Act Utilitarianism.
lulzworth (366 D)
12 Apr 10 UTC
The fundamental problem with Mill is that one would like to accept what he says (it plays very well to modern sensibility), but when actually examined, it's very difficult to defend any of his arguments.

The opposite is true of Kant. His ethics are extremely unfashionable in that their implications sound like nothing you'd be alright with, but I'd be shocked to hear you actually dismantle one of his arguments.
obiwanobiwan (248 D)
12 Apr 10 UTC
Well, that's rather vague... can you elaborate?

And I think I can take on any Kantian categorical idea on the grounds that a categorical idea, a dogmatic idea, is both immoral and illogical when taken to its full conclusion.

THAT, too, is vague... but you explain your vague response before I explain mine (and, as it's late where I am, likely clear up any gaping holes I've already laid in my argument... nearly time for bed, after all...) ;)
Acosmist (0 DX)
12 Apr 10 UTC
bwahahaha

you give the kids one philosophy class and they're experts

a little knowledge is a dangerous thing
TheGhostmaker (1545 D)
12 Apr 10 UTC
These are two philosophies I have only a slight knowledge of, and really should be more knowledgeable of.

Unless there is an "other" option, I'm really not sure between the two, although I suspect I'll prefer Kant whilst disagreeing totally with him.
nola2172 (316 D)
12 Apr 10 UTC
Both philosophies have significant deficiencies, but Utilitarianism is far worse. The major problem is that it can be used to justify almost any action, no matter how bad it might be, if some good that outweighs it results (or can at least be posited to result). For instance, if someone were to propose that we do some medical experiments that would result in the death of 10,000 perfectly healthy people aged 35-45 (and chosen randomly) but would allow us, with 95% probability, to find a cure for something or another that would result in 15,000 people aged 35-45 not dying of a disease, then this would be justified by utilitarianism (assuming that everyone being affected would have equal happiness, etc.). However, in reality, this is just not a viable way of determining what to do.
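The objection above turns on a simple expected-value calculation, which can be made explicit. A toy tally (a sketch under my own simplifying assumption of one unit of utility per life, everything else held equal; the numbers are nola2172's hypothetical, not data):

```python
# Toy expected-value tally for the 10,000-vs-15,000 hypothetical above.
# Assumption (mine, for illustration): one utility unit per life saved
# or lost, all lives weighted equally, nothing else counted.
p_cure = 0.95                   # probability the experiments yield a cure
lives_lost = 10_000             # healthy people killed by the experiments
lives_saved_if_cured = 15_000   # disease deaths prevented if the cure works

expected_net_lives = p_cure * lives_saved_if_cured - lives_lost
print(expected_net_lives)  # 4250.0: positive, so the naive calculus approves
```

Under that assumption the calculus endorses the experiment, which is exactly the conclusion the post finds unacceptable.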
TheGhostmaker (1545 D)
12 Apr 10 UTC
I have to admit I think that, when followed through, act utilitarianism would very closely resemble rule utilitarianism, on the grounds that the best way to act is almost always to follow some predetermined guidelines rather than try to make the decision there and then.
rlumley (0 DX)
12 Apr 10 UTC
You're utilitarian? Really?

Well, my days of not taking you seriously have come to a middle...
hopsyturvy (521 D)
12 Apr 10 UTC
Only if you judge the death of 10,000 people due to an experiment as equal in cost to the death of 10,000 people due to disease. However, the value in terms of human happiness is not equal, at least in part due to our innate (and possibly illogical) preference for death by inaction over death by deliberate causes.
TheGhostmaker (1545 D)
12 Apr 10 UTC
"You're utilitarian? Really?"

I'm guessing that you aren't addressing that to me?
rlumley (0 DX)
12 Apr 10 UTC
No... I was addressing obiwan. But the statement applies to anyone who thinks that Utilitarianism is even a remotely valid <i>ethical</i> theory.
Thucydides (864 D(B))
12 Apr 10 UTC
It definitely is valid.

People deride it, and point to the Nazis or mental hospitals, etc., as examples of why it's not a good moral philosophy.

This is crazy. The Nazis et al. just grossly misinterpreted it. The overall general welfare is the highest aim.
TheGhostmaker (1545 D)
12 Apr 10 UTC
There are massive flaws in utilitarianism; it demands murder as morally imperative in circumstances where the person killed is totally innocent.
TheGhostmaker (1545 D)
12 Apr 10 UTC
Basically the question is deontology vs consequentialism.
lulzworth (366 D)
12 Apr 10 UTC
@obiwan - Sorry for the delayed response. The categorical imperative is both "illogical and immoral" when taken to its natural conclusions? So you're pushing a reductio ad absurdum angle here: alright.

Some questions: How can it be "immoral"? Since it is an ethical theory, it has the underpinning of being the FOUNDATION for some form of morality. Now, it can be immoral IF you accept another, contradictory moral system, but what you're saying seems to be an appeal to meta-moral judgement, which is exactly the sort of confused nonsense that Kant is wary of.

Second: Since the categorical imperative is entirely predicated on a universalization principle (essentially: take as your maxims those things which align with pure practical reason, derived a priori and not empirically, as individual maxims are wont to be), I am very skeptical of the notion that it can suffer from reductio ad absurdum, UNLESS you make the aforementioned appeal to meta-morality (that is, you can say, "Look, the categorical imperative requires the death penalty. I think the death penalty is immoral, so this system is too!" But that is an appeal to a separate moral system that misses the point of ethics attempting to establish a valid framework for morals)

Illogical. That's the one thing it could be. And so again: please point to a section of Kant's argument that you object to on logical grounds and we can have that debate. Remember, to object to an argument in logic you need to either (a) object to one of its premises, or (b) argue that the conclusion does not follow from those premises.
Chrispminis (916 D)
12 Apr 10 UTC
"But that is an appeal to a separate moral system that misses the point of ethics attempting to establish a valid framework for morals)"

But we're not building a framework for morals in a vacuum here. While I find dogmatic utilitarianism and the categorical imperative to be logically consistent in their conclusions, many examples can clearly be found where the conclusions directly clash with any common-sense sort of morality. If you're building a framework for morals, it should at least fit the picture, right? How else would you compare the validity of two or more logically consistent moral frameworks if not by agreement with your own personal moral sense?

The real problem is that our natural moral sense is imperfect, inconsistent, and certainly not logically complete. This is simply due to the fact that it was built up hodgepodge by the forces of evolution. We've inherited a moral sense that, by all accounts, was never 'designed' to analyze complex ethical issues such as abortion, organ transplant waitlist order, or trolley-and-switch style hypothetical situations.

Apparently from the American Philosophical Association:
"Consider the following case:

On Twin Earth, a brain in a vat is at the wheel of a runaway trolley. There are only two options that the brain can take: the right side of the fork in the track or the left side of the fork. There is no way in sight of derailing or stopping the trolley and the brain is aware of this, for the brain knows trolleys. The brain is causally hooked up to the trolley such that the brain can determine the course which the trolley will take.

On the right side of the track there is a single railroad worker, Jones, who will definitely be killed if the brain steers the trolley to the right. If the railman on the right lives, he will go on to kill five men for the sake of killing them, but in doing so will inadvertently save the lives of thirty orphans (one of the five men he will kill is planning to destroy a bridge that the orphans' bus will be crossing later that night). One of the orphans that will be killed would have grown up to become a tyrant who would make good utilitarian men do bad things. Another of the orphans would grow up to become G.E.M. Anscombe, while a third would invent the pop-top can.

If the brain in the vat chooses the left side of the track, the trolley will definitely hit and kill a railman on the left side of the track, "Leftie" and will hit and destroy ten beating hearts on the track that could (and would) have been transplanted into ten patients in the local hospital that will die without donor hearts. These are the only hearts available, and the brain is aware of this, for the brain knows hearts. If the railman on the left side of the track lives, he too will kill five men, in fact the same five that the railman on the right would kill. However, "Leftie" will kill the five as an unintended consequence of saving ten men: he will inadvertently kill the five men rushing the ten hearts to the local hospital for transplantation. A further result of "Leftie's" act would be that the busload of orphans will be spared. Among the five men killed by "Leftie" are both the man responsible for putting the brain at the controls of the trolley, and the author of this example. If the ten hearts and "Leftie" are killed by the trolley, the ten prospective heart-transplant patients will die and their kidneys will be used to save the lives of twenty kidney-transplant patients, one of whom will grow up to cure cancer, and one of whom will grow up to be Hitler. There are other kidneys and dialysis machines available, however the brain does not know kidneys, and this is not a factor.

Assume that the brain's choice, whatever it turns out to be, will serve as an example to other brains-in-vats and so the effects of his decision will be amplified. Also assume that if the brain chooses the right side of the fork, an unjust war free of war crimes will ensue, while if the brain chooses the left fork, a just war fraught with war crimes will result. Furthermore, there is an intermittently active Cartesian demon deceiving the brain in such a manner that the brain is never sure if it is being deceived.

QUESTION: What should the brain do?"
lulzworth (366 D)
12 Apr 10 UTC
@Chrisp: In short, your point that our own natural moral sense is imperfect is the case in point. Unless you want to dispute the notion that it is preferable to seek and follow consistent, rational conclusions rather than whatever you can think up, it seems to follow that accepting an ethical system as logically consistent and built on indisputable foundations, at the expense of your own wariness, is preferable to throwing out a well-built system because your imperfect instincts are uncomfortable with it.

I don't dispute that our evolutionary sense of morality is flawed. Again: exactly why the meta-moral appeal appears so often, and exactly why it isn't a valid counter-argument to claim that the well-supported conclusions of an ethical argument just "don't feel right to you".
lulzworth (366 D)
12 Apr 10 UTC
Unrelated sidenote: obiwan, a kid who relentlessly argues the same case in every philosophy class with another kid who is equally consuming of everybody else's time is generally referred to as "that kid", and despite what you might think, there are at least five people in that room who contain an impulse to hit you every time you speak.

As a TA for undergraduate philosophy, I can say that your professor may be amongst the five.
Thucydides (864 D(B))
12 Apr 10 UTC
Ghostmaker:

I won't ask you to describe a situation like that, I believe you.

But I maintain as any utilitarian would that if killing that totally innocent person truly would increase the overall happiness of mankind more than it would decrease it, then it is morally acceptable to do so.

How can you disagree with that? How could even the murdered innocent man disagree?

In fact let me pose the scenario, and we will see if it really is as awful as you say:

There is a man. He is a maniacally evil man, and he happens to be the President of the United States.

Uh oh.

So he decides to nuke China.

Uh oh.

But you are his chief of staff. You happened to be ignorant (for whatever reason) of all his evilness until one day you walk in on him as he is on the big bad laptop ordering the strikes. You tell him to stop; you reach for a gun. You are about to kill him.

Now let's stop there. I think most people would say it's okay to kill him. For God's sake.... he is about to murder 1 billion people, and peripherally cause the death of many more. Okay, but let's spice it up:

However, as you grab the gun, he grabs a human shield. A dude just happened to be walking by.

Now you notice a countdown. You have 2 seconds to do this. You cannot negotiate, there is nothing you can do. You gun them both down and hit disarm.

Now...... was that the right thing to do?

Of course. Where is the debate here?
Thucydides (864 D(B))
12 Apr 10 UTC
Also: lulzworth, I will second that. There are probably more than 5. Lol.
lulzworth (366 D)
12 Apr 10 UTC
Thucy: Thanks.

Of course, in your situation, Kantian ethics demands you not pull the trigger. Utilitarianism demands that you do.

To tie it into Chrisp's point: our instinctual moral sense says that we have to do it. But the actual argumentation underlying that notion is weak, whereas Kant will ultimately tell us that the President's choices are not our own, that only by shooting him do we become inconsistent with ourselves and violate basic ethical standards, and thus that we cannot. Of course, the President is making some egregious violations of the categorical imperative.
Thucydides (864 D(B))
12 Apr 10 UTC
Yes... I see your point, which is, I think, that I basically set this whole story up to make utilitarianism "feel right."

But I challenge you, give me a situation where it "feels wrong," because I haven't found one yet.
Stukus (2126 D)
12 Apr 10 UTC
+1 lulzworth. Better than Question Kid, at least. But Time-Monopolization Kid is on the list, too.
rlumley (0 DX)
12 Apr 10 UTC
You can get any number of asinine things from utilitarianism...

Utilitarianism says that it is in fact immoral to end a Ponzi scheme.

Utilitarianism says that a society in which one person is continually tortured for the happiness of everyone else is perfectly OK.

Utilitarianism easily justifies eugenics.

Need I go on?
lulzworth (366 D)
13 Apr 10 UTC
Utilitarianism claims to be able to avoid all cases of for-the-good-of-the-many torture through the mysterious process of "utilitarian calculus", but I think it's a code word for, "It would be very difficult to calculate every possible contingency of your actions, which is why consequentialism is an epistemically unsound system of ethics."

Of course, you can agree to some general guidelines for utilitarian decision-making so as to simplify things bu-- OH FUCK YOU JUST SLID INTO DEONTOLOGICAL ETHICS.
Thucydides (864 D(B))
13 Apr 10 UTC
"Utilitarianism says that a society in which one person is continually tortured for the happiness of everyone else is perfectly OK."

And so do I. What exactly is wrong with that? As long as the happiness of everyone else outweighs the one person's misery...

I would argue that ending a Ponzi scheme in the long run makes everyone happier, because justice is served and people stop losing their money. I don't see how utilitarianism claims it's immoral to end a Ponzi scheme.

Same goes for eugenics. I think that eugenics, yes, was JUSTIFIED by utilitarianism, but as I said about Hitler, they incorrectly applied it.

Say you want to kill off or sterilize a group. Let's say poor people, anyone making less than $5000 a year.

Okay. Cool. Now go ahead and use your skewed utilitarian ethics to justify.

But then do it and tell me that we were not happier and better off before such a thing was done. Eugenics is not justified by utilitarianism. Genocide/sterilization produces so much qualitative AND quantitative suffering that it outweighs any bits of fleeting happiness the others might achieve.
Thucydides (864 D(B))
13 Apr 10 UTC
And lulzworth, you are right, no one can know the consequences of their actions, which is why it is only reasonable to follow your common sense and reason as best you can.

If you fuck up, well, then that's what happens. But people after you learn from your mistakes.

Take Hitler and assume he was a totally sincere utilitarian who really thought the human race as a whole would be better off without Jews. That's a stretch but we'll just assume that. So he did what he did. But it is very easy to look back now, at the consequences, and say:

That was a bad idea. We won't do that again, because it turns out it doesn't produce more happiness than suffering... in fact... quite the opposite.
lulzworth (366 D)
13 Apr 10 UTC
Thucy: I feel like we are going in circles. You're back to "follow your moral intuition and adjust as you go along". I thought we'd argued earlier that human moral judgement was inherently flawed, and that a system based on some sort of pure practical reason was preferable.

Yet now we seem back to what "feels right". If you could demonstrate that NO rational system of ethics exists, then common sense is the next best thing we have. But we haven't established that.
Thucydides (864 D(B))
13 Apr 10 UTC
Oh no definitely not lol I never argued that myself lulzworth. It is USEFUL, but not necessarily PREFERABLE.

Sometimes all you have is your instinct.
Conservative Man
@Chrisp: There is a hole in that example: if you kill one of the men, the other one will still kill the five men, so the orphans will still live no matter what.
Having said that, and because of that, the logical answer would be to take the left turn. You save 20 people despite killing the 10 plus Leftie, and Hitler and the guy who cures cancer kind of cancel each other out.
Chrispminis (916 D)
13 Apr 10 UTC
@Conservative Man, that's not a hole. The same five men will be killed but in different circumstances with different intentions. Would you say murder is morally equal to manslaughter? Also, I'm not sure I follow the utilitarian arithmetic whereby a cancer cure and Hitler cancel out.

@lulzworth, so then it does not matter what you do, as long as what you do follows a logically consistent set of rules, then you are equally moral as anyone else following a logically consistent set of moral rules? How would you compare the validity of different sets? Or would you argue that Kant's categorical imperative is the only logically consistent set of moral rules that currently exists?

"Utilitarianism claims to be able to avoid all cases of for-the-good-of-the-many torture through the mysterious process of "utilitarian calculus", but I think it's a code word for, "It would be very difficult to calculate every possible contingency of your actions, which is why consequentialism is an epistemically unsound system of ethics."

Of course, you can agree to some general guidelines for utilitarian decision-making so as to simplify things bu-- OH FUCK YOU JUST SLID INTO DEONTOLOGICAL ETHICS."

There are different brands of utilitarianism. I'm sure some would perfectly accept the torture of one person for the happiness of others. The only reason utilitarians seek to argue that utilitarianism does not allow for this is that they recognize the contradiction between their philosophy and their internal moral sense. Would you not accept a utilitarian who holds no concern for whether or not their philosophy yields absurd conclusions which contradict common-sense morality, as long as it is logically consistent?

I could point out that many proponents of the categorical imperative might also accidentally slip into consequentialist ethics when challenged with a situation where the categorical imperative so blatantly contradicts their internal moral sense. For example, let me propose the universal maxim that one should not harm oneself. Now let us imagine a situation where a man takes me hostage with a gun and tells me that either I prick my finger with a pin or he will kill me. Would I be violating my duty to myself by pricking my finger with a pin to avoid my own death?
Conservative Man
@Chrisp: I was just saying that the 5 men killed have no effect on the decision (at least if I were the brain). And it doesn't matter whether murder and manslaughter are equal (since they aren't): the 5 men die either way, with the same result, so they should not be considered in someone's decision.
Oh, and I actually would say that the good from the cure for cancer would outweigh the bad of Hitler, since a lot more people would be saved in the long run than Hitler killed.
Chrispminis (916 D)
13 Apr 10 UTC
@Conservative Man, ok, that's a standard consequentialist stance, and that's fine. But I don't believe that you are really OK with your choice. For one, choosing left by the logic that you save 20 people at the expense of 10 suggests that you would be OK with letting ten people awaiting heart transplants die to save twenty people awaiting kidney transplants.

Are you ok with this? What if it required active action? Would it be justified to kill a random innocent person for the sake of harvesting two kidneys, a heart, and a liver to save four other people? If active action makes a difference, then how do you equate killing five men for the sake of killing them with killing five men by accident while on your way to save ten men? You're telling me you would be indifferent between killing a man who committed manslaughter while trying to save others and a man who committed murder? You're choosing between killing a good man and an evil man.
Conservative Man
@Chrisp: Well, if, like a regular person, I couldn't see what the men would do in the future, but I could tell that one was good and one was evil, and I was in the same situation, I would go right. And in real life, I would rather kill the evil man. However, in this particular situation, it is better to go left. And in real life I would not kill a person to harvest their organs, as I believe that (murder) is a sin. Letting the people die that need transplants however, is not a sin, since I did not have anything to do with their needing transplants. And they might find transplants somewhere else.
lulzworth (366 D)
13 Apr 10 UTC
@Chrisp: You're missing the point. We keep coming back to contradictions with your "own moral sense" that make certain conclusions "absurd". Again, unless you want to reopen the notion that moral instinct is flawed, you'll need to adjust your definition of "absurd" to apply only to those things which fail to meet any standard of logical rigor. The point is: unless you have a decent counter-argument, IF the categorical imperative demanded you take death over minor self-harm (it doesn't, really), AND IF Kant's reasoning is perfect (I am not claiming it is), THEN you sort of need to accept that that IS the correct course of action, REGARDLESS of your instincts.

I don't really see how you keep missing this point. Deduction 101.
lulzworth (366 D)
13 Apr 10 UTC
Also, sidenote: ConservativeMan, if you are talking about "sin" in a Judeo-Christian sense, your Bible reading got mixed up with the Chicago School of Economics and an Ayn Rand pamphlet somewhere back there. I am fairly certain that Matthew 25:40 doesn't just mean "what you are directly responsible for - allowing suffering through inaction is not included."
Chrispminis (916 D)
13 Apr 10 UTC
"Letting the people die that need transplants however, is not a sin, since I did not have anything to do with their needing transplants. And they might find transplants somewhere else."

Your choice directly deprived them of the hearts that they needed to live! What if the trolley, instead of running over hearts, caused a fire which burned down a food bank, resulting in the starvation of ten people whose kidneys were later used to save twenty people?

@lulzworth, haha, no I *think* I get your point. I brought up the example of the pinprick because I was hoping you yourself would fall into consequentialist ethics. I was somewhat expecting you to point out a flaw by saying something like, "Well, the maxim could be modified to 'do not harm oneself unless not doing so would result in even more harm to oneself.'" But you didn't really take the bait. I'm curious whether you personally hold yourself to the Kantian standard in practice or merely accept its theoretical legitimacy. =P

Still, I feel like you haven't addressed the point I made in the first paragraph, so I'll reiterate. Are all logically consistent sets of ethical rules equally valid? Or perhaps the categorical imperative is the only one you believe exists? If more than one exists (say you accept that full-on utilitarianism is at least consistent), then how would one compare the validity of the two systems if not by agreement with your internal moral sense?

See here's the issue I take with your conclusion that we ought to choose the logically consistent philosophical framework over our animal moral sense. In my mind, the natural moral sense quite adequately solves simple moral dilemmas. We've been given a sense of fairness to see that two people committing the same crime ought to be given the same punishment. We've got the capacity for empathy where we can understand that we should treat others the way we would like to be treated.

It's the more complex moral dilemmas that really screw up the natural moral sense, and it is with these complex dilemmas in mind that we turn to ethical philosophy. Now, it's great, because with the categorical imperative and utilitarianism, many of the more complex moral dilemmas become much easier to solve thanks to a logically consistent framework. But wait! What's this? Somehow some of the simpler moral dilemmas are solved quite contrary to the natural moral sense when taken to the logical conclusion demanded by the new ethical theories we've developed. What should we do? Am I so foolish to reject the dogma of rigid ethical philosophy when it so blatantly stands at odds with an internal moral sense on simple issues, where the internal moral sense might be considered more authoritative?

I think the real search is for a logically consistent ethical philosophy that does not stand at odds with the internal moral sense on the more simple ethical issues, but allows us to resolve the more complex ones that confound our animal ethics. I'm sure that's a major reason why the field of ethics has certainly not let Kant have the last word.
TheGhostmaker (1545 D)
13 Apr 10 UTC
Lulzworth is right: any argument from "look at this example, I think that is immoral" is begging the question, unless you make a meta-ethical claim that morality is subjective (which to me implies ethics is really non-existent too).


“Ghostmaker:

I won't ask you to describe a situation like that, I believe you.

But I maintain as any utilitarian would that if killing that totally innocent person truly would increase the overall happiness of mankind more than it would decrease it, then it is morally acceptable to do so.

How can you disagree with that? How could even the murdered innocent man disagree?

In fact let me pose the scenario, and we will see if it really is as awful as you say:

There is a man. He is a maniacally evil man, and he happens to be the President of the United States.

Uh oh.

So he decides to nuke China.

Uh oh.

But you are his chief of staff. You happened to be ignorant (for whatever reason) of all his evilness until one day you walk in on him as he is on the big bad laptop ordering the strikes. You tell him to stop; you reach for a gun. You are about to kill him.

Now let's stop there. I think most people would say it's okay to kill him. For God's sake... he is about to murder 1 billion, and peripherally cause the death of many more. Okay, but let's spice it up:

However, as you grab the gun, he grabs a human shield. A dude just happened to be walking by.

Now you notice a countdown. You have 2 seconds to do this. You cannot negotiate, there is nothing you can do. You gun them both down and hit disarm.

Now...... was that the right thing to do?

Of course. Where is the debate here?”

Ok, in this circumstance (noting that I am not a Kantian):
a) You have no obligation to take any action. Being an egoist, I think that, if you have a strong reason not to want to fire, you shouldn’t fire. Of course, the harm done to yourself by the death of a billion people & Nuclear Winter is considerable, so perhaps it’s in your interest to stop the attack.
b) It is perfectly okay to shoot the president in this stage. I don’t think he has any rights whatsoever if he is violating other people’s rights.
c) In matters of defence of the property right, if somebody gets killed in the crossfire, it is the person who was initially violating the property right who is blameworthy, not the guy who fired the gun.
TheGhostmaker (1545 D)
13 Apr 10 UTC
Notes:

1. I do believe in the property right, so don't criticise my egoism on the grounds "well, you'd just go around stealing then"

2. A moral system can't survive just by being logically consistent. It must be logically necessary too.
TheGhostmaker (1545 D)
13 Apr 10 UTC
""Letting the people die that need transplants however, is not a sin, since I did not have anything to do with their needing transplants. And they might find transplants somewhere else."

Your choice directly deprived them of the hearts that they needed to live!"

Nope, you cannot consider action and inaction equivalent. Demanding an action of somebody requires a very different kind of justification to demanding an inaction.

You cannot consider somebody to be deprived of something because I don't give it to them, only if I take it from them. You really need some kind of prior justification for your ethical beliefs.
lulzworth (366 D)
13 Apr 10 UTC
@TGM, Chrisp:

No, I am myself not a Kantian Saint. I am not even wholly convinced that all of Kant's arguments are valid, but honestly, even if I were, the likelihood that I would hold to the categorical imperative is very low. I couldn't even play this game, really, since so much lying is involved.

HOWEVER, I do believe that TGM raises something I should have raised awhile ago: I kept using "logically consistent" since the term was raised elsewhere, but it's true, I also mean logically necessary in the technical sense: it needs to be demonstrated not only that all conclusions of the system agree with one another, but that they are universally true conclusions derived from some form of a priori reasoning, which is the entire basis of Kantian ethics, so I suppose a point in his favor.

Chrisp: Again, when you say our "personal moral sense is adequate for solving most simple moral dilemmas", I understand the layman's truth of that statement, but from a philosophical perspective, you are once again begging the question by using your personal moral sense to assume the conclusions it generates are adequate. Without a theoretical grounding, they might very well be wildly unacceptable.
@Chrisp: I actually just realized another reason for not killing a guy to harvest his organs: other people would probably start doing it too, resulting in lots and lots of murders, and eventually there would be no one left to give the organs to.
Chrispminis (916 D)
13 Apr 10 UTC
@TGM, nor do I, but Conservative Man earlier made the claim that the circumstances and intentions preceding the death of five men were equivalent because the same five men are dead, so I was wondering why he made the distinction when it came to organ donation.

@lulzworth, ok fair enough. Still, I guess I've never been satisfied with any top-down theoretical ethics since to me it's apparent that the only reason we construct such theories is because we are endowed with an internal moral sense. Given that I do not accept theistic moral authority, I don't believe that morality actually has any transcendental value beyond the lubrication of social values and the stability of society. Ethics arises simply out of the necessity of humans, as social animals, to co-operate and engage in reciprocal relationships with other humans for their mutual benefit. Asocial creatures, such as octopi, have no use for such capacities, and despite octopi's superior cognitive abilities, animals as simple as city pigeons might have a more developed ethical sense, though this is speculation on my part.
I also agree with what Ghostmaker said. It's not my fault they need organs, therefore, it is not my responsibility. Plus, they could get organs elsewhere.
Chrispminis (916 D)
14 Apr 10 UTC
Actually, the problem implies that the heart transplant patients will not get their hearts elsewhere, while the kidney patients will, but the brain only knows about the hearts and not the kidneys.
lulzworth (366 D)
14 Apr 10 UTC
It's also worth noting that the epistemic notes in the puzzle are just designed to further overwhelm your gut morality. If anything, it's meant to illustrate your natural faculty's easy capacity for being overwhelmed. Also, as someone in philosophy, their phrasing of "the brain knows..." is really amusing.

Chrisp: I think whether or not moral law is derived from a justified source is part of the responsibility of each individual theory. It's not really sensible to discount the possibility, since, as either of us can easily conceive of there being such a thing as a functional moral system, it IS possible, in a strict metaphysical sense.

Have you read Critique of Practical Reason? It does some very good work towards that point, though having read Critique of Pure Reason helps with understanding some of its assumptions, and that book is a mess for the untrained.


48 replies
Jimbozig (0 DX)
14 Apr 10 UTC
Another gunboat game
in 20 min gameID=26636
6 replies
Open
Draugnar (0 DX)
12 Apr 10 UTC
Interesting Gunboat to watch and enjoy.
gameID=16346 - I won't comment as it is an ongoing game and would appreciate it if none of you did beyond the basic "Nice move by ___ in ___" kind of thing. We are in Fall 13 with 6 nations still fighting hard.
12 replies
Open
Jimbozig (0 DX)
14 Apr 10 UTC
Gunboat game
live in 20 min. gameID=26637
3 replies
Open
IKE (3845 D)
13 Apr 10 UTC
Hall of Fame Gunboat.
http://webdiplomacy.net/board.php?gameID=26452
I invited 15 players above me in the Hall of Fame. 3 have joined.
If you want to play in this game post here or PM me for password.
Top players will get in first.
5 replies
Open
5nk (0 DX)
13 Apr 10 UTC
Live wta gunboat
2 replies
Open
S.E. Peterson (100 D)
14 Apr 10 UTC
WTA Live Gunboat in 1 hour (30 pt bet)
http://webdiplomacy.net/board.php?gameID=26629
0 replies
Open
yamchagoku1 (161 D)
13 Apr 10 UTC
LIVE Ancient Med Game starting in 30 min!
Self explanatory
http://www.webdiplomacy.net/board.php?gameID=26624
Join here and get in on the fun!
2 replies
Open
Mr Pidge (243 D)
13 Apr 10 UTC
Live game, 10:30 PM (GMT)
http://webdiplomacy.net/board.php?gameID=26620

3 more :)
0 replies
Open
MadMarx (36299 D(G))
11 Apr 10 UTC
Top 20 Ghost Rating Game
This will be first come first served, I'll send the info to each in a PM, join if you'd like. If we can't get it started in a week, I'll open it up to the top 50.
39 replies
Open
nola2172 (316 D)
13 Apr 10 UTC
Javascript Timeouts when entering orders on World Variant
I am not entirely sure where this goes, so if I am posting in the wrong spot, please redirect me. In a world variant in which I am playing (http://webdiplomacy.net/board.php?gameID=20858), when I (and it appears others) try to either convoy an army or move an army that borders the sea, entering the final combo box of the order takes a very long time, and I recently had to click the "No" button to allow Internet Explorer to continue around 16-18 times per order to get the orders to "stick".
2 replies
Open
StevenC. (1047 D(B))
13 Apr 10 UTC
So....
I think I'm starting to suffer from game exhaustion. Any ideas on how I can counteract this?
7 replies
Open