Topics - Pendulate
« on: August 01, 2015, 04:11:51 AM »
« on: July 31, 2015, 02:15:23 AM »
http://rationallyspeakingpodcast.org/show/rs139-eric-schwitzgebel-on-moral-hypocrisy-why-doesnt-knowin.html

From the transcript:

J: Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and with me today is our guest, Professor Eric Schwitzgebel. Eric is a professor of philosophy at the University of California, Riverside. He's the author of the books Perplexities of Consciousness and Describing Inner Experience? Proponent Meets Skeptic. He's also the author of the excellent philosophy blog Splintered Mind, which I've been a fan of for years.
Eric, welcome to the show.
E: Hey. Thanks for having me on.
J: So great to have you.
One of the things that Eric is most famous for is his work studying the moral behavior of moral philosophers, examining the question: Do people whose job it is to study the question of how to behave morally, do those people actually behave more morally than the average person? Or than the average person in a comparative reference class, like other professors, for example?
Hopefully it's not too much of a spoiler to say: No, they don't...
...At any point, did you look at specific dimensions of morality, like specific behaviors that ethicists were doing more or less of than other people?
E: Yeah, most of our stuff has been on that. That study established in our minds two things. One is that there's no consensus among philosophers about how ethicists behave. That's already an interesting thing to establish. Because a lot of people seem to think it's obvious that ethicists will behave the same, or better, or worse. But it's not obvious to everyone. People give different answers when you actually ask them, without their knowing the data. The other thing it established was that one’s peers’ opinions might have some relation to reality, or they might not.
We've got now at this point 17 different behavioral measures of different kinds of behaviors that are arguably moral. Now, there's lots of dispute about what kinds of behaviors are moral, so none of the individual measures are going to be convincing to everyone. But they tell a very consistent story across the board when you look at them all.
J: What are some examples of individual measures?
E: We looked at the rate at which ethics books were missing from academic libraries. That was our second study. We found that ethics books were actually more likely to be missing than comparison books in philosophy, similar in age and popularity. We looked at whether ethicists and political philosophers were more likely to vote in public elections than other professors. Here we had access to publicly available voter participation data in five US states.
J: Can you also look at whether ethicists' self-reports of voting are accurate? That seems like a separate measure.
E: Yes, we did look at that actually. Probably our biggest study was a survey sent to the same five US states for which we had the voting data. And we asked ethicist respondents, a comparison group of non-ethicists in the same philosophy departments, and another comparison group of professors not in philosophy at the same universities. Three equal-sized groups of respondents.
We contacted about a thousand respondents in total. And we got about 200 responses from each group, so a pretty good response rate.
We asked these people, in the first part of the questionnaire, their opinion about various moral issues. And then we asked them in the second part of the questionnaire to self-report their own behavior on those same issues.
Then on some of the issues like the voting issue, we also had, about the same participants, some direct measures of their behavior. So those don't rely on self-report.
Although I should say that in the interests of participants' privacy, we converted everything into de-identified codes... So we're not able to draw inferences about particular individuals. All the data was analyzed at the group level.
J: Got it. And the pattern you saw overall was…?
E: Ethicist behavior was basically identical across the board to the other groups of professors. There were some differences, but not very many, and not very strong. And overall, when you put it together, and you combine the data in various kinds of ways… It looks like there's no overall trend toward better behavior.
Although we did find, when we asked their opinions about various moral issues, that ethicists tended to have the most demanding opinions. They thought more things were morally good and morally bad, and were less likely to regard things as morally neutral, than were the other groups.
J: They just didn't act on those principles.
E: They didn't seem to act on those principles.
The most striking example of this was our data on vegetarianism. We didn't have any direct observational measures of this, but the self-report measures are already quite interesting.
Most of the questions in the first part of the questionnaire were set up so that we have these 9-point scales that people could respond on -- very morally bad on one end, through morally neutral in the middle, to very morally good on the other end. Then we had a bunch of prompts of types of behavior that people could then rate on these scales.
One of the types of behavior was regularly eating the meat of mammals, such as beef or pork. In response to that prompt, 60% of the ethics professors rated it somewhere on the morally bad side; 45% of the non-ethicist philosophers did; and I think it was somewhere in the high teens, 17% or 19%, something like that, for the non-philosophers. Big difference in moral opinion.
Then in the second part of the questionnaire, we asked, "Did you eat the meat of a mammal, such as beef or pork, at your last evening meal, not including snacks?" There we found no statistically detectable difference among the groups. Big difference in expressed normative attitude about meat eating; no detectable difference in self-reported meat eating behavior.
J: Pretty interesting. I'm wondering whether this is a result of ethics professors not really believing their ethical conclusions, like, having come to these conclusions in the abstract?
You know how people might say that they believe they'll go to hell if they do XYZ, but then they do XYZ. And you want to say, "I think you, on some level, don't really believe that you're going to go to hell by doing those things." I wonder if these conclusions are somewhat detached from their everyday lives.
I was reminded of this anecdote I heard back when I was in the economics department, about some famous econ professor who ... I think he was famous for a decision-making algorithm or something. And at one point in his career, he was facing a tough decision of whether or not to leave his current department for a different department. He's agonizing about this. And one of his colleagues says, "Well, Bob” -- I don't know his name, let's call him Bob -- "Bob, why don't you use your decision-making algorithm to tackle this?" And Bob says: "Oh, come on now, this is serious!"
Anyway, I'm wondering if something like that's going on. Or if you think, no, they really do believe these conclusions. They just don't care enough to act on them.
E: I'm very much inclined to think they believe them on an intellectual level, at least. It sounds like the econ professor you're talking about regarded it a little bit like a game. When I talked to philosophy professors about things like donation to charity, which is another question we asked about, or eating meat, they have intellectual opinions that I think are ... They don't regard it just as a game. I think it's actually a pretty real moral choice. Now some of them think it's perfectly fine, and some of them think it's not, but I think they take it pretty seriously for the most part.
It'll be interesting to think about whether there's some way to measure this. My inclination is to think that they take it pretty seriously at an intellectual level, and then the trickier question is whether that penetrates their overall belief structure.

TL;DR: read the title.

I find this both amusing and depressing. I mean, I don't think personal hypocrisy invalidates an argument, but come on -- if you want to be an ethicist, shouldn't you actually, y'know, care about ethics? As in, more than for the intellectual stimulation it gives you?
« on: July 09, 2015, 12:04:10 AM »
Before addressing the question:
- This isn't an argument for moral nihilism.
- This accepts the existence of an objective system of value (which I think any reasonable person has to accept).

So the problem here is not whether some actions are more moral than others, but whether there is any empirically quantifiable distinction between actions that are moral/ethical and those that are immoral/unethical. Take this moral dilemma from Peter Singer: You are walking down the street when you notice a baby drowning in a puddle. You can easily step in and save the baby, but it would mean ruining your new pair of $100 shoes. Are you morally obligated to save the baby?

Now the point of this is to demonstrate that since all of us would (hopefully) save the baby at the expense of our shoes, we are equally obligated to donate a similar amount of money to charity for a child in need somewhere else -- their proximity to us is irrelevant. But this brings up another, more unsettling problem: if choosing not to save the drowning baby is indisputably unethical, then wouldn't choosing not to donate to charity be equally so? I don't think many people would be comfortable with this, but I really can't see any logic to get around it. And then, if you do choose to donate $100 to charity, the question arises: why only $100? Why not $150? $200? Assuming you could be more ethical than you are being, are you not essentially being unethical? Where do we draw the line?

If the line is drawn merely for pragmatic purposes, then the only truth to be derived from it is that nothing is truly ethical; some things are simply more or less ethical than others. Yet at the same time, everything we do has unethical components.

Some quick arguments against this would be:

"There's a difference between action and inaction" -- that is, there's a difference between choosing to actively harm someone and choosing to sit back and let them suffer. That's obviously flawed reasoning, because you are choosing to let them suffer either way; consequentially speaking, choosing not to act is no less morally repugnant.

"There's a sliding scale; just as some things are more unethical than others, some things are more ethical than others, but they are still more ethical than not" -- that is, murder is worse than stealing a candy bar, just as donating $150 is more ethical than donating $100. Okay, but the problem is where an action crosses the line from unethical into ethical territory -- judging by the drowning baby example, this seems based on nothing more than intuition.

"This doesn't invalidate ethics, though. There's still a continuum of more or less ethical actions, and we should still strive to be more ethical people." Absolutely. This doesn't discredit objective morality at all, but it does call into question whether the colloquial definitions of ethical/unethical can in fact be logically defended. At any rate, it looks like the colloquial definitions need a serious overhaul.
« on: June 18, 2015, 03:10:22 AM »
No, it's not. Post favorites here.
« on: June 05, 2015, 01:03:15 AM »
Somewhat long but worthwhile wall ahead.
Okay, so I'm having some trouble wrapping my head around a thought experiment. It's a follow-on from the thought experiment posed by Thomas Nagel on death. The original is pretty straightforward, and goes something like this:
Imagine an intelligent man who receives a brain injury that reduces his mental abilities to that of an infant. He's unaware of his impairment and is a perfectly happy man-child.
The implication being that this impairment constitutes a severe harm to the man by depriving him of a range of experiences, like reading poetry and listening to music etc. I think this is a deeply flawed argument, but there you have it.
Okay, so here's the follow-up:
Imagine you develop a cure for this man's mental impairment. Imagine it is cheap to make and easy to administer. Restoring this man's mental faculties would be as simple as putting a vitamin tablet in his food. Would you then be morally obligated to give it to him? And would refusing to do so be harming him?
So the difference here is the prevention of experiences, rather than the mere deprivation of them (although I don't think Nagel's exercise actually illustrated a deprivation). I'm inclined to think that there's no real difference, assuming you aren't aware of what you're missing in either case. So it can't be harmful. But on the other hand, I dunno, it just doesn't sit right. I mean, you are missing out on a better experience than the one you're having, and obviously being aware of that would make it worse, but is awareness really all you need in order to be harmed? Isn't ignorantly missing out on a good still worse than getting it?
Someone throw me a bone here. Feel free to give your own opinion on the thought experiment, too.
« on: May 31, 2015, 12:14:10 AM »
What do you think?
I find it common for people to be against hunting for sport, but okay with hunting as long as it's done for food. But I can't see any worthwhile distinction; what's the difference between killing an animal because you want their head mounted on your wall, and killing it because you want a steak on your plate?
Assuming in both cases that it happens in a developed society with access to supermarkets, obviously.
I think this is mostly a conditioned mindset bred from the idea that hunting for food serves a purpose other than merely deriving pleasure at the animal's expense. But this seems to break down pretty quickly when you look at it closely.
Please vote and explain the reason behind your vote. Thanks
*Also this is not intended to be a "go vegan" proselytism. I'm just interested in the ethics of hunting.