Thursday, December 31, 2009

Suspended disbelief – why?

One of the questions I asked in my first post on this subject is why we can suspend disbelief for fiction. Oddly, I think it may be connected to how we process reality.

Philosophers have wrestled forever with the problem of solipsism. How do we know that our individual consciousness is not all there is, that everything we experience isn't just a figment of our imaginations? I think the odds are against it – if my consciousness is the only intelligence in the universe, how did that intelligence come into being? (Is it just you and me, God?) But there is no logical requirement that the universe consist of more than our own minds.

What makes solipsistic speculation possible is the fact that we do have to put the entire universe as we know it through our consciousness in order to be conscious of it. That doesn’t mean it’s not out there, but it does mean that what we see is our version of what’s out there, not really what is out there. I don’t mean to make too big a deal of the idiosyncratic nature of this translation. It’s not interesting to me, today anyway, whether you see “blue” differently from me. On many days we can agree that the sky is blue; we perceive such things similarly enough to do business, and that makes language and, therefore, civilization possible. So the differences can wait.

I recently ate at a restaurant where the menu stated “Substitutions are not allowed, but additions are welcome.” Adding is easier than substituting, and nowhere is that more so than in nature and, especially, evolution. I suspect that how we process reality differs from how a hamster or a fish or a puppy processes it only in added complexity. Thus, I think that our ability to process events is an extension of our ability to process things, that the machinery we use to see that an event is more than a series of random motions, or a process more than a series of random events, is really the same machinery that we use to determine that a chair is not a random assemblage of material.

Applying machinery powerful enough to construct narratives to figure out that a chair is a chair seems wasteful, so we probably do use neural shortcuts for such things, but those shortcuts seem to me to be derivatives, memories of the larger machine’s earlier workings. Nature creates machinery and derivative shortcuts rather than direct alternatives – additions, not substitutions – like Jack Nicholson’s famous order of toast in “Five Easy Pieces.” “Now, just hold the chicken.” (A case of “addition” by subtraction: Jack adds to the concept of a chicken sandwich the concept of absent chicken. In logical terms, adding mayo and removing chicken are analogous processes – amendments to the chicken sandwich template.)

How, though, do we know, for example, that a side order of toast is “like” a chicken sandwich without the chicken? To make that connection, we must be able to recognize patterns. There are lots of surfaces we can sit on besides chairs, and we know instantly when something is sufficiently “chair-like” either to be called a chair or, at the very least, to be sat upon. Categorization is about analogy – we make decisions based on our perception that something is like something else. Otherwise, how can we learn anything useful? Unless what we encounter strikes us as like something we have learned, how do education and experience help us cope?
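That “like something else” judgment can be sketched in a few lines of code. This is purely a toy illustration – the features, prototype, and threshold below are all invented for the example – but it captures the idea of categorization as a similarity score against a remembered pattern rather than an exact match.

```python
# Toy illustration of categorization by analogy: score a new object against a
# remembered "chair" prototype and decide whether it is chair-like enough.
# The features, prototype, and threshold are all invented for this sketch.

CHAIR_PROTOTYPE = {"has_seat": 1, "has_legs": 1, "has_back": 1, "flat_work_surface": 0}

def similarity(obj, prototype):
    """Fraction of prototype features the object matches."""
    matches = sum(1 for key, value in prototype.items() if obj.get(key) == value)
    return matches / len(prototype)

def chair_like(obj, threshold=0.75):
    """Call something chair-like if it is 'close enough' to the prototype."""
    return similarity(obj, CHAIR_PROTOTYPE) >= threshold

stool = {"has_seat": 1, "has_legs": 1, "has_back": 0, "flat_work_surface": 0}
table = {"has_seat": 0, "has_legs": 1, "has_back": 0, "flat_work_surface": 1}

print(chair_like(stool))  # True  -- close enough to sit on
print(chair_like(table))  # False -- legs alone don't make it a chair
```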

If we are to be able to recognize things, we need to be able to recognize patterns, and if we can recognize things and patterns, we should be able to apply the same mental gear to recognize events and common narratives.

Which brings us to fiction. Nature abhors excess capacity as much as she does a vacuum. If we can learn by analogy to events, why should we be limited to events that actually happened? (Assuming for the moment that history, as Napoleon said, isn’t just lies agreed upon.) Maybe there are fictions that, if we could internalize them – if we could make them equivalent in teaching power to actual experience – would teach us common lessons, especially the cautionary ones, that are useful to know.

How, then, from an evolutionary standpoint, do we imbue fiction with the power to teach? Is it most profitable to listen to a story while remaining aware throughout that it’s just something some guy made up, something that didn’t really happen? Wouldn’t it be great if, just while the story is being told, we could immerse ourselves in it as if it were really happening? Could any other posture elicit a greater dose of education from the story?

In other words, it seems to me that if we didn’t suspend our disbelief toward stories, we could not learn from them, at least not as well. We wouldn’t enjoy them as much, wouldn’t seek them out, tell them (to a bored audience), or, given the dull response, even bother to make them up. What I’m saying is that we have fiction precisely because we are willing to be a good audience for it, that is, to suspend our disbelief. And given the pedagogical power of stories, the idea that we would not have come up with fiction as a way to deliver more of them just seems foolish. Of course, we have fiction, so, of course, we have learned to suspend our disbelief to accommodate it. Otherwise a race of story-tellers would have eaten our natural-selection lunch long ago.

Wednesday, December 23, 2009

The Funny Game of Suspended Disbelief

Another of those philosophical musings I promised myself. SPOILER ALERT – Movie plots discussed.

Why do we suspend disbelief? Why can we? And what are the nuances?

I found myself asking those questions after stumbling onto the last five minutes of “Funny Games” on cable. I knew what the movie was “about” – more on that later – and I knew that the action was too brutal for my taste. So I’ve never watched it all the way through. I did, however, watch the last two minutes some time ago to make sure that the central home invasion ends as Hollywood likes, with dead invaders, and I was surprised and disturbed to learn that it does not. But I did not know the details. By watching the last five minutes last night, I picked up that one of the victims is casually drowned right before the sociopathic villains start their mayhem over again with new players. It was quite a disturbing scene.

Today, I searched for reviews of the movie to see what there was to like about it – why people actually paid to see it. Many of the reviewers liked that the movie, as an exercise in terror, thwarts the viewers’ expectations of the bad guys getting their comeuppance. But several reviewers mentioned three related (to me) aspects of the movie of which I, having not watched all of it, was unaware. The first is the breaking of the fourth wall – in one scene (or more?), one of the villains speaks directly to the audience or mugs for the camera. The second is a scene where the female victim succeeds in killing one of her attackers, but the other then accuses her of “breaking the rules” – an ironic reference, perhaps, to the fact that the movie itself was about to “break the rules” – and uses her TV remote to “rewind” the movie so that he can prevent that particular outcome. The third thing, which seems unrelated at first, is that, according to the reviewers, all of the violence in the movie takes place off screen (though not out of earshot).

What seems to have bothered some reviewers most was the “rewind” scene, the physical impossibility of the action. There we are, all caught up in suspended disbelief, treating the invasion as if it is actually happening, and then, pow, we are reminded by the illogic of the action that we are watching a fiction, and that sucks, at least it does if we are in some sense pretending that it is not a fiction. But should we be?

Have you seen “The War of the Roses”? If so, can you summarize the plot? Hint: it has nothing to do with Michael Douglas and Kathleen Turner beating each other’s brains out. No, the plot of “The War of the Roses” is “A divorce lawyer tells a potential client a cautionary tale to test his commitment to the process.” The tale he tells, which takes up the bulk of the movie, and which tempts us to move inside its wrapper and suspend our disbelief as to it, is simply incredible. But instead of complaining that the “plot” is incredible, we need to understand that, even within the movie, the story is a fiction, an exaggeration of the perils of divorce litigation. By treating the excesses of the tale as mere embellishments by the lawyer character to make his point, we can dismiss the incredible parts as intentionally so without disrupting the flow of the main plot – which, perhaps unbeknownst to us until we reflect on why the story’s extravagance is intentional, is the lawyer’s meeting with the potential client.

Something like that, I think, was director Michael Haneke’s goal in “Funny Games.” The story is too brutal actually to tell as if it were really happening in an imaginary universe. It certainly was for me: so long as I thought that the plot was “Two sociopaths invade a home and torture/kill a family,” I had no interest in watching it. But if the plot is “Some guys make a movie about home invasion to explore how the use or non-use of Hollywood conventions affects an audience” – if that is the plot of the movie I am watching, and not just (but maybe, also) the purpose of Haneke’s making the movie itself – then I can suspend my suspension of disbelief from time to time to remember that I am watching the making of a movie, and not that movie itself. But I have to be able to go back into the movie within the movie in order to allow the moviemaker to find out how I would react to such a movie, if he actually made it.

The notion that we are watching what is essentially an academic exercise is heightened, I think, by the low-budget touch of off-screen violence. Yes, there is an inquiry to be made into the effectiveness of such action, but there are also the practicalities of the film budget. “Funny Games” is not necessarily a low-budget movie – $15,000,000, I think – but the film within the film clearly is. The real actors are highly paid in the real world, but the characters, if they are actors, too, within the movie, are nobodies as far as we know. Why waste money on stunt doubles and FX violence, especially when you can use the device to see how off-screen violence plays? Necessity as virtue. Nice.

And if any doubt remains, just listen to the apparently nonsensical jabbering of the killers in the last few minutes of the movie, where they talk about colliding universes of reality and fiction, and about which of the two is to be treated as “real.” That’s a lot of writing to be pointless. But the setting and action during that scene are so distracting that we don’t listen. These guys are crazy, right? And they speak so fast…

I think “Funny Games” is a masterful piece of moviemaking. That does not mean that I could stomach watching it, even thinking that it's about what I think it’s about. But I may give it a try, this time with my disbelief firmly in place, if only to see if that’s possible.

Of course, this little movie review doesn’t even address the question of why we can suspend disbelief, much less answer it. Maybe next time…

Sunday, December 20, 2009

C-SPAN and the Law of Unintended Consequences

I admit to a sort of obsession with dominoes, how one thing leads to another. One of my domino constructions starts with C-SPAN – C-SPAN 2 to be precise – and ends with governmental paralysis.

Remember “Mr. Smith Goes to Washington,” where Jimmy Stewart’s young Senator holds the floor for days on end until public opinion turns to his view and his filibuster saves America? The movie is pure fantasy, of course: two-thirds of the senators present and voting (back then) could have cut off debate, which means that at least thirty other senators must have shared our hero’s view of the pending legislation. Where were they?

Anyway, the fact is that filibusters used to be conducted by real senators giving real speeches. The current practice, though, is for senators who oppose a bill to announce their intention to filibuster it and, thereby, to require sixty votes for its passage. According to Wikipedia, Senate Rule 22, which, of course, is not written in English, allows for speechless filibusters. The same Wikipedia article also says that the Senate Majority leader can order that real speeches be made. I admit that I cannot find any evidence in the rule that any of this is so. Nevertheless, the Senate makes its own rules, and the Senate can change them – though even there Wikipedia says that one Senate rule requires a 2/3 vote to change the Senate rules – a Gödelian nightmare if ever I saw one. (What if the rule said that the rules could only be changed when Hell freezes over?) Thus, if there are speechless filibusters, the Senate rules clearly countenance them, and those rules clearly could be changed to get rid of them.

Why, then, are speechless filibusters allowed? I can think of only two reasons, although I’m well aware that that’s not the same thing as saying that there are only two reasons. One of my candidates is laziness. Both parties use the filibuster when they’re in the minority. Why should they oblige themselves to actually have to blabber on? If one side makes the other actually speak, the same will happen to them when their turn comes. My second candidate, though, is the first domino in my chain: C-SPAN 2, which provides coverage of Senate floor speeches. I don’t believe that filibusters are all that attractive to watch, and I think that if the American people got to see enough of that particular bit of sausagery, they would not react kindly. As a result, the Senators, in order to preserve their own jobs and to preserve the filibuster, per se, have agreed that they should not actually have to carry one out.

I am sympathetic to the Senators’ plight (but only to their plight). The filibuster is an important tool whereby the tyranny of the majority can be constrained. And I’m perfectly happy that this current Senate needs 60 votes on such major things as healthcare reform. But on judicial nominations, or Pentagon appropriations? On things for which Senators would not have taken the trouble actually to filibuster, or over which they would have looked foolish while filibustering, things for which a majority in all good conscience should be enough to get done? Nah, I think Senators should be made to play the game on those bills. But I agree that the cost of making them play the game is too high with C-SPAN documenting every wasted minute of it.

That’s why I’m sympathetic to the Senators’ plight. But the answer, it seems to me, is quite simple. The Congressional Record records every word spoken in the Senate, and members of the press are permitted in the gallery. So, there is no fear that the Senate will become a secret society if the C-SPAN 2 cameras are turned off during extended debate. I think the senators should be made to filibuster the bills they really, really want to stop, and the first hour of each senator’s speech should be televised, so that if he actually has something to say, the people will have access to it; after that, the cameras should be turned off until the next speaker rises.

A televised filibuster is not a baby; there is no reason not to cut it in half to preserve the device and yet restrict its use to cases where it’s worth the trouble. Tom Friedman has been complaining about how our system is only capable of “sub-optimal solutions.” Maybe we could make them a bit less sub-optimal if some of them only required 51 votes to get out of the Senate.

Friday, December 18, 2009

Absolutely (not).

Dragged by events into policy issues, I have posted almost exclusively on politics and economics, when I had actually hoped to hold forth more on philosophical stuff. With what appears to be a break in the action, here’s something along the latter lines.

The subject is moral absolutes. In what follows, I may use the terms “morals” and “ethics” interchangeably in some places and not so in others. I can’t find a reliable distinction that makes one word always preferable to the other in every context. But where the distinction can be observed, I will use “ethics” to describe behaviors from the actor’s perspective, and “morals” to describe them from the perspective of the community that the behaviors affect, either directly or as a consequence of the community’s being made up of members who exhibit them.

But back to moral absolutes. Are there any? I say “no.” But then what? Relativism, at least as it’s commonly understood, is not necessarily the only place one can go without absolutes.

I start from the premise that we think about morality with the object of forming principles on which to base our ethical decisions. I recognize that some people reject the whole notion of “thinking" about morality, favoring instead the notion that one ought to strive to acquire the aretaic virtues, extracting ethical principles only as descriptive of how virtuous people are observed to be, not prescribing them as deontic rules to be observed. But to me, aretaic and deontic ethics are the vinyl and CD of the same music. I will say, though, that the aretaic school has one thing right: if there are no rules, per se, then a fortiori there are no absolute rules. And that’s really the point of this post: to harmonize aretaic and deontic ethics on the issue of absolutes.

At the outset (four paragraphs in, and I’m still at the outset – yikes!) I must declare my affinity for aretaic ethics. It’s like the old line “Don’t marry for money; go where money is, and marry for love.” Good people don’t need rules; the rules need them. But we live in a world where much of ethics is debated on the deontic plane, and rather than say that the issue of absolutes merely demonstrates the futility of deontic analysis, I prefer to refine that analysis so that it can be of use to those who find it useful, even as a way to grow in arete.

Actually, I want to, er, press the vinyl analogy. Leaving relativity and quantum mechanics aside for the moment, there is no hard cutoff built into how high or low a sound an analog disk can record. But a CD is expressly limited by its sampling rate and bit depth: pitches above roughly 22 kHz simply cannot be represented by the samples, and amplitude is captured only to a finite resolution. And yet, one assumes that for most ears CD technology is adequate for recording all of the music that has ever been recorded on vinyl. Some audiophiles can hear analog nuances that are lost in digital recordings, but those of us who are not audiophiles get along quite well with CDs as our music source.
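For the technically inclined, here is a minimal sketch of that hard ceiling, assuming the standard CD sampling rate of 44.1 kHz; the particular tone frequencies are just illustrative. A pitch above the Nyquist limit (half the sampling rate) doesn’t merely lose nuance – it comes back as a different pitch entirely.

```python
# A minimal sketch of the CD's hard ceiling (assumed 44.1 kHz sampling rate).
# Any tone above half the sampling rate -- the Nyquist limit -- cannot be
# represented; it "folds back" and is captured as a different, lower pitch.
# The tone frequencies below are illustrative, not taken from the post.

FS = 44_100          # CD sampling rate in Hz
NYQUIST = FS / 2     # highest representable frequency: 22,050 Hz

def apparent_frequency(true_hz, fs=FS):
    """Pitch a sampled pure tone appears to have after aliasing."""
    folded = true_hz % fs
    return folded if folded <= fs / 2 else fs - folded

print(f"Nyquist limit at {FS} Hz sampling: {NYQUIST:.0f} Hz")
for tone in (1_000, 15_000, 22_050, 30_000):
    print(f"{tone:>6} Hz tone is captured as {apparent_frequency(tone):>7.0f} Hz")
# The 30,000 Hz tone comes back as 14,100 Hz; a vinyl groove has no such hard
# cutoff, though cutting lathes and styli impose practical limits of their own.
```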

Can we say, therefore, that CD technology is “always” adequate? Well, for some people – most people, really – it is. If one of these people has to decide whether to buy a CD, is there any point in that person inquiring as to whether some audiophiles might find a particular CD inadequate? Or should the person, being one of those people who cannot tell a CD from a vinyl recording, simply order their musical life on the “absolute” principle that, for them, CDs are absolutely adequate? Even if we tell this person that there could be a piece of music for which even they would find a CD inadequate, unless they can actually identify such music easily, what good does the information do them? In other words, whether or not CD technology is always adequate, many listeners are best advised as a practical matter to behave as if it is.

In the ethical realm, this logic brings us to a question of human engineering. Is it better for a society to teach its children that (i) honesty is the best policy, or (ii) we cannot be 100% sure that honesty is the best policy, but there are no known instances in which honesty is not the best policy, i.e., it is the best policy so often, and our ability (dulled by both ignorance and bias) to discern situations in which honesty is not the best policy is so suspect, that we ought to behave as if honesty were the best policy? (I’m using “honesty” here to mean refraining from deception intended to defraud someone for the benefit of the defrauder and the detriment of the defrauded. Little white lies don’t count: no harm, no foul.) For some people, it makes sense that they be taught that there are moral absolutes and that honesty is one of them. For others, it’s ok, I think, to say that there are no moral absolutes, but there are some principles that are so often true that we are best advised to act as if they were absolutes.

If, as a matter of selective pressure, the people in a society are most likely to survive if as many of them as possible behave as if honesty were the best policy, what follows? Moral absolutes, I think. Thou shalt not bear false witness. Teaching moral absolutes (like honesty is the best policy) works better than teaching the truth (how ironic!). How, then, can we be honest and still teach the simplified version? I submit that we must believe that there are moral absolutes so that we can honestly teach that there are moral absolutes, because it is in our community interest that the absoluteness of those moral rules be accepted, and we cannot teach it if we don’t believe it. (The Soviets tried to teach things that they knew were false, and look where it got them.)

Moral absolutes thus join Voltaire’s God among those things we would invent if they did not exist – not coincidentally, seeing as how morality is one of God’s principal contributions to our purported understanding of things. So maybe it’s immoral for me to argue that there are no absolutes, just things that are true often enough that we should act that way. But I can’t lie. That would be wrong.