Hubris and Ignorance
Feb 27, 2012
 

I may not know what I think I know, you know?

So I’ve been trying to come to some understanding about how very bright people, like some very close biological relatives of mine, can see the political landscape so differently than I do.

Be warned; I’m about to get all formal essay on you…

1. Cognitive hubris: each of us believes that his map of the world is more accurate than it really is.

2. Radical ignorance: when it comes to complex social phenomena, our maps are highly inaccurate.

Cognitive hubris can explain large, persistent disagreements over such issues as financial regulation and Keynesian fiscal stimulus.

Cognitive hubris is particularly troublesome when combined with radical ignorance. Indeed, this configuration justifies limiting government intervention in order to avoid setting up systems that are excessively fragile.

Kahneman on Hubris

“Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance.” — Daniel Kahneman¹

Cognitive psychologist Daniel Kahneman’s new book, Thinking, Fast and Slow, is a capstone to a distinguished career spent documenting the systematic flaws in human reasoning. He finds it useful to describe us as having two systems for thinking.

System One, as he calls it, is quick, intuitive, and decisive. It may be described as often wrong but never in doubt. System One is always active and plays a role in every decision that we make because it operates rapidly and unconsciously.

System Two is deliberative and logical. In principle, System Two can detect and correct the errors of System One. However, System Two has limited capacity, and often we do not invoke it before arriving at a conclusion. Even worse, we may deploy System Two to rationalize the conclusions of System One, rather than to question those conclusions and suggest appropriate changes.

Suppose you were to ask yourself how well you understand the world around you. How accurate is your map of reality?

If you interrogate System Two, it might reply, “There are many phenomena about which I know little. In the grand scheme of things, I am just blindly groping through a world that is far too complex for me to possibly understand.”

However, if you were to interrogate System One, it might reply, “My map is terrific. Why, I am very nearly omniscient!”

Evidently, in order to perform its function, System One has to have confidence in its map. Indeed, elsewhere Kahneman has told a story of a group of Swiss soldiers who were lost in the Alps because of bad weather. One of them realized he had a map. Only after they had successfully climbed down to safety did anyone discover that it was a map of the Pyrenees. Kahneman tells that story in the context of discussing economic and financial models. Even if those maps are wrong, we still feel better when using them.²

In fact, a number of the cognitive biases that Kahneman and other psychologists have documented would appear to serve as defense mechanisms, enabling the individual to hold onto the view that his map is the correct one. For example, there is confirmation bias, which is the tendency to be less skeptical toward evidence in support of one’s views than toward contrary evidence.

System Two is evidently not able to overcome cognitive hubris, even in situations where one would expect System Two to be invoked, such as forecasting the difficulty of a major undertaking. Organizations are much more likely to be overly optimistic than overly pessimistic about the time and cost of completing projects. Kahneman calls this the “planning fallacy.” One plausible explanation for this is that planners overestimate the quality of the maps that they are using to make their forecasts.

Radical Ignorance

Political scientist Jeffrey Friedman uses the term “radical ignorance” to describe what he sees as the low quality of the maps that all of us have in our complex social environment. He contrasts this radical ignorance with the assumptions that economists make, in which market participants and policymakers possess nearly perfect information.³ Indeed, this year’s Nobel Prize once again reinforced the popularity in mainstream economics of “rational expectations,” a particularly stringent assumption that economic actors possess uniformly high-quality information.

Largely unwilling to consider ignorance, economists usually fall back on incentives as explanations for phenomena. For example, economists explain the buildup of risk in banks’ portfolios in the years leading up to the crisis of 2008 as resulting from moral hazard, in which bankers knew that they were going to be bailed out if things went poorly. However, Friedman points out that if they had truly been seeking out high returns with high risk, they would not have been obsessed with obtaining the securities with the most pristine risk rating: AAA. Low-rated securities would have been used to exploit moral hazard even more effectively, since they paid much greater yields than higher-rated securities.

Friedman’s narrative, rather than focusing on incentives, emphasizes what I have been calling cognitive hubris. Mortgage lenders believed that new underwriting tools, especially credit scoring, allowed them to assess borrower risk with greater accuracy than ever before. Such knowledge was thought to enable lenders to discriminate carefully enough to price for risk in subprime markets, rather than avoid lending altogether. On top of this, financial engineers claimed to be able to build security structures that could produce predictable, low levels of default even when the underlying loans were riskier than the traditional prime mortgage.

Regulators, too, fell victim to the combination of cognitive hubris and radical ignorance. They believed in the quality of bank risk management using the new tools.⁴ They also believed in the effectiveness of their own rules and practices.

A common post-crisis narrative is that banking was deregulated in the Reagan-Greenspan era. Some pundits make it sound as if regulators behaved like parents who hand their teenagers the keys to the liquor cabinet, leave for the weekend, and say, “Have a good time.” In fact, regulators believed that they had stronger regulations in place in 2005 than they did in the pre-Reagan era.

—Before 1980, mortgage loans held by banks were illiquid assets subject to considerable interest-rate risk. These problems were alleviated by the shift toward securitization.

—Before 1980, insolvent institutions were opaque because of book-value accounting. This problem was addressed with market-value accounting, which enabled regulators to take more timely corrective action at troubled institutions.

—Before 1980, banks had no formal capital requirements, and there were no mechanisms in place to steer banks away from risky assets. This problem was addressed with the Basel capital accords (formally adopted in 1988), which incorporated a risk-weighted measure of assets to determine required minimum capital. In the 2000s, these risk weightings were altered to penalize banks that did not invest in highly rated asset-backed securities.

Thus, it was not the intent of regulators to loosen the reins on banks. On the contrary, from the regulators’ point of view, it was the environment prior to 1980 that amounted to leaving the teenagers with the keys to the liquor cabinet. The post-1980 regulatory changes were believed to be in the direction of tighter supervision and more rational controls.

It turned out that the regulators were radically ignorant of the consequences of their decisions. Securitization introduces principal-agent problems into mortgage lending, as the loan originator’s interest in obtaining a fee for underwriting a closed loan conflicts with the interest of investors in ensuring that borrowers are properly screened. These conflicts proved to be more powerful than imagined. Market-value accounting makes financial markets steeply procyclical, because in a crisis a drop in market values forces beleaguered banks to sell assets, creating a vicious downward spiral. Finally, the risk-based capital rules helped drive the craze for financial engineering and misleading AAA ratings.
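To make the capital arithmetic concrete, here is a minimal sketch in Python, assuming the commonly cited Basel-style figures: an 8 percent minimum capital ratio, a 50 percent risk weight for whole residential mortgages, and a 20 percent weight for highly rated asset-backed securities. The figures and the helper function are my own illustrative assumptions, not a claim about any particular bank or rulebook.

    # A rough sketch of risk-weighted capital arithmetic. The 8% minimum
    # ratio and the 50%/20% risk weights are commonly cited Basel-style
    # figures, used here purely for illustration.
    MIN_CAPITAL_RATIO = 0.08

    def required_capital(assets):
        """assets: a list of (dollar_amount, risk_weight) pairs."""
        risk_weighted_assets = sum(amount * weight for amount, weight in assets)
        return MIN_CAPITAL_RATIO * risk_weighted_assets

    # $100 of whole mortgages vs. $100 of highly rated mortgage-backed securities:
    print(required_capital([(100, 0.50)]))  # 4.0 -> $4.00 of capital required
    print(required_capital([(100, 0.20)]))  # 1.6 -> $1.60 of capital required

On these illustrative weights, repackaging $100 of whole loans into AAA-rated securities cuts the required capital from $4.00 to $1.60, which is precisely the direction in which the altered weightings pushed banks.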

Political Disagreement

Political disagreement can be explained using the theories of cognitive hubris and radical ignorance. The basic idea is that nobody has a grasp on capital-T truth, but each of us believes that our own map of the world is highly accurate. When we encounter someone who holds a similar map, we think, “That guy knows what he is talking about.” When we encounter someone who holds a different map, we think, “That guy is an idiot.” When you overestimate the accuracy of your own map, it is very difficult to explain the existence of people with different maps, other than to impugn their intelligence or their integrity.

A metaphor for this may be a topographically complex terrain, which none of us can see in its entirety. Each of us is trying to find the highest mountain peak in the terrain, representing the capital-T truth.

Unable to look down at the entire terrain, each of us follows what mathematicians call a “hill-climbing algorithm.” We make small probes in the area right around us, and when the terrain slopes upward, we climb in that direction. We repeat this process until the probes in every direction slope down. Then we conclude that we are at the top.

The weakness of hill-climbing algorithms is that they can get stuck at a local maximum. Instead of finding the highest peak, you stop when you reach the top of one particular hill. From that vantage point, every direction appears to lead downward, so you do not move.
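For readers who like to see the algorithm run, here is a minimal sketch in Python. The one-dimensional terrain, the step size, and the starting point are all my own illustrative assumptions; the point is only to watch a climber halt on a lower hill.

    from math import exp

    def terrain(x):
        # Two hills: a lower peak near x = -2 (height about 1) and the
        # true summit near x = +2 (height about 2). Purely illustrative.
        return 2 * exp(-(x - 2) ** 2) + exp(-(x + 2) ** 2)

    def hill_climb(x, step=0.1):
        # Probe a small step in each direction and move uphill; stop when
        # every probe slopes down.
        while True:
            uphill = max([x - step, x + step], key=terrain)
            if terrain(uphill) <= terrain(x):
                return x  # every direction slopes down: we conclude we are at the top
            x = uphill

    print(hill_climb(-3.0))  # stops near -2.0, the lower hill; never finds +2.0

Restarting the climb from many different points is the standard remedy for local maxima, and it maps neatly onto the argument here: the view from your own hilltop is not evidence that no higher ground exists.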

When two ideological opponents wind up on different hilltops, neither can believe that the other has sincerely arrived at a different conclusion based on the evidence. As Friedman puts it,

Consider the most reviled pundit on the other side of the political spectrum from yourself. To liberal ears, a Rush Limbaugh or a Sean Hannity, while well informed about which policies are advocated by conservatives and liberals, will seem appallingly ignorant of the arguments and evidence for liberal positions. The same goes in reverse for a Frank Rich or a Paul Krugman, whose knowledge of the “basics” of liberalism and conservatism will seem, in the eyes of a conservative, to be matched by grave misunderstandings of the rationales for conservative policies.⁵

Indeed, our cognitive hubris is so strong that, according to David McRaney, people believe they understand other people better than others understand themselves. He calls this phenomenon “asymmetric insight.”⁶

The illusion of asymmetric insight makes it seem as though you know everyone else far better than they know you, and not only that, but you know them better than they know themselves. You believe the same thing about groups of which you are a member. As a whole, your group understands outsiders better than outsiders understand your group, and you understand the group better than its members know the group to which they belong.

In our context, this would mean that liberals believe that they understand better than conservatives how conservatives think, and conservatives believe that they understand better than liberals how liberals think. According to McRaney, such beliefs have indeed been found in studies by psychologists Emily Pronin and Lee Ross at Stanford, along with Justin Kruger at the University of Illinois and Kenneth Savitsky at Williams College.

Implications

The cognitive biases documented by Kahneman have been interpreted by a number of thinkers, including Kahneman himself, as providing a justification for government intervention. After all, if people are far from the well-informed, rational calculators assumed in economic models, then presumably the classical economic analysis underlying laissez-faire economic policy is wrong. Instead, it must be better to “nudge” people for their own good.⁷

However, I draw different implications from the hypothesis of cognitive hubris combined with radical ignorance. If social phenomena are too complex for any of us to understand, and if individuals consistently overestimate their knowledge of these phenomena, then prudence would dictate trying to find institutional arrangements that minimize the potential risks and costs that any individual can impose on society through his own ignorance. To me, this is an argument for limited government.

Instead of using government to consciously impose an institutional structure based on the maps of cognitively impaired individuals, I would prefer to see institutions evolve through a trial-and-error process. People can be “nudged” by all manner of social and religious customs. I would hope that the better norms and customs would tend to survive in a competitive environment. This was Hayek’s view of the evolution of language, morals, common law, and other forms of what he called spontaneous order. In contrast, counting on government officials to provide the right nudges strikes me as a recipe for institutional fragility.

If Kahneman is correct that we have an “almost unlimited ability to ignore our ignorance,” then all of us are prone to mistakes. We need institutions that attempt to protect us from ourselves, but we also need institutions that protect us from one another. Limited government is one such institution.

Footnotes

1. Daniel Kahneman, Thinking, Fast and Slow, p. 201.

2. I have had difficulty tracking down a citation for this. I thought I saw it on a video at the edge.org website. A Google search reveals a number of references to a story, “Irrational Everything,” by Guy Rolnick in the Israeli newspaper Haaretz around January 1, 2008, but the story itself can no longer be found online. As for the original story itself, one version, given by organizational theorist Karl Weick, indicates that the soldiers were Hungarian, and it comes from a poem by Miroslav Holub. See http://leaderswedeserve.wordpress.com/tag/karl-weick/.

3. Much of Friedman’s critique can be found in Engineering the Financial Crisis, co-authored with Wladimir Kraus. The development of his point of view can be traced through many articles that have appeared in Critical Review, a journal that Friedman founded and edits. For example, see “Democratic Competence in Normative and Positive Theory: Neglected Implications of ‘The Nature of Belief Systems in Mass Publics,’” Critical Review 18(1–3): i–xliii (2006), or “Capitalism and the Jewish Intellectuals,” Critical Review 23(1–2): 169–94 (2011).

4. See the statements that were made at the time by Federal Reserve Chairman Ben Bernanke and other regulators, as documented in my paper “Not What They Had in Mind: A History of Policies that Produced the Financial Crisis.”

5. Friedman, “Democratic Competence,” p. vi.

6. David McRaney, “The Illusion of Asymmetric Insight.”

7. See Richard Thaler and Cass Sunstein, Nudge, a book Kahneman praises.

 
