On Loving the Constitution and Needing to Rewrite It

April 16th, 2014

              Peter Sagal is a funny guy.  He has established that fact over the years as the host of National Public Radio’s weekly nationwide broadcast of “Wait Wait . . . Don’t Tell Me.”  The show has garnered many devoted fans, with a sizeable number in evidence when he addressed a packed house at the Mondavi Center (on the campus of U.C. Davis) last week.  His radio show, for those who haven’t heard it, asks contestants questions about current events, and it is often hilarious, largely because of Mr. Sagal’s ability to make seemingly boring topics humorous, if not downright interesting. 

              Mr. Sagal chose to put aside talk of his radio show at his Mondavi presentation, instead focusing on another recent project, one he did for the Public Broadcasting Service.  In a four-part PBS series last year, Mr. Sagal explored the understanding of and devotion to the Constitution that most Americans have.  The series featured Mr. Sagal touring various geographic regions of the country on a motorcycle, which, he explained in his talk, was actually flown from site to site, since he couldn’t take the time to drive back and forth across the country for the series.

              In any event, after regaling his audience with humorous anecdotes about the motorcycle and how the folks at PBS actually decided to transport it, he described some of the conversations he had with the people he met on the series about their attitudes toward our Constitution.  While nothing he revealed was all that earthshaking (many Americans know precious little about the contents of the Constitution and most think it says or at least means whatever they want it to say or mean), his talk did cause me to contemplate what our Constitution does signify for our country and why it is important, even if it is largely out of date and in need of significant alteration/amendment/reconsideration.

              Okay, so let me take those points one at a time.

              What does our Constitution signify for our country and why is it important?  According to Mr. Sagal, and I have every reason to believe he is correct, many Americans think of our Constitution as unique and special (it really isn’t in comparison to those of other countries) and as embodying inalienable rights that other constitutions do not accord to their country’s citizens (it doesn’t particularly distinguish itself there either). 

              What the Constitution does provide is a way for Americans to feel a sense of pride and trust in the system of government that the country has and to rely, essentially, on the rule of law that it provides for.  In this regard, it isn’t so much what the Constitution actually says or means but the fact that it exists that is important.  And flowing from its existence is a sense of patriotic spirit that, apart from the issue of slavery that led to the Civil War, has allowed the country to endure without violent attempts to overthrow the government or to otherwise disrupt the American way of life, such as it is. 

              In other words, the real significance and value of the Constitution is that it binds together the disparate and diverse elements of the country and unifies them into a true union of states.

              Now to the more intriguing question:  How is it out-of-date and in what ways does it need to be altered/amended/reconsidered?  The standard purpose of any country’s constitution is to present a definitive statement of the rights accorded to its citizens and the restrictions placed on its government.  In that regard the U.S. Constitution is a well-constructed and properly formulated document.  But it was written when the country was a very different place than it is now, with a culture and economic structure totally foreign to the way Americans live and engage in commerce today.

              The ability to amend the Constitution could be a way to address antiquated measures (the Electoral College and the Second and Third Amendments just to name a few), but amending the Constitution is no easy task, and is rarely accomplished (most recently in 1992, when the Twenty-Seventh Amendment, concerning congressional pay, was ratified; the last amendment of real substance lowered the voting age to 18 in 1971).  So, instead, the country relies on the Supreme Court to keep the Constitution relevant to our times.  And that approach is fine if we understand the significance of allowing nine jurists (actually any five of the nine) to make those decisions. 

              But much of the framework that the founders envisioned for the country no longer makes sense.  The U.S. Senate is a perfect example of the long-term deficit of the Great Compromise, wherein the same number of votes are now allotted to a state with a population of less than 600,000 (Wyoming) as are given to a state with almost 40 million (California).  And the other branch of Congress, the House of Representatives, has become a bastion for radicals of both ideological stripes because of the way that body is constructed (with each state’s allocation of districts gerrymandered to create “undefeatable” seats for  each party in the state’s Congressional delegation).

              The result is a dysfunctional legislative branch of government that cries out for reform but that cannot produce that reform itself.  And much the same might be said of the executive branch, with many believing that the presidency has been allowed to assume far too much control over the way foreign policy (military encounters in particular) is conducted.

              And so the case can be made that the country needs a new Constitutional Convention (or its equivalent), not to destroy the essence of “life, liberty and the pursuit of happiness” (which are not guarantees of the Constitution, by the way, that phrase coming from the Declaration of Independence), but rather to make the document directly relevant to the times in which we live.

              The document that resulted from such a process might not look all that different from the one we have now, but it probably wouldn’t provide for the election of presidents by something called an Electoral College or leave it to nine jurists to decide (by a vote of 5 to 4) whether there is an individual right to own a gun.


As a Behavioral Science, Economics is All about Politics

April 11th, 2014

              Much is made of the science of economics by folks who spend their whole lives in serious study of the vagaries of local economies.  Basic “laws” that would be called theories in the real world of scientific study are promoted as if they are absolutes.  The “law” of supply and demand, for example, is said to dictate the price of goods in an open marketplace, unless the “law” of inelasticity is superimposed, in which case, the first law may be less a law and more a factor.

              In the real world of science, immutable laws can be deduced from experiments and observation.  Of course, they aren’t called laws for the most part, but are instead referred to as “theories,” since they are always theoretically subject to contradiction by more deduction (from more experimentation and observation).  But in actuality the theories of real science are far more absolute than the laws of economics.

              Economics is a behavioral science.  As such, it studies the way human behavior affects the flow of money and the acquisition of wealth.  But all behavioral sciences are subject to the vagaries of human behavior, which are problematic at best, as anyone who has studied his or her own behavior, even in passing fashion, can attest.  In a word, humans are fickle.  They like what they like until they don’t like it, at which point they act as if they never liked it or always hated it, not even recalling that just five minutes ago they professed their love for it.  And the “it” in that last sentence could be anything or anyone, which is why trying to predict how groups of individuals will react to ever-changing forces, such as supply and demand, becomes, well, unscientific.

              Still, we want to understand and predict how economies can be made more functional, and so the “laws” are relied upon in making decisions that might affect those economies.  Or at least they would be in a world free of ideologies.  But, of course, being humans, we cannot deny our penchant to have opinions on how things should be, and so we ignore the “laws” when they don’t suit our ideological perspectives and rely on conflicting “laws” when doing so furthers those ideological perspectives.

              In the earliest hunter-gatherer societies, ideologies were pretty simple.  They all revolved around the need to survive.  Thus, everyone in the small villages that consisted of groups with similar ethnicities (diversity was not even a word, let alone a goal) strove, each in their own way, to find food or to do the other things necessary to stay alive.  Those who couldn’t hunt gathered.  Those who could do neither found other ways to support the common goal of survival.  Perhaps they prepared the meals or cared for the young and the sick or scared off wild animals or kept the fires burning.  Everyone, in other words, found a way to help the community, and everyone in it, survive.

              At some point, however, hunting and gathering got easier, and some individuals in the community became thinkers and artists and writers.  Others began to make things, things that might be desired by other members of the community.  And as these non-hunter/gatherers developed their own identities, the members of their village either valued them for what they contributed or rejected them for not working toward the common goal of survival. 

              And then, too, as means of travel were developed, one village would discover other lands and other villages, some of which offered assets that their own village lacked.  Some of those foreign villages were amenable to trading assets; others were resistant.  Thus did money and militaries become meaningful, money as the medium of exchange for trading assets and militaries as the means of conquering those villages that were resistant to trading them (or as the means of defending against marauding villages).

              As further advances developed, those early villages became more complex and more diverse in their needs and interests.  Money, itself bereft of intrinsic value, now became valuable for what it could bring to the community or to those members of the community who had it.  And military might, as a means of defending the village or otherwise advancing the village’s interests, became a valued component of a viable community, one that required economic support, with increased value accorded to those who served in military capacities.

              At some point as these “advances” occurred, greed became a recognized human trait.  The desire to have more may have been part of the earliest understanding that those early thinkers attained.  Or maybe it was just a natural outgrowth of the realization that surviving could take place on many levels of subsistence.  Some folks would only be fed enough to survive while others could actually have feasts on occasion.

              And so, an individual’s worth became measurable by what he or she produced or otherwise contributed to the village.  And those who produced more, especially those who produced more desirable commodities, as well as those who were for other reasons more valued, got more in return, while those who produced less or were otherwise less valued were given less in return. 

              In some of these early developing communities, the needs of the less fortunate were noted and those individuals were provided for in some measure.  In others, they were not noticed or were not considered worthy of special attention.  Perhaps the earliest ideologies formed from these pre-societal attitudes.

              The foregoing historical review is undeniably simplistic, but it does speak to the development of economics as a behavioral science, even as it suggests the difficulty of developing any real “laws” that govern it.  Fast forward some ten thousand years from those earliest hunter-gatherer villages and you have the economic reality of today, wherein societies superimpose their collective ideological judgments on the basic “laws” that would otherwise control the local economies of a given society.

              Viewed in this light, politics can only be seen as an attempt to impose ideologies for the benefit of one or another of the groups that comprise the society.  Should the economy provide the greatest support for those who produce the most or for those who need the most?  Should it provide the greatest benefits for those who work the hardest or for those who are the most fortunate (by accident of birth or by good luck in the investments they make)?  Or should it leave everything to chance, so that those who suffer from uncontrollable “acts of God” are left to their own devices while those who escape such misfortune reap the benefits of that happenstance?

              There are no scientifically right or wrong answers to these questions.  And the morality that controls the answers will be as subjective as the human intellect is at contemplating the subject in the first place.

              In the end, it isn’t a matter of whether we live in a village.  Rather, it’s what kind of a village we want to live in.

Why Even an Opening Day in Australia Can’t Destroy the Wonder of a New Baseball Season

April 3rd, 2014

              In case you didn’t notice, baseball’s annual opening day was a little unusual this year.  Oh, I’m not talking about the one most fans greeted with joy earlier this week.  That day was just fine, even if one score (14 – 10) looked like it was a football result from last November.

              No, the opening day I’m referring to took place ten (or eleven, depending on how you construe the international date line) days earlier in Sydney, Australia, where, for reasons that only Bud Selig can explain, the Dodgers and Diamondbacks were required to play two games that counted.  To say it was weird doesn’t do the bizarre scheduling justice.  Consider that both teams had to give up almost two weeks of normal spring training games to make the trip and play the games.  Consider that Australia has virtually no fan base of note for the game (certainly not one that compares, for example, to Latin America or the Far East, to name two far riper markets worthy of exploration).  Consider that the rules regarding eligible players had to be drastically amended for the games (with 28 players potentially active, instead of the normal 25).  Consider that even with all the hoopla and buildup, neither game was a sellout (average attendance was 38,000, about what the Dodgers draw at home on a bad night), and that the first of the two aired at 4:00 in the morning in New York and across the rest of the East Coast.

              And finally, consider that since most fans expect the season to open with fanfare and mass media coverage, as was the case this week, all but the most die-hard Dodger and D-Back fans probably didn’t even know the games counted, if they knew they had been played at all.  (It also didn’t help that the games were scheduled during the second big weekend of March Madness, which tends to take all the oxygen out of other sports fare anyway.)

              But no matter; Selig, who is, thankfully, retiring after this season, apparently wants to go out with a bang, and this sideshow was his idea of at least an overture to his swan song.  For the record, the Dodgers won both games, meaning they led the league for ten days without even needing to scoreboard watch.  And when the D-Backs lost their home opener to the Giants this past Monday, they had the distinction of being 0-3 after one day of the real baseball season.

              Hopefully the sport will quickly move past this silliness as the real games take hold of the nation’s attention.  And fans definitely will have some new things to contemplate as the home runs and strike outs pile up.  Here’s a brief summary of what is new this year, along with a prediction or two.

              For openers, instant replay has finally arrived full force.  After allowing all too many wrong calls to decide the outcome of critical games for far too long, the major leagues will finally allow state-of-the-art technology to be used to check on controversial calls and to correct those that were clearly wrong.  To be sure, the rules for the use of the “on further review” concept will be experimental for the first few years.  This year, managers get one challenge per game, with a second allowed only if the first one is successful. 

              But the fear that the procedure will be too time-consuming seems ill-founded, as most replays will be resolved in no more time than managers used to spend yelling and screaming about bad calls (before being ejected by abused umpires) in the old days, i.e., last year.

              The other big rule change is an attempt to reduce the number of collisions (between runners trying to score and catchers trying to prevent them from scoring) at home plate.  As ever-larger salaries are paid to these super athletes (more on that point in a moment), the owners are finally seeing that injuries that threaten careers are not good for their bottom line.  And so, this year, a new set of rules will prohibit runners from crashing into the catcher (to dislodge the ball that might be in or approaching his mitt) and catchers from blocking home plate from oncoming runners, unless they already have the ball in their possession. 

              How this rule will work in actual game-deciding situations remains to be seen, but the guess here is that the players will find ways to enforce it themselves once they get used to the idea of not sacrificing their bodies for one run (especially since that run will quickly be taken away or added, depending on which player, the runner or the catcher, is the offending party).

              As for the salaries that are now being paid, it’s probably safe to say that we are in completely insane territory.  Witness Clayton Kershaw, a great pitcher, no doubt, but still only a guy who plays once every five days, getting seven years at $30-plus mill a year.  Witness Robinson Cano and Miguel Cabrera (superstars both, but each already over the age of 30) receiving ten-year contracts that pay them similar annual amounts.  Giving any pitcher a contract as long and costly as Kershaw’s is nuts (and, sure enough, Kershaw is already on the disabled list and not expected to pitch until June).  And no one can expect Cano and Cabrera to be anything but shells of their former selves in the out years of their contracts (in their late 30s and beyond).  But the owners have the money (courtesy of gigantic television contracts and exploding attendance figures) and the players are represented by cagey agents who are going to make sure their clients get their share of the pie.

              As for the game itself, it will remain the great clockless wonder it has always been.  It will still take 27 outs to end one and still take more runs scored than the other team to win one.  Predictions at the beginning of a 162-game season (with injuries and trades unforeseeable) are always ridiculous, but so are most ardent fans.  And since I most definitely am one of those, I’ll make a couple.  I see the Baltimore Orioles and Miami Marlins as the surprise teams in both leagues, with the Orioles making the playoffs with over 90 wins, and the Marlins pushing the top teams (Washington and Atlanta) in the NL East before ending slightly above .500.  In the end, I see a St. Louis-Tampa Bay World Series, with the Cardinals winning it all. 

On Uncovering the Mysteries of the Universe: Why It Matters (or Maybe Doesn’t)

March 28th, 2014

              Brian Greene is one of those people who study the universe in ways that most of the rest of us don’t even have the ability to contemplate.  He is a top theoretical physicist who, for over 30 years, has been considering what kind of very small stuff might be the most basic of items that make up everything else.

              Now for most of us, if we are talking small in the scientific world, an atom is the baseline item.  And we probably also remember from our high school science classes that atoms consist of protons and neutrons that together constitute the atom’s nucleus, and that even smaller particles, called electrons, circle around that nucleus.  We may also recall that it’s the number of electrons a particular atom has that allows the atom to join with other atoms to form molecules, which, when joined together in sufficient quantity, can take the shape of material things that we recognize and use in the world we live in.

              Okay, that’s what most of us may be able to recite as our understanding of the stuff that makes up our material world. 

              Mr. Greene, who addressed a packed house at the Mondavi Center (on the Davis campus of the University of California—commonly known as U.C. Davis) last week, goes a good bit smaller in his research.  Principal among his studies is something called “string theory,” which postulates that the real basic stuff that the universe is made of is smaller even than electrons, smaller even than quarks, which are now thought to be the elemental particles that make up protons and neutrons.

              String theory, which Mr. Greene explained in elemental terms in his Mondavi talk, hypothesizes that the smallest substantive matter takes the shape of string-like objects that vibrate in various forms and to differing degrees, thereby giving all objects the ability to inter-relate with each other while still being independent entities. 

              Of course, I’m barely scratching the surface of what consumes Greene and his colleagues with this description (which may not even accurately state the basic thrust of the theory).  But Mr. Greene is the kind of lecturer who makes the stuff he studies, stuff that would be almost impossible to comprehend otherwise, both interesting and entertaining. 

              And so, in the course of his talk, he explained that the universe may be infinite (in my ignorance, I’ve always assumed it was), that there may be an infinite number of universes (I can’t even begin to understand what that means), that these strings may exist in all of them but take entirely different shapes and thereby create entirely different types of existence, and that, amazingly, they may be capable of existing in dimensions beyond the three (up/down, left/right, in/out) that we identify and assume to be all there is.  

              If this sounds like Twilight Zone stuff, it’s probably no accident.  Rod Serling dramatized science fiction in ways that were not always plausible but were most definitely possible.  His tales were imaginative philosophical journeys of the mind into the scientific territory that Mr. Greene and his colleagues explore. 

              There are philosophical dimensions as well to the science that consumes Mr. Greene.  Thus, he explained that what we now know may only be a matter of what we are currently capable of perceiving.  As an example, he explained that if the universe is expanding at an ever-increasing rate, such that the neighboring galaxies are moving ever farther apart, it may be that a trillion or so years from now, our descendants will not be able to see anything beyond our own galaxy.  Scientists in that far-off world will thus have no reason (other than the records from previous generations) to believe that there is anything more to their universe than the stars of the lone galaxy they would then be able to see.

              In answer to a question from the audience, he also admitted that he did not believe in free will (his lack of belief in that concept presumably the result of studying the inter-relationships of all matter to the point of accepting that everything in existence is controlled, and thus predictable, by the laws of physics).  He joked on this point that he very much desired not to answer the question, but that he could not refrain from doing so.  But his point was clear: the world as we know it could not be other than what it is, what it has become, and what it will continue to become, and all of us who are part of it cannot be other than what we are, what we have become, and what we will continue to become.

              This particular thought can bedevil believers and non-believers alike if it is dwelt on too much.  The idea of free will is fundamentally integral to our understanding of how our species relates to its environment.  We base our legal system on it, defining criminal behavior by it and otherwise regulating our collective lives on the assumption that choices are made, choices that flow from a freedom every human has to decide or not decide to do or not do something or nothing.

              But if free will is nothing more than a human construction that appeals to our sense of how the world should work, if it is no more than an idea that has been implanted in us by the countless generations of our forebears, and if, indeed, the reality is quite the opposite, that none of us are free to do anything other than what we do because the laws of the universe allow no room for truly independent action, then how might our view of existence be altered?

              It’s an imponderable, to be sure, and hardly something to lose any sleep over.  We are who we are, whether by chance or design.  We do what we do because we feel free to decide so to do, whether we really have a choice or only think we do. 

              In the end, each life matters, if for no other reason than that it is destined to matter and in so mattering to affect everything else, just as each star and asteroid and electron and quark matter, because they, like us, cannot do otherwise.