Higher Health Care Costs: Why Obamacare isn’t the Real Problem

April 22nd, 2014

              I had to laugh at the headline in last week’s New York Times’ business section that read, “Health Care Spending’s Recent Surge Stirs Unease.”  The article describes the concern among independent observers regarding the ultimate impact that the Affordable Care Act (Obamacare) will have on health care costs.  It may turn out, claim the observers, that the cost of health insurance will actually increase over time as the total effect of Obamacare takes hold.

              Of course, such a result would be a disaster for the president’s legacy, not to mention for his party’s standing with the American people.  Obamacare was sold to the public as a way to provide health insurance to the millions who had not been covered in the past and as a guarantee of several protections (principally coverage for pre-existing conditions and coverage for young adults on their parents’ plans).  And the main point that won over skeptics was that it was a sure-fire way to stall the rapid increase in health care costs.

              But if the administration was really serious about cutting the cost of health care, Obamacare wasn’t the plan to do it because only a government-controlled plan can accomplish that result, and for all that the Affordable Care Act is, it most definitely is not a government-controlled plan for the delivery of universal health care.  Instead, it’s a plan that requires for-profit insurance companies to provide insurance to everyone (the very sick and the very old along with everyone else). 

              The idea is that if everyone buys private health insurance (everyone here meaning the very healthy young as well as the rest of the population), the premiums companies will need to charge will be lower than they would otherwise be because, in effect, the young would be paying to cover the old and the very sick.

              It’s a nice idea in an all-for-one, one-for-all kind of way (assuming you find that kind of approach consistent with your philosophical/sociological view of what’s best for society) if it can produce the desired result of lowered premiums and, thereby, lower health care costs.

              Ah, but there’s a problem with the plan: Many young people aren’t buying it.  And if the healthy don’t buy in, the insurance companies are going to set the rates for insurance at higher levels (to cover the higher costs of treatment for the very sick and very old whom they are required to cover).  And why shouldn’t they do just that?  They are for-profit companies that are in business to make money for their shareholders by providing a service to their customers.  And isn’t that the way for-profits are supposed to operate in a capitalistic economy?
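              A quick numeric sketch may make the stakes of that buy-in problem concrete.  What follows is a minimal illustration in Python (with entirely made-up enrollment counts, claim costs, and markup, not actual actuarial figures) of how the premium a community-rated insurer must charge rises when healthy enrollees stay out of the pool:

# A rough, hypothetical illustration of community rating: when healthy people
# opt out, the remaining pool is sicker on average, and the break-even premium
# for everyone who stays rises.

def required_premium(groups, markup=0.15):
    """Average annual premium needed to cover expected claims for the pool,
    plus an assumed overhead/profit markup."""
    enrollees = sum(n for n, _ in groups)
    expected_claims = sum(n * cost for n, cost in groups)
    return expected_claims / enrollees * (1 + markup)

# Purely illustrative numbers: (number of enrollees, expected annual claims each).
healthy_young = (700, 1_500)
older_or_sicker = (300, 12_000)

full_pool = [healthy_young, older_or_sicker]        # everyone buys in
shrunken_pool = [(200, 1_500), older_or_sicker]     # most of the young opt out

print(f"Premium with broad participation: ${required_premium(full_pool):,.0f}")      # ~$5,348
print(f"Premium if the healthy opt out:   ${required_premium(shrunken_pool):,.0f}")  # ~$8,970

              The numbers are invented; the direction of the effect is the point, and it is exactly the dynamic the insurers (and the observers quoted in the Times piece) worry about.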

              Okay, so let me make a couple of tangential points here before I get back to my thesis.

              First, Obamacare is not socialism.  It’s a form of regulated capitalism, as is just about everything that exists in our economy.  We don’t have government ownership of enterprise.  That’s socialism.  We do have a regulated economy that imposes requirements on anyone or any entity that seeks to do business in it.  The health insurance industry is more regulated under Obamacare than it was before that act became the law of the land, but it is still a for-profit industry that is free to make a buck and is largely unfettered in its ability to do so (principally by setting rates for the insured it covers).

              Second, Obama opted out of a socialist approach to universal health care almost immediately on taking office when he took a single-payer system off the table before a bill encompassing that approach could even be introduced.  In a single-payer system, the government would supplant the for-profit insurance companies and, essentially, run the country’s health care system.  That’s pretty much what Medicare is.  Under Medicare, health care providers (i.e., doctors and hospitals) get paid for services performed under government-established rates.  And they can’t charge more for those services than the government will allow.  Thus, the elderly in our country are provided basic health care by the government, which collects taxes from everyone for the provision of that entitlement to the elderly.

              A universal single-payer plan would have eliminated the need for private health care insurance companies.  It would have taken the profit out of the business of providing health care insurance to everyone in the country.  It would have set payments for services rendered and imposed limits on what services could be provided and otherwise bureaucratized the business of health care in the United States.  It would, in that sense, have established “death panels” (a politicized euphemism to connote bureaucrats who would decide when a medical service was warranted and when one wasn’t).  Of course, insurance companies have their own “death panels” in the form of self-regulated policies that determine what medical procedures will be covered and what procedures won’t be.  Obama, notwithstanding the claims from the political right that he is a socialist, opted not to push for such a single-payer plan.  Instead, he went for a “half-loaf” solution: the regulated-capitalism approach that Obamacare encapsulates.

              Now, back to my point.  In the long run the real concern about health care costs has relatively little to do with the effect of Obamacare.  Yes, there could have been savings in a single-payer plan, but if such a plan could never have gained Congressional approval (as appeared from the outset to be the case, what with an intransigent Republican opposition that would have cowed enough Democrats into abandoning support for one), Obama may have been right in concluding that half a loaf was better than none.  In any event, the real cause of increased health care costs, in the long run, isn’t going to be insurance companies that tack on their ten (or fifteen or twenty-five) percent markup to secure their profits.

              The real cause of increased health care costs is going to be science, as in the scientific breakthroughs that will become increasingly available to extend life and to cure diseases.  Think about heart disease, for example.  Fifty years ago, people with advanced heart disease had no real treatment options; in essence, they had a death sentence.  Now those same people have open heart surgery and heart transplants readily available to extend their lives (if not cure their diseases).  Those medical advances are great, but they cost a lot of money.

              Similarly, many cancers can now be fought successfully, if not fully cured, thereby extending the lives of those afflicted.  And those treatments/cures are also expensive.  The same is true of other diseases, once life-threatening or severely disabling, that either already have expensive treatments or soon will.

              These life-extending and quality-of-life-enhancing medical procedures are only going to become more prevalent as science marches on.  And would we want it otherwise?  Certainly not.  We all want to live as long and as productively as possible.

              So, as the right and left make their political arguments about the pros and cons of Obamacare and on whether it is cutting health care costs or adding to them, bear this thought in mind.  One way or another, we’re going to pay more for the cost of our health care.  There’s just no getting around it in either a capitalist or a socialist model.

 

On Loving the Constitution and Needing to Rewrite It

April 16th, 2014

              Peter Sagal is a funny guy.  He has established that fact over the years as the host of National Public Radio’s weekly nationwide broadcast of “Wait Wait . . . Don’t Tell Me.”  The show has garnered many devoted fans, with a sizeable number in evidence when he addressed a packed house at the Mondavi Center (on the campus of U.C. Davis) last week.  His radio show, for those who haven’t heard it, asks contestants questions about current events, and it is often hilarious, largely because of Mr. Sagal’s ability to make seemingly boring topics humorous, if not downright interesting. 

              Mr. Sagal chose to put aside talk of his radio show at his Mondavi presentation, instead focusing on another recent project, one he did for the Public Broadcasting Service.  In a four-part PBS series last year, Mr. Sagal explored the understanding of and devotion to the Constitution that most Americans have.  The series featured Mr. Sagal touring various geographic regions of the country on a motorcycle, which, he explained in his talk, was actually flown from site to site, since he couldn’t take the time to drive back and forth across the country for the series.

              In any event, after regaling his audience with humorous anecdotes about the motorcycle and how the folks at PBS actually decided to transport it, he described some of the conversations he had with people during the series about their attitudes toward our Constitution.  While nothing he revealed was all that earthshaking (many Americans know precious little about the contents of the Constitution, and most think it says, or at least means, whatever they want it to say or mean), his talk did cause me to contemplate what our Constitution does signify for our country and why it is important, even if it is largely out of date and in need of significant alteration/amendment/reconsideration.

              Okay, so let me take those points one at a time.

              What does our Constitution signify for our country and why is it important?  According to Mr. Sagal, and I have every reason to believe he is correct, many Americans think of our Constitution as unique and special (it really isn’t in comparison to those of other countries) and as embodying inalienable rights that other constitutions do not accord to their country’s citizens (it doesn’t particularly distinguish itself there either). 

              What the Constitution does provide is a way for Americans to feel a sense of pride and trust in the system of government that the country has and to rely, essentially, on the rule of law that it provides for.  In this regard, it isn’t so much what the Constitution actually says or means but the fact that it exists that is important.  And flowing from its existence is a sense of patriotic spirit that, apart from the issue of slavery that led to the Civil War, has allowed the country to endure without violent attempts to overthrow the government or to otherwise disrupt the American way of life, such as it is. 

              In other words, the real significance and value of the Constitution is that it binds together the disparate and diverse elements of the country and unifies them into a true union of states.

              Now to the more intriguing question:  How is it out of date and in what ways does it need to be altered/amended/reconsidered?  The standard purpose of any country’s constitution is to present a definitive statement of the rights accorded to its citizens and the restrictions placed on its government.  In that regard the U.S. Constitution is a well-constructed and properly formulated document.  But it was written when the country was a very different place than it is now, with a culture and economic structure totally foreign to the way Americans live and engage in commerce today.

              The ability to amend the Constitution could be a way to address antiquated measures (the Electoral College and the Second and Third Amendments, just to name a few), but amending the Constitution is no easy task and is rarely accomplished (the voting age was lowered to 18 in 1971, and the most recent amendment, concerning congressional pay, was ratified in 1992).  So, instead, the country relies on the Supreme Court to keep the Constitution relevant to our times.  And that approach is fine if we understand the significance of allowing nine jurists (actually any five of the nine) to make those decisions.

              But much of the framework that the founders envisioned for the country no longer makes sense.  The U.S. Senate is a perfect example of the long-term defect of the Great Compromise, wherein the same number of votes is now allotted to a state with a population of less than 600,000 (Wyoming) as is given to a state with almost 40 million (California).  And the other branch of Congress, the House of Representatives, has become a bastion for radicals of both ideological stripes because of the way that body is constructed (with each state’s allocation of districts gerrymandered to create “undefeatable” seats for each party in the state’s Congressional delegation).

              The result is a dysfunctional legislative branch of government that cries out for reform but that cannot produce that reform itself.  And much the same might be said of the executive branch, with many believing that the presidency has been allowed to assume far too much control over the way foreign policy (military encounters in particular) is conducted.

              And so the case can be made that the country needs a new Constitutional Convention (or its equivalent), not to destroy the essence of “life, liberty and the pursuit of happiness” (which are not guarantees of the Constitution, by the way, that phrase coming from the Declaration of Independence), but rather to make the document directly relevant to the times in which we live.

              The document that resulted from such a process might not look all that different from the one we have now, but it probably wouldn’t provide for the election of presidents by something called an Electoral College or leave it to nine jurists to decide (by a vote of 5 to 4) whether there is an individual right to own a gun.

 

As a Behavioral Science, Economics is All about Politics

April 11th, 2014

              Much is made of the science of economics by folks who spend their whole lives in serious study of the vagaries of local economies.  Basic “laws” that would be called theories in the real world of scientific study are promoted as if they are absolutes.  The “law” of supply and demand, for example, is said to dictate the price of goods in an open marketplace, unless the “law” of inelasticity is superimposed, in which case the first “law” may be less a law and more a factor.

              In the real world of science, immutable laws can be deduced from experiments and observation.  Of course, they aren’t called laws for the most part, but are instead referred to as “theories,” since they are always theoretically subject to contradiction by more deduction (from more experimentation and observation).  But in actuality the theories of real science are far more absolute than the laws of economics.

              Economics is a behavioral science.  As such, it studies the way human behavior affects the flow of money and the acquisition of wealth.  But all behavioral sciences are subject to the vagaries of human behavior, which are problematic at best, as anyone who has studied his or her own behavior, even in passing fashion, can attest.  In a word, humans are fickle.  They like what they like until they don’t like it, at which point they act as if they never liked it or always hated it, not even recalling that just five minutes ago they professed their love for it.  And the “it” in that last sentence could be anything or anyone, which is why trying to predict how groups of individuals will react to ever-changing forces, such as supply and demand, becomes, well, unscientific.

              Still, we want to understand and predict how economies can be made more functional, and so the “laws” are relied upon in making decisions that might affect those economies.  Or at least they would be in a world free of ideologies.  But, of course, being humans, we cannot deny our penchant to have opinions on how things should be, and so we ignore the “laws” when they don’t suit our ideological perspectives and rely on conflicting “laws” when doing so furthers those ideological perspectives.

              In the earliest hunter-gatherer societies, ideologies were pretty simple.  They all revolved around the need to survive.  Thus, everyone in the small villages that consisted of groups with similar ethnicities (diversity was not even a word, let alone a goal) strove, each in their own way, to find food or to do the other things necessary to stay alive.  Those who couldn’t hunt gathered.  Those who could do neither found other ways to support the common goal of survival.  Perhaps they prepared the meals or cared for the young and the sick or scared off wild animals or kept the fires burning.  Everyone, in other words, found a way to help the community, and everyone in it, survive.

              At some point, however, hunting and gathering got easier, and some individuals in the community became thinkers and artists and writers.  Others began to make things, things that might be desired by other members of the community.  And as these non-hunter/gatherers developed their own identities, the members of their village either agreed that they should be valued for what they contributed or rejected them for not working toward the common goal of survival.

              And then, too, as means of travel were developed, one village would discover other lands and other villages, some of which offered assets that the first village lacked.  Some of those foreign villages were amenable to trading assets; others were resistant.  Thus did money and militaries become meaningful, money as the medium of exchange for trading assets and militaries as the means of conquering those villages that were resistant to trading (or as the means of defending against marauding villages).

              As further advances developed, those early villages became more complex and more diverse in their needs and interests.  Money, itself bereft of intrinsic value, now became valuable for what it could bring to the community or to those members of the community who had it.  And military might, as a means of defending the village or otherwise advancing the village’s interests, became a valued component of a viable community, one that required economic support, with increased value accorded to those who served in military capacities.

              At some point as these “advances” occurred, greed became a recognized human trait.  The desire to have more may have been part of the earliest understanding that those early thinkers attained.  Or maybe it was just a natural outgrowth of the realization that surviving could take place on many levels of subsistence.  Some folks would only be fed enough to survive while others could actually have feasts on occasion.

              And so, an individual’s worth became measurable by what he or she produced or was otherwise worth to the village.  And those who produced more and especially those who produced more desirable commodities, as well as those who were for other reasons more valued, got more in return, while those who produced less or were otherwise less valued were given less in return. 

              In some of these early developing communities, the needs of the less fortunate were noted and those individuals were provided for in some measure.  In others, they were not noticed or were not considered worthy of special attention.  Perhaps the earliest ideologies formed from these pre-societal attitudes.

              The foregoing historical review is undeniably simplistic, but it does speak to the development of economics as a behavioral science, even as it suggests the difficulty of developing any real “laws” that govern it.  Fast forward some ten thousand years from those earliest hunter-gatherer villages and you have the economic reality of today, wherein societies superimpose their collective ideological judgments on the basic “laws” that would otherwise control the local economies of a given society.

              Viewed in this light, politics can only be seen as an attempt to impose ideologies for the benefit of one or another of the groups that comprise the society.  Should the economy provide the greatest support for those who produce the most or for those who need the most?  Should it provide the greatest benefits for those who work the hardest or for those who are the most fortunate (by accident of birth or by good luck in the investments they make)?  Or should it leave everything to chance, so that those who suffer from uncontrollable “acts of God” are left to their own devices while those who escape such misfortune reap the benefits of that happenstance?

              There are no scientifically right or wrong answers to these questions.  And the morality that controls the answers will be as subjective as the human intellect is at contemplating the subject in the first place.

              In the end, it isn’t a matter of whether we live in a village.  Rather, it’s what kind of a village we want to live in.

Why Even an Opening Day in Australia Can’t Destroy the Wonder of a New Baseball Season

April 3rd, 2014

              In case you didn’t notice, baseball’s annual opening day was a little unusual this year.  Oh, I’m not talking about the one most fans greeted with joy earlier this week.  That day was just fine, even if one score (14 – 10) looked like it was a football result from last November.

              No, the opening day I’m referring to took place ten (or eleven, depending on how you construe the international date line) days earlier in Sydney, Australia, where, for reasons that only Bud Selig can explain, the Dodgers and Diamondbacks were required to play two games that counted.  To say it was weird doesn’t do the bizarre scheduling justice.  Consider that both teams had to give up almost two weeks of normal spring training games to make the trip and play the games.  Consider that Australia has virtually no fan base of note for the game (certainly not one that compares to Latin America or the Far East, to name two far riper regions worthy of exploration).  Consider that the rules regarding eligible players had to be drastically amended for the games (with 28 players potentially active, instead of the normal 25).  Consider that even with all the hoopla and buildup, neither game was a sellout (average attendance was 38,000, about what the Dodgers draw at home on a bad night), and that the first of the two aired at 4:00 in the morning in New York and the rest of the East Coast.

              And finally, consider that since most fans expect the season to open with fanfare and mass media coverage, as was the case this week, probably only the most die-hard Dodger and D-Back fans even knew the games counted, if they knew they had been played at all.  (It also didn’t help that the games were scheduled during the second big weekend of March Madness, which tends to take all the oxygen out of other sports fare anyway.)

              But no matter; Selig, who is, thankfully, retiring after this season, apparently wants to go out with a bang, and this sideshow was his idea of at least an overture to his swan song.  For the record, the Dodgers won both games, meaning they led the league for ten days without even needing to scoreboard watch.  And when the D’Backs lost their home opener to the Giants this past Monday, they had the distinction of being 0-3 after one day of the real baseball season.

              Hopefully the sport will quickly move past this silliness as the real games take hold of the nation’s attention.  And fans definitely will have some new things to contemplate as the home runs and strikeouts pile up.  Here’s a brief summary of what is new this year, along with a prediction or two.

              For openers, instant replay has finally arrived in full force.  After allowing all too many wrong calls to decide the outcome of critical games for far too long, the major leagues will finally allow state-of-the-art technology to be used to check on controversial calls and to correct those that were clearly wrong.  To be sure, the rules for the use of the “on further review” concept will be experimental for the first few years.  This year, managers can challenge a maximum of two calls per game (and they get the second challenge only if they are correct on the first).

              But the fear that the procedure will be too time-consuming seems ill-founded, as most replays will be resolved in no more time than managers used to spend yelling and screaming about bad calls (before being ejected by abused umpires) in the old days, i.e., last year.

              The other big rule change is an attempt to reduce the number of collisions (between runners trying to score and catchers trying to prevent them from scoring) at home plate.  As ever-larger salaries are paid to these super athletes (more on that point in a moment), the owners are finally seeing that injuries that threaten careers are not good for their bottom line.  And so, this year, a new set of rules will prohibit runners from trying to crash into the catcher (thereby dislodging the ball that might be in or approaching his mitt) and catchers from blocking home plate from oncoming runners, unless they already have the ball in their possession.

              How this rule will work in actual game-deciding situations remains to be seen, but the guess here is that the players will find ways to enforce it themselves once they get used to the idea of not sacrificing their bodies for one run (especially since that run will quickly be taken away or added, depending on which player, the runner or the catcher, is the offending party).

              As for the salaries that are now being paid, it’s probably safe to say that we are in completely insane territory.  Witness Clayton Kershaw, a great pitcher, no doubt, but still only a guy who plays once every five days, getting seven years at $30-plus million a year.  Witness Robinson Cano and Miguel Cabrera (superstars both, but each already over the age of 30) receiving ten-year deals that pay them similar annual amounts.  Giving any pitcher a contract as long and costly as Kershaw’s is nuts (and, sure enough, Kershaw is already on the disabled list and not expected to pitch until June).  And no one can expect Cano and Cabrera to be anything but shells of their former selves in the out years of their contracts (in their late 30s and beyond).  But the owners have the money (courtesy of gigantic television contracts and exploding attendance figures) and the players are represented by cagey agents who are going to make sure their clients get their share of the pie.

              As for the game itself, it will remain the great clock-less wonder it has always been.  It will still take 27 outs to end one and still take more runs scored than the other team to win one.  Predictions at the beginning of a 162-game season (with injuries and trades unforeseeable) are always ridiculous, but so are most ardent fans.  And since I most definitely am one of those, I’ll make a couple.  I see the Baltimore Orioles and Miami Marlins as the surprise teams in both leagues, with the Orioles making the playoffs with over 90 wins, and the Marlins pushing the top teams (Washington and Atlanta) in the NL East before ending slightly above .500.  In the end, I see a St. Louis-Tampa Bay World Series, with the Cardinals winning it all.