Sept. 11, 2001 started out as a beautiful day in Wisconsin, New York City and Washington, D.C.
I remember almost everything about the entire day. Sept. 11, 2001 is to my generation what Nov. 22, 1963 was to my parents and Dec. 7, 1941 was to my grandparents.
I had dropped off our oldest son at Ripon Children’s Learning Center. As I was coming out, the mother of one of his group told me to find a good radio station; she had heard as she was getting out with her son that a plane had hit the World Trade Center.
I got in my car and turned on the radio in time to hear, seemingly live, a plane hit the WTC. But it wasn’t the first plane; it was the second plane hitting the other tower.
As you can imagine, my drive to Fond du Lac took unusually long that day. I tried to call Mrs. Presteblog, who was working at Ripon College, but she didn’t answer because she was in a meeting. I had been at Marian University as their PR director for just a couple months, so I didn’t know for sure who the media might want to talk to, but once I got there I found a couple professors and called KFIZ and WFDL in Fond du Lac and set up live interviews.
The entire day was like reading a novel, except that there was no novel to put down and no nightmare from which to wake up. A third plane hit the Pentagon? A fourth plane crashed somewhere else? The government was grounding every plane in the country and closing every airport?
I had a TV in my office, and later that morning I heard that one of the towers had collapsed. So as I was talking to my wife on the phone, NBC showed a tower collapsing, and I assumed that was video of the first tower collapse. But it wasn’t; it was the second tower collapse, and that was the second time that replay-but-it’s-not thing had happened that day.
Marian’s president and my boss (a native of a Queens neighborhood who grew up with many firefighter and police officer families, and who by the way had a personality similar to Rudy Giuliani) had a brief discussion about whether or not to cancel afternoon or evening classes, but they decided (correctly) to hold classes as scheduled. The obvious reasons were (1) that we had more than 1,000 students on campus, and what were they going to do if they didn’t have classes, and (2) it was certainly more appropriate to have our professors leading a discussion over what had happened than anything else that could have been done.
I was at Marian until after 7 p.m. I’m sure Marian had a memorial service, but I don’t remember it. While I was in Fond du Lac, our church was having a memorial service with our new rector (who hadn’t officially started yet) and our interim priest. I was in a long line at a gas station, getting gas because the yellow low fuel light on my car was on, not because of panic over gas prices, although I recall that one Fond du Lac gas station had increased its prices that day to the ridiculous $2.299 per gallon. (I think my gas was around $1.50 a gallon that day.)
Two things I remember about that specific day: It was an absolutely spectacular day. But when the sun set, it seemed really, really dark, as if there was no light at all outside, from stars, streetlights or anything else.
For the next few days, since our son was at the TV-watching age, we would watch the ongoing 9/11 coverage in our kitchen while Michael was watching the 1-year-old-appropriate stuff or videos in our living room. That Sunday, one of the people who was at church was Adrian Karsten of ESPN. He was supposed to be at a football game working for ESPN, of course, but there was no college football Saturday (though high school football was played that Friday night), and there was no NFL football Sunday. Our organist played “God Bless America” after Mass, and I recall Adrian clapping with tears down his face; I believe he knew some people who had died or been injured.
Later that day was Marian’s Heritage Festival of the Arts. We had record attendance since there was nothing going on, it was another beautiful day, and I’m guessing after five consecutive days of nonstop 9/11 coverage, people wanted to get out of their houses.
In the 20 years since then, a comment of New York City Mayor Rudy Giuliani has stuck in my head. He was asked a year or so later whether the U.S. was more or less safe since 9/11, and I believe his answer was that we were more safe because we knew more than on Sept. 10, 2001. That and the fact that we haven’t been subject to another major terrorist attack since then is the good news.
Osama bin Laden (who I hope is enjoying Na’ar, Islam’s hell) and others in Al Qaeda apparently thought that the U.S. (despite the fact that citizens from more than 90 countries died on 9/11) would be intimidated by the 9/11 attacks and cower on this side of the Atlantic Ocean, allowing Al Qaeda to operate with impunity in the Middle East and elsewhere. (Bin Laden is no longer available for comment.) If you asked an American who paid even the slightest attention to world affairs where a terrorist attack would be most likely before 9/11, that American would have replied either “New York,” the world’s financial capital, or “Washington,” the center of the government that dominates the free world. A terrorist attack farther into the U.S., even in a much smaller area than New York or Washington, would have delivered a more chilling message, that nowhere in the U.S. was safe. Al Qaeda didn’t think to do that, or couldn’t do that. The rest of the Middle East also did not turn on the U.S. or on Israel (more so than already is the case with Israel), as bin Laden apparently expected.
The bad news is all of the other changes that have taken place that are not for the better. Bloomberg Businessweek asks:
So was it worth it? Has the money spent by the U.S. to protect itself from terrorism been a sound investment? If the benchmark is the absence of another attack on the American homeland, then the answer is indisputably yes. For the first few years after Sept. 11, there was political near-unanimity that this was all that mattered. In 2005, after the bombings of the London subway system, President Bush sought to reassure Americans by declaring that “we’re spending unprecedented resources to protect our nation.” Any expenditure in the name of fighting terrorism was justified.
A decade later, though, it’s clear this approach is no longer sustainable. Even if the U.S. is a safer nation than it was on Sept. 11, it’s a stretch to say that it’s a stronger one. And in retrospect, the threat posed by terrorism may have been significantly less daunting than Western publics and policymakers imagined it to be. …
Politicians and pundits frequently said that al Qaeda posed an “existential threat” to the U.S. But governments can’t defend against existential threats—they can only overspend against them. And national intelligence was very late in understanding al Qaeda’s true capabilities. At its peak, al Qaeda’s ranks of hardened operatives numbered in the low hundreds—and that was before the U.S. and its allies launched a global military campaign to dismantle the network. “We made some bad assumptions right after Sept. 11 that shaped how we approached the war on terror,” says Brian Fishman, a counterterrorism research fellow at the New America Foundation. “We thought al Qaeda would run over the Middle East—they were going to take over governments and control armies. In hindsight, it’s clear that was never going to be the case. Al Qaeda was not as good as we gave them credit for.”
Yet for a decade, the government’s approach to counterterrorism has been premised in part on the idea that not only would al Qaeda attack inside the U.S. again, but its next strike would be even bigger—possibly involving unconventional weapons or even a nuclear bomb. Washington has appropriated tens of billions trying to protect against every conceivable kind of attack, no matter the scale or likelihood. To cite one example, the U.S. spends $1 billion a year to defend against domestic attacks involving improvised-explosive devices, the makeshift bombs favored by insurgents in Afghanistan. “In hindsight, the idea that post-Sept. 11 terrorism was different from pre-9/11 terrorism was wrong,” says Brian A. Jackson, a senior physical scientist at RAND. “If you honestly believed the followup to 9/11 would be a nuclear weapon, then for intellectual consistency you had to say, ‘We’ve got to prevent everything.’ We pushed for perfection, and in counterterrorism, that runs up the tab pretty fast.”
Nowhere has that profligacy been more evident than in the area of homeland security. “Things done in haste are not done particularly well,” says Jackson. As Daveed Gartenstein-Ross writes in his new book, Bin Laden’s Legacy, the creation of a homeland security apparatus has been marked by waste, bureaucracy, and cost overruns. Gartenstein-Ross cites the Transportation Security Agency’s rush to hire 60,000 airport screeners after Sept. 11, which was originally budgeted at $104 million; in the end it cost the government $867 million. The homeland security budget has also proved to be a pork barrel bonanza: In perhaps the most egregious example, the Kentucky Charitable Gaming Dept. received $36,000 to prevent terrorists from raising money at bingo halls. “If you look at the past decade and what it’s cost us, I’d say the rate of return on investment has been poor,” Gartenstein-Ross says.
Of course, much of that analysis has the 20/20 vision of hindsight. It is interesting to note as well that, for all the campaign rhetoric from candidate Barack Obama that we needed to change our foreign policy approach, President Obama changed almost nothing, including our Afghanistan and Iraq involvements. It is also interesting to note that the supposed change away from President George W. Bush’s us-or-them foreign policy approach hasn’t changed the world’s view, including particularly the Middle East’s view, of the U.S. Someone years from now will have to determine whether homeland security, military and intelligence improvements prevented Al Qaeda from another 9/11 attack, or if Al Qaeda wasn’t capable of more than just one 9/11-style U.S. attack.
Hindsight makes one realize how much of the 9/11 attacks could have been prevented, or at least their worst effects lessened. The book 102 Minutes: The Untold Story of the Fight to Survive Inside the Twin Towers, by two New York Times reporters, points out that eight years after the 1993 World Trade Center bombing, New York City firefighters and police officers still could not communicate with each other, which led to most of the police and fire deaths in the WTC collapses. Even worse, the book revealed that the buildings did not meet New York City fire codes when they were designed because they didn’t have to, since they were under the jurisdiction of the Port Authority of New York and New Jersey. And more than one account shows that, had certain people at the FBI and elsewhere been listened to by their bosses, the 9/11 attacks wouldn’t have caught our intelligence community dumbfounded. (It does not speak well of our government that no one appears to have paid any kind of political price for the 9/11 attacks.)
I think, as Bloomberg Businessweek argued, our approach to homeland security (a term I loathe) has overdone much and missed other threats. Our approach to airline security — which really seems like the old error of generals fighting the previous war — has made air travel worse but not safer. (Unless you truly believe that 84-year-old women and babies are terrorist threats.) The incontrovertible fact is that every 9/11 hijacker fit into one gender, one ethnic group and a similar age range. Only two reasons exist not to profile airline travelers — political correctness and the assumption that anyone is capable of hijacking an airplane, killing the pilots and flying it into a skyscraper or important national building. Meanwhile, while the U.S. spends about $1 billion each year trying to prevent improvised explosive device attacks, what is this country doing about something that would be even more disruptive, yet potentially easier to carry out — an electromagnetic pulse attack, which would fry every computer within the range of the device?
We have at least started to take steps like drilling our own continent’s oil and developing every potential source of electric power, ecofriendly or not, to make us less dependent on Middle East oil. (The Middle East, by the way, supplies only one-fourth of our imported oil. We can become less dependent on Middle East oil; we cannot become less dependent on energy.) But the government’s response to 9/11 has followed, like B follows A, the approach our culture has taken to risk of any sort, as if covering ourselves in bubble wrap, or even better cowering in our homes, will make the bogeyman go away. Are we really safer because of the Patriot Act?
American politics was quite nasty in the 1990s. For a brief while after 9/11, we had impossible-to-imagine moments like this:
And then within the following year, the political beatings resumed. Bush’s statement, “I ask your continued participation and confidence in the American economy,” was deliberately misconstrued as Bush saying that Americans should go out and shop. Americans were exhorted to sacrifice for a war unlike any war we’ve ever faced by those who wouldn’t have to deal with the sacrifices of, for instance, gas prices far beyond $5 per gallon, or mandatory national service (a bad idea that rears its ugly head in times of anything approaching national crisis), or substantially higher taxes.
Then again, none of this should be a surprise. Other parts of the world hate Americans because we are more economically and politically free than most of the world. We have graduated from using those of different skin color from the majority as slaves, and we have progressed beyond assigning different societal rights to each gender. We tolerate different political views and religions. To the extent the 9/11 masterminds could be considered Muslims at all, they supported — and radical Muslims support — none of the values that are based on our certain inalienable rights. The war between our world, flawed though it is, and a world based on sharia law is a war we had better win.
And winning that war does not include withdrawal. Whether or not Donald Trump was right about leaving Afghanistan, Joe Biden screwed up the withdrawal so badly that everyone with a memory compared it to our withdrawal from South Vietnam. The obviously incomplete vetting of Afghan refugees, who are now at Fort McCoy, and our leaving billions of dollars of military equipment in Afghanistan guarantee that we will be back, and that more American soldiers, and perhaps non-soldiers, will die.
In one important sense, 9/11 changed us less than it revealed us. America can be both deeply flawed and a special place, because human beings are both deeply flawed and nonetheless special in God’s eyes. Jesus Christ is quoted in Luke 12:48 as saying that “to whomsoever much is given, of him shall be much required.” As much as Americans don’t want to be the policeman of the world, or the nation most responsible for protecting freedom worldwide, there it is.
Ted Simmons’ upcoming induction into the Baseball Hall of Fame has robust support from the record book. He has the most hits in Major League history among switch-hitting catchers. His career OPS+ is higher than that of fellow Hall of Famers Carlton Fisk, Gary Carter, and Iván Rodríguez.
But to fully measure Simmons’ legacy requires a different sort of story, one that unfolded subtly over the 15,000-plus innings he caught for the Cardinals, Brewers, and Braves.
While plenty of statistics classify Simmons as an all-time great, his peers rarely allude to them. Instead, they speak about his passion, his intellect, and his unwavering focus. He never coasted through a ballgame. And by all accounts, he’s riding a lifelong streak of 72 years without a perfunctory conversation.
“He got the most out of his ability because of his mental approach,” said Bill Schroeder, a teammate in Milwaukee during the early 1980s. “He out-thought everyone.”
Simmons is a scholar of baseball, art, history — and life. He earned his undergraduate degree from the University of Michigan in 1996, nearly three decades after attending his first college class. He completed his coursework in Ann Arbor on trips to Detroit while scouting for the Cleveland Indians, because, as he said, there are certain things in life that a person is supposed to finish.
And he is willing to share his wisdom, provided the interlocutor is prepared with the right questions.
“You’d better be thinking it out for yourself,” Simmons told me earlier this year. “If you’re not taking this game real seriously and examining everything you see — incorporating it into some form, like an essay, a notebook, or a booklet — then you’re never going to understand baseball the way a professional should.
“If someone who is really serious about the game asks you a thoughtful question, I’m inclined to say, ‘Come here and sit down, and I’ll tell you about what you’re seeing.’ If they have thought about the game, you can help to illuminate it for them.
“What I tell people, in the simplest form, is this: Anytime you see something happen on the field that strikes you as unusual, go there. Tear that situation apart, inside-out, upside-down and backwards. There’s going to be insight in there.”
To think deeply while playing freely is baseball’s essential riddle. Over 21 Major League seasons, Simmons came closer to solving it than most mortals have, before or since. He found a way for painstaking contemplation to enhance, rather than compromise, his natural athleticism.
And he did it all with magnetism that was evident while growing up in the Detroit suburb of Southfield, Mich. His older brothers encouraged him to switch-hit. His mother, Bonnie Sue Webb-Simmons, modeled the determined work ethic that became the backbone of Ted’s career. He drew attention as a Big Ten football recruit while starring for the A&B Brokers amateur baseball team.
Oh, and have you heard the story about the time Ted and his wife, Maryanne Ellison Simmons, hitchhiked in Michigan with an aspiring rock star named Bob Seger?
Sources confirm: It’s true.
Simmons’ long, flowing hair earned him a memorable nickname: Simba. He planned to play baseball at the University of Michigan, until the St. Louis Cardinals selected him with the 10th overall pick in the 1967 Draft. He was 17 when he signed his first professional contract.
One year later, the Cardinals met his hometown Tigers in the World Series. The Cardinals arranged for Ted and Maryanne — then his girlfriend, now his wife of 51 years — to attend the games at Tiger Stadium. More than a half-century later, Simmons remembers the “horrible” internal conflict he felt. The Tigers were his boyhood team. Al Kaline was his favorite player. But now he was a professional. Baseball remained the game he loved — but now it was his livelihood, too.
“I was sitting in the upper deck, watching the game with all of the Cardinals’ front-office people,” Simmons recalled. “They knew I was from Detroit. I’d played that season in Modesto. At one point, Al Kaline got a big base hit to put the Tigers ahead. I did everything I could to prevent myself from cheering along with the rest of the crowd. I realized quickly enough where I was sitting and who was responsible for my tickets. I kept in my seat.”
Simmons spent 14 years in the Cardinals organization, absorbing the traditions and teachings of St. Louis baseball oracle George Kissell. Simmons made six All-Star appearances by the time he was dealt to Milwaukee after the 1980 season. He earned two more selections as a Brewer.
At first, Brewers players weren’t sure how to approach their new, serious-minded teammate. Simmons would read books in the clubhouse. He also enjoyed playing bridge, which evolved into a point of connection — and instruction — for his teammates.
My uncle made book for a living. That is, he took money from those who wagered on sporting events, presidential elections, anything whereby they thought they could make a fast and easy dollar. I suppose then it was inevitable that, as a young man, I felt a certain affinity for the thought of Blaise Pascal (1623-1662). Pascal, of course, gambled on stakes much higher than my uncle ever imagined. At the same time, my uncle knew something that Pascal never understood or, in any event, never admitted. You can’t beat the house.
Pascal’s mind was among the finest of the seventeenth century. He was a prodigy, perhaps a genius, who, at fifteen, published a distinguished essay on conic sections. He invented the first calculating machine, which he called La Pascaline, and his experiments with the vacuums that nature abhors led to the creation of the barometer. Pascal was also a first-rate mathematician whose fascination with, and success at, the gaming table enabled him to contribute to the development of probability theory. To test his hypotheses, he devised the roulette wheel.
On November 23, 1654, at the age of thirty-one, Pascal underwent an emotional conversion that stirred him to abandon his worldly metier and to become an apologist for Christianity. He is best remembered today as a religious thinker, which he was, and a mystic, which he was not. Like the nineteenth-century Danish theologian Søren Kierkegaard, Pascal approached God with “fear and trembling.” A mystic seeks and expects union with God. Pascal, by contrast, despaired of ever finding Him. His conversion did not bring him clarity of vision. God remained distant and unfathomable; the will of God was inscrutable and His design for the cosmos mysterious. “For in fact,” Pascal asked, “what is man in nature?” He answered his question, writing:
A Nothing in comparison with the Infinite, an All in comparison with the Nothing, a mean between nothing and everything. Since he is infinitely removed from comprehending the extremes, the end of things and their beginning are hopelessly hidden from him in an impenetrable secret; he is equally incapable of seeing the Nothing from which he is made, and the Infinite in which he is swallowed up.
Yet, alone and without God, humanity was lost, frightened, and miserable in a vast and desolate universe.
To calm his anxiety that God was, at best, remote and, at worst, illusory, Pascal conceived his famous wager. He urged skeptics, atheists, and free-thinkers to live as if they believed in God. Critics then and since have denounced what seemed to be Pascal’s sneering disdain in urging people to affirm that God was real and existence meaningful. It was disingenuous, if not cynical, of Pascal to play the odds and to bet on the reality of God and eternal life when he suspected both were false. The critique, although carefully aimed, misses the target. It is no small irony given Pascal’s attacks on the Jesuits that, like Ignatius Loyola, he rejected predestination, convinced that men and women, through their own efforts, could earn God’s saving grace. Good habits and sincere piety, even in the absence of real belief, thus became indispensable to salvation. “Custom is our nature,” Pascal declared. “He who is accustomed to the faith believes it, can no longer fear hell, and believes in nothing else.” As Augustine taught, the routine practice of faith might in time lead to genuine faith.
Difficulties arise not from Pascal’s intentions but from his premises. Pascal argued that a man, perhaps in utter desperation, must speculate that God exists. If he wins, he wins big and for all eternity. If he loses, he loses almost nothing, since he will be in no worse condition than before. A prudent man thus has no alternative but to roll the dice or to turn over the next card. He’s gambling with house money. But in reality, in history, those who have denied God have often won glory, wealth, and power; according to scripture, they have gained the whole world. Satan took Jesus to a mountain and there “showed him all the kingdoms of the world and the glory of them; and he said to him, ‘All these I will give you, if you will fall down and worship me.’” (Matthew 4:8-9) It is equally mistaken that a man loses nothing by hazarding that God is real. A man who worships God may sacrifice all he has, all he is, and all he loves in the vindication of his faith. Consider Job.
Pascal’s tragedy originated in his embrace of Jansenism, which introduced Calvinist doctrines and attitudes into the Catholic world of seventeenth-century France and Western Europe. The Jansenists had revived the Manichean dualism, which characterized humanity as divided between good and evil. For the Jansenists, every soul was a battleground, its fate determined by whichever conflicting impulse was strongest. The Jansenists insisted, therefore, that virtue must be imposed on rebellious and perverse human beings. Only an exacting and solemn authority could direct individuals toward rectitude and purity. The Jansenists also prescribed such discipline for the churches they controlled and the local governments in France over which they exercised some influence. The flesh must be compelled to yield to the spirit. It takes no great leap of historical imagination to see that the Jansenist admiration for order, management, restraint, bureaucracy, and surveillance could be made to attend the requirements of the totalitarian state. The Jansenists determined to administer the “greatness and misery of man” (“grandeur et misère de l’homme”), which was the foremost theme of Pascal’s work, through compulsion.
Jansenism, asserted Friedrich Heer, endowed Pascal with “an enormous capacity for hatred and over-simplification.” Stressing the enthusiasm and certainty that governed the residents of Port Royal, the spiritual and theological capital of the Jansenist movement, Heer doubtless exaggerates the charges against Pascal. He ignores not the complexity of Pascal’s thought, but the complexity of the man himself. Pascal was both austere and worldly, both rational and intuitive. When he partitioned the mind into l’esprit géométrique and l’esprit de finesse, he was mapping the course that a single mind—his own—could take. Pascal may have felt the zeal of a convert, but he never seems to have acquired the conviction that he possessed absolute truth or a sure method by which to attain it. For Pascal, God alone provided the antidote to the twin maladies of doubt and insecurity.
To alleviate his own misgivings, Pascal set out to compose a systematic defense of Christianity. The Pensées contain the remnants of the greater work that he never lived to complete. If these fragments and aphorisms suggest the character of the volume that Pascal meant to write, then it seems the Pensées would have been less an apologia for Christianity than the spiritual autobiography of a thinker attempting to explain to his intellect how his faith was possible.
In the Pensées, Pascal intimated that skepticism may transcend reason, and the doubts that reason awakens, leading not to certainty but to affirmation. By acknowledging the limits of reason, the thoughtful man, he hoped, could accept the mystery of life without also yielding to its absurdity. “The last proceeding of reason,” he wrote, “is to recognize that there is an infinity of things which are beyond it. It is but feeble if it does not see so far as to know this. But if natural things are beyond it, what will be said of supernatural?” Yet, perhaps at this moment of vital insight, Pascal exhibited some of the odium that Friedrich Heer had detected in his thought and character. Like many intensely passionate and astute natures, Pascal disdained the society in which he lived — a disdain that reinforced his displeasure with his fellow human beings and, at times, with life itself. Most men, he assumed, were intellectually lazy and emotionally tepid. Desultory, incurious, and stupid, they were incapable of profound thought, searching doubt, or vibrant faith. The majority preferred not to bother about any subject, whether intellectual or theological, that would jolt them out of their passivity, lassitude, and indifference. Pascal’s disillusioned analysis of human nature may, as Heer suggests, have issued from the Jansenist view that human beings are both helpless and degraded. He could not avoid exposing the rancor, the insincerity, the conceit, the dishonesty, the self-deception, the cowardice, and the pettiness that circumscribed and disfigured the lives of most ordinary men, and that made him despise them.
For Pascal, as for Kierkegaard and other, later existentialist philosophers and theologians, unending dread may well have been the cost of existence. “The eternal silence of these infinite spaces frightens me,” he proclaimed. There is at times the echo of a terrible nihilism that reverberates through the otherwise silent spaces of Pascal’s infinite universe, as he gazed into the abyss. T. S. Eliot wrote that Pascal’s despair is “perfectly objective,” corresponding “exactly to the facts” and so “cannot be dismissed as mental disease.” In the end, Pascal concluded, the rational proof of God’s existence, such as Descartes had tried to construct with the ontological argument, was useless and unconvincing to those disinclined to believe. Essential questions about the meaning and purpose of human existence could never be resolved through the application of reason or logic. In fact, for Pascal, they could not be resolved at all. They could only be felt in all of their contradiction and paradox. The experience of such utter confusion and despair alone made faith possible and necessary, but offered no assurance that it would come.
Voltaire judged Pascal to be a restless soul and a sick mind. Pascal agreed, confirming Voltaire’s assessment long before he had rendered it. During his final illness, Pascal often refused the care of his physician, saying: “Sickness is the natural state of Christians.” He believed that human beings had been created to suffer. Misery was the condition of life in this world. His was a hard doctrine.
But to what end did people suffer? What did their suffering accomplish? Did it exalt the spirit? Were they to suffer unto truth or, as was more likely, did they suffer because the flesh was evil and needed to be punished? Pascal had gambled for ultimate stakes. When he rolled the dice, they came up snake eyes, not once, not the last time, but every time. His tragedy, and potentially ours, is that he could discover no purpose in his encounters with creation, his fellow human beings, life itself. Philosophy, science, and reason offered no assurance of truth, and were of little comfort against anguish and hopelessness. Some could even use elaborate rational arguments to defy the will of God and to excuse sin, as had the Jesuits whom Pascal denounced in The Provincial Letters.
Love was equally vain and worthless. It prompted only deception and contempt for truth. Human beings are so flawed and imperfect that they are wretched and detestable. Loving themselves and desiring others to love them, they conceal their transgressions and deformities. Since no one is inviolate, no one deserves to be loved just as, were strict justice to prevail, no one deserves to be saved. Man, Pascal complained:
cannot prevent this object that he loves from being full of faults and wants. He wants to be great, and he sees himself as small. He wants to be happy, and he sees himself miserable. He wants to be perfect, and he sees himself full of imperfections. He wants to be the object of love and esteem among men, and he sees that his faults merit only their hatred and contempt. This embarrassment in which he finds himself produces in him the most unrighteous and criminal passion that can be imagined; for he conceives a mortal enmity against the truth which reproves him, and which convinces him of his faults. He would annihilate it, but, unable to destroy it in its essence, he destroys it as far as possible in his own knowledge and in that of others; that is to say, he devotes all his attention to hiding his faults both from others and from himself, and he cannot endure that others should point them out to him, or that they should see them.
All “disguise, falsehood, and hypocrisy,” men are ignorant, brazen, and delusional. Preferring lies to truth, they should not be angry at others for pointing out their shortcomings. “It is but right that they should know us for what we are,” Pascal insisted, “and should despise us.”
Elsewhere Pascal acclaimed the dignity of man. He was a reed, but “a thinking reed,” more noble than the insensible universe that would destroy him. But the damage had been done. In the centuries to come, the same revulsion for humanity that Pascal had articulated, the same regimentation and tyranny that the Jansenists had endorsed, transformed life on earth into a living hell. In the early twentieth century, the Roman Catholic philosopher Gabriel Marcel came face to face with the tragedy of the human condition. Shattered by his experiences in the Great War, during which he had served with the French Red Cross identifying the dead and accounting for the missing, Marcel sought an alternative to grief and desolation.
He contended that in the modern world a person was no longer a person, but “an agglomeration of functions.” According to this functional definition, human beings were valued solely for the work they did and the goods they produced. Death became “objectively and functionally, the scrapping of what has ceased to be of use and must be written off as a total loss.” Such a vision of life deprived people of their spirituality and their faith, and robbed them of any joy that they might feel. Consumed by rancor, malice, and ingratitude, they suffered an “intolerable unease,” as they descended into the void that engulfed them.
Love was the answer. If people could overcome selfishness and egocentricity, if they could love one another, Marcel was confident that they could fulfill themselves as human beings. Such involvement with, and such fidelity to, others afforded a glimpse of the transcendent and was, in Marcel’s view, the most persuasive argument for the existence of God. Faith consoled and inspired the downtrodden, the persecuted, the oppressed, and the brokenhearted. It cultivated and enhanced all human relationships. For if people refused any longer to treat others merely as objects performing a function, if they came at last to recognize that all persons, however deficient, imperfect, errant, or sinful, mattered to God, then those persons were also more likely to matter to them.
I have come to the conclusion that each of us is capable of doing the right thing or the wrong thing at any one time. Your ratio of right decisions to wrong decisions shows the type of person you are, and whether or not your life will be successful (as in preventing controllable bad things from happening to you).
The Green Bay Packers on Thursday introduced their new, history-inspired third uniform: the 50s Classic Uniform. The new uniforms will debut at Lambeau Field on Oct. 24 against Washington.
The 50s Classic Uniform is inspired by the team’s uniforms from 1950 to 1953, which marked the second time the team wore green and gold in its history. The Packers first wore green in the mid-to-late 1930s.
The uniforms are all green, with gold numbers and stripes similar to the jerseys worn in the 1950s. In those days, the green was a Kelly green, and the team alternated between wearing it with green or gold pants. This alternate jersey, in the Packers’ traditional green with gold numbers and stripes, will be worn with matching green pants with gold stripes and matching green socks.
“The 1950s were one of the most interesting times in our organization’s rich history, creating the bridge between two of the greatest eras in pro football,” said Packers President/CEO Mark Murphy. “With the NFL growing rapidly, this time period set the stage for the construction of Lambeau Field and for the team’s success in the 1960s and beyond. We hope our fans enjoy celebrating our history with this new alternate uniform.”
I assert this (because I’m always right in this blog, and if you agree with me you’re right too) as someone who is not necessarily enamored with the green and gold look — specifically the “gold” part, which is more accurately described as “athletic gold” or “yellowgold,” basically a little bit darker than yellow. During the early 1950s apparently the Packers used a more metallic look …
… which I prefer to their current yellowgold.
(A Twin Cities sportswriter once described the Packers’ look as lemon and spinach. I have no problem with either description, but the writer should have included the Vikings colors — bruises and pus.)

Other than the monochrome look, I have another issue:
While the early 1950s were not a particularly successful time for the Packers on the field, it was the dawn of an extraordinarily eventful decade off the field, a decade that began with the departure of the team’s founder Curly Lambeau and ended with the arrival of Vince Lombardi. In the 1950s, the NFL was growing quickly and gaining nationwide interest through television exposure. The Packers organization was at a turning point and a franchise-saving stock sale helped lay the groundwork for the eventual construction of Lambeau Field and set up the team to stay in Green Bay through modern times.
There’s a consoling thought as we descend deeper into the socially disintegrating, culturally self-loathing, economically stalling dystopia of contemporary America: We’ve been here before.
The hegemony of today’s left-wing radicals, pursuing their ambitions to repudiate America’s historical values and remake the country in the image of some purified version of a big government, equity-enforcing, social-democratic paradise, recalls the 1970s. That decade culminated in the unique combination of economic ruin and international humiliation that defined a one-term Democratic presidency—and we know what happened next. Wait a while, the optimists say. The next Reagan Revolution is at hand.
History doesn’t repeat itself, despite what Marx said, but there is a pattern in the ebb and flow of historical tides. Extreme lurches in one direction tend to be self-correcting, especially when they push a nation as successful as America close to the abyss.
But conservatives should defer the optimism. There are surely similarities between today’s conditions and that benighted decade of 50 years ago, and you don’t have to have a wild imagination to see the Joe Biden-Jimmy Carter parallels. But there are important differences that should temper any confident predictions of an imminent new era of conservative ascendancy.
The 1970s were probably the last decade when existential doubts about the American project were as pronounced and debilitating as they are now. The advances of the 1960s in civil rights and economic prosperity collapsed into a tumult of social unrest and, to coin a phrase, national malaise. The racial strife that closed the previous decade continued to define much of the next one. There are echoes of today’s woke revolutionaries in the 1968 Summer Olympics, when black athletes demonstrated their antipathy to the flag and what it stood for in their own Black Power salute from the medal podium.
The surge in homicides in the past year is a flashback to the decade when American cities were hellscapes—as is the flight of many Americans from those cities to suburbs and beyond. Back then Democratic politicians blamed it on systemic injustice and racist policing and seemed to favor criminals over their victims. Sound familiar?
Then as now there was an existential sense of peril and failure. In the 1970s the nation was haunted by a widespread fear that America was losing the great ideological struggle of the time to the communist superpower. The U.S. retreat from Vietnam, the tightening Soviet grip on Eastern Europe, and Marxist advances in Latin America had American progressive elites, at least, convinced of ultimate decline and fall. More than 40 years later, American elites are convinced another communist power is eclipsing the U.S. and the civilization it has led.
The 1970s gave us stagflation—immortalized in the popularization of the “misery index”—the sum of the unemployment and inflation rates. While today’s number remains well shy of the peak it reached in 1980, it has doubled in the past two years—a feat last performed in the mid-1970s. Other echoes resonate across the half century: unaccustomed military misadventures, in Vietnam then and Iraq and Afghanistan now; presidential infamy in Richard Nixon and Donald Trump.
For all the similarities, though, there’s at least one big political difference—rooted in an economic one—that suggests reason for pessimism.
Today, unlike then, almost the entire American establishment lines up on one side. The progressive revolution is much more deeply embedded in the nation’s institutions than it ever was in the 1970s. It was still possible then to find conservatives on campuses—it was the intellectual revolution of Milton Friedman and the Chicago school that presaged Ronald Reagan’s political version. Friedman would probably be canceled today. The permanent government wasn’t steeped as it is now in the social and political orthodoxy that thwarts efforts to undo it.
But the biggest difference of all is the investment by America’s corporate leadership in the dominant progressive ideology.
By the late 1970s U.S. financial markets had been in a decade-long bear market. In 1979 the Dow Jones Industrial Average was where it had been in 1965. Since then, and thanks in great part to the global economic liberalization unleashed by the Reagan-Thatcher years, today’s American corporations have enjoyed a bull run like no other.
Which leaves us with one of the strangest alliances in history: a dominant political class that argues America is a fundamentally flawed society in need of complete transformation, in coalition with a dominant capitalist class that reaps unprecedented riches from investors’ convictions that things have never been better.
Barring an epic financial collapse or some improbable early cultural counterrevolution, the coalition that helped elect Ronald Reagan isn’t coming back. Any reversal of the tide of progressive hegemony will have to be achieved from the bottom up.
Upper Midwest Airchecks brings us the day that was two weeks after my 11th birthday and two weeks and three days before the Bicentennial:
Progressive leftists are good at destroying traditions, careers and free expression. But after all the societal broken eggs, where’s the progressive omelette? Surely somewhere there must be a model of success given the confidence with which the wokesters of modern media condemn America’s constitutional republic.
A recent headline on this column invited readers to “Name a Great Civilization Created by Progressive Leftists.” Your humble correspondent is still happy to accept nominations and the submissions so far have been extremely interesting. The search continues for a progressive paradise. But what’s striking is that a number of left-leaning respondents—those who did not simply express resentment at the question—have nominated nations of Western civilization that are the typical targets of progressive ire. In fact a few leftists even cite the good old USA as a place created by the progressives of their day.
Perhaps this is encouraging, because it suggests that when pressed the cancel crowd acknowledges that it’s not unreasonable to judge people by the standards of their own times.
But on the substance, what about this argument that the United States of America is the answer to the question posed in that headline?
Princeton professor of jurisprudence Robert George runs the school’s James Madison Program in American Ideals and Institutions. In response to an email inquiry he writes that “the claim that the American founders were ‘progressive leftists’ is absurd.” Here’s the rest of his response:
An online discussion about music of the 1980s included a few references to songs about that fun topic of the imminent nuclear holocaust.
It should be pointed out that popular music has on occasion used social unrest to the point of the Apocalypse as a theme or inspiration …
… even before the ’80s.
The oeuvre of Doom Rock really got going in the 1980s, though, during the presidential terms of Ronald Reagan, who was simultaneously viewed by the American left as both stupid and evil (which you’d think would be incompatible concepts, but logic has never been a strong suit of political discussions) and doubtless bound to blow up the planet.
So because musical artists are usually left of center and get, shall we say, inspired by (more polite than “ripping off”) others’ works, an entire subgenre of rock was created.
For those who don’t know German:
Social commentary has been a part of popular music at least since the 1960s. This particular musical trend dovetailed with what movie studios and TV networks were producing.
(One thing “Special Bulletin” and “Countdown to Looking Glass” have in common is really bad writing for and acting by those who were supposed to be portraying reporters and TV news anchors. Anyone who has watched coverage of such disasters as the 1989 San Francisco earthquake, 9/11 or severe storm damage knows that professionals do not emote on camera. The only way to get effective journalist portrayals is to use actual journalists, such as Eric Sevareid in his brief appearance in “Countdown to Looking Glass” and Sander Vanocur and Bree Walker in 1994’s “Without Warning.”)
You may notice, by the way, that the nuclear holocaust predicted for the 1980s did not take place. For that matter, within three years of Reagan’s leaving office the Soviet Union was no more and the entire Warsaw Pact collapsed. But defeating your enemy and being on the right side of history apparently doesn’t make good pop music.