Category: Culture

The 2021 Presteblog Christmas album

Starting shortly after my birth, my parents purchased Christmas albums for $1 from an unlikely source by today’s standards: tire stores.

(That’s as seemingly outmoded as getting, for instance, glasses every time you filled up at your favorite gas station, back in the days when gas stations were usually part of a car repair place, not a convenience store. Of course, go to a convenience store now, and you can probably find CDs, if not records, and at least plastic glasses such as Red Solo Cups and silverware. Progress, or something.)

The albums featured contemporary artists from the ’60s, plus opera singers and other artists.

These albums were played on my parents’ wall-length Magnavox hi-fi player.

Playing these albums was as annual a ritual as watching “How the Grinch Stole Christmas,” “A Charlie Brown Christmas,” or other holiday-season appointment TV.

Those albums began my, and then our, collection of Christmas music.

You may think some of these singers are unusual choices to sing Christmas music. (This list includes at least six Jewish singers.)

Of course, Christians know that Jesus Christ was Jewish. (And faithful to his faith.)

And I defy any reader to find anyone who can sing “Silent Night” like Barbra Streisand did in the ’60s.

These albums are available for purchase online, but record players are now as outmoded as, well, getting glasses with your fill-up at the gas station. (Though note what I previously wrote.)

But thanks to YouTube and other digital technology, other aficionados of this era of Christmas music can now have it preserved for their current and future enjoyment.

The tire-store-Christmas-album list has been augmented by both earlier and later works.

In the same way I think no one can sing “Silent Night” like Barbra Streisand, I think no one can sing “Do You Hear What I Hear” (a song written during the Cuban Missile Crisis, believe it or not) like Whitney Houston:

This list contains another irony — an entry from “A Christmas Gift for You,” Phil Spector’s Christmas album. (Spector’s birthday is Christmas.)

The album should have been a bazillion-seller, and perhaps would have been had it not been for the date of its initial release: Nov. 22, 1963.

Finally, here’s the last iteration of one of the coolest TV traditions — “The Late Show with David Letterman” and its annual appearance of Darlene Love (from the aforementioned Phil Spector album), which started in 1986 on NBC …

… and ended on CBS:

Merry Christmas. (To play this whole thing as a YouTube playlist, click here.)


Philip Klein:

Most people have at some point in their lives been asked to entertain a version of the cheesy question, “If you knew you had one day to live, what would you do?” It’s often posed as a playful game or essay topic or used by self-help gurus to prod people into trying to get a deeper sense of their priorities. But it’s time for everybody to start asking themselves a different question: If COVID-19 will be here forever, is this what you want the rest of your life to look like? In this case, it’s not an idle or theoretical exercise. It will be central to how we choose to live and function as a society for years or even decades to come.

Ever since the onset of COVID-19, we have more or less been living under an illusion. That illusion was that it would reach some sort of natural endpoint — a point at which the pandemic would be declared “over,” and we could all more or less go back to normal. The original promise of taking “15 days to slow the spread” or six weeks to “flatten the curve” has long since been reduced to a punchline.

In March of 2020, the outside estimates were that this coronavirus period would come to an end when safe and effective vaccines became widely available. Even the infamous Imperial College London report, viewed as draconian at the time for its estimate of up to 2.2 million deaths in the U.S. absent sustained intervention, predicted that its mitigation strategies “will need to be maintained until a vaccine becomes available.” Yet vaccines have been available for anybody who wants one for nearly six months, and our leaders have ignored the obvious off-ramp. The CDC backtracked on guidance and said that vaccinated people must wear masks in public, and many people and jurisdictions have listened. For example, Montgomery County, Md., has an extraordinarily high vaccination rate — with 96 percent of the eligible over-twelve population having received at least one dose and 87 percent of them being fully vaccinated. By its own metrics, the county has “low utilization” of hospital beds. Yet the county requires masks indoors — including in schools. In Oregon, vaccinated people are required to wear masks even outdoors. And it isn’t just liberal enclaves. A new Economist/YouGov poll found that eight in ten Americans report having worn a mask in the past week at least “some of the time” when outside their homes, with 58 percent masking “always” or “most of the time.” If masking has remained so widespread among adults months after vaccines became widely available, why will it end in schools after vaccines become available for children?

When operating under the assumption that there is a time limit on interventions, it’s much easier to accept various disruptions and inconveniences. While there have been ferocious debates over whether various mitigation strategies have ever been necessary, we should at least be able to agree that the debate changes the longer such restrictions are required. People making sacrifices for a few weeks, or even a year, under the argument that doing so saves lives is one thing. But if those sacrifices are indefinitely extended, it’s a much different debate.

There are many Americans who willingly locked themselves down and who still favor some restrictions. But what if this were to drag on for five years? Ten years? Twenty years? Do you want your children to be forced to wear masks throughout their childhoods? Do you want to bail on weddings if some guests may be unvaccinated? Skip future funerals? Ditch Thanksgiving when there’s a winter surge? Keep grandparents away from their grandkids whenever there’s a new variant spreading? Are you never going to see a movie in a theater again?

These are not wild scenarios. The Delta variant has led to surges throughout the world months after vaccines became widely available. Despite being a model of mass vaccination, Israel has been dealing with a significant Delta spike. To be clear, vaccines still appear to be quite effective at significantly reducing the risk of hospitalization and death. But if the virus continues to adapt and people need to get booster shots every six months or so, it seems there’s a good chance that the coronavirus will continue to spread for a very long time. So the question is how we, as individuals, and society as a whole, should adapt to this reality. Instead of thinking in terms of policies that may be tolerable for a very short period of time, it’s time to consider what would happen if such policies had to continue forever.

Whatever arguments were made to justify interventions early on in the pandemic, post-vaccine, we are in a much different universe. There is a negligible statistical difference in the likelihood of severe health consequences between vaccinated people who go about their business without taking extra precautions, and those who take additional precautions. Yet having to observe various protocols in perpetuity translates into a reduced quality of life. Put another way, the sort of question we need to start asking ourselves is not whether we can tolerate masking for one trip to the grocery store, but whether we want to live in a society in which we can never again go shopping without a mask.

People may ultimately come to different conclusions about the amount of restrictions they want to accept, regardless of the time frame. But at a minimum, we need to dispense with the framework that assumes the end of COVID-19 is just around the corner and instead recognize that it’s likely here to stay.

Worshiping Gaia instead of God

Ryan M. Yonk and Jessica Rood:

Dramatic headlines and images showing a deteriorating environment exist to demand swift, decisive, and large-scale action. We saw this approach in the 1960s, when the first made-for-TV environmental crises showed oil-drenched seabirds off the California coast, and more recently in depressing videos depicting starving polar bears. Dramatic imagery has become the norm when discussing environmental issues.

We also see these trends in editorial writing, discussions among political groups, changing business practices, and, increasingly, scholarly claims that also use dramatic imagery. At face value, these trends could indicate that the public demands dramatic governmental action on environmental issues. Some scholars, however, see this as more than mere increased public demand for government intervention, and they highlight similarities between environmentalism and religious movements. For example, Laurence Siegel states:

In the decades since modern environmentalism began, the movement has taken on some of the characteristics of a religion: claims not backed by evidence, self-denying behavior to assert goodness, (and a) focus on the supposed end of times.

Scholars have tuned into the general public’s zealous interest in the environment and, more importantly, its emphasis on government action to push forward their own ideological goals under the guise of scholarship. Whereas the stated goal of such scholarship is to mitigate climate change and improve sustainability, the reality is instead corrupted by thinly veiled ideology masquerading as scholarship, which is sure to distort any useful policy recommendations.

This phenomenon is illustrated by a recent study making the rounds in Science Daily and The Climate News Network. The authors, Vogel et al., claim that the world must decrease energy use to 27 gigajoules (GJ) per person in order to keep average global temperature increases to 1.5 degrees Celsius, a recommendation included in the Paris Agreement. Our current reality illustrates the outlandish nature of this suggestion. We are a far cry from this goal both in 2012, the year chosen for this study, as well as in 2019, the most recent year for available data. …

Using these data, the authors pair what they view as excessive energy use with a failure to meet basic human needs worldwide. In their own argument, they acknowledge that among the 108 countries studied, only 29 reach sufficient need-satisfaction levels. In each case where need satisfaction is met, the country uses at least double the 27 GJ/cap of sustainable energy use, thereby creating a conundrum for those concerned about both the environment and human well-being.
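For scale, the study’s 27 GJ-per-person annual budget can be translated into more familiar units. A minimal sketch (the 27 GJ threshold and the “at least double” figure come from the study as described above; the conversion itself is plain arithmetic):

```python
# Convert an annual per-capita energy budget (gigajoules per year)
# into the average continuous power draw it implies (watts).
SECONDS_PER_YEAR = 365 * 24 * 3600  # ignoring leap days

def gj_per_year_to_watts(gj: float) -> float:
    """Average power (W) implied by an annual energy budget in GJ."""
    return gj * 1e9 / SECONDS_PER_YEAR

cap = gj_per_year_to_watts(27)      # the study's proposed ceiling
doubled = gj_per_year_to_watts(54)  # the minimum used by need-satisfying countries

print(f"27 GJ/yr ≈ {cap:.0f} W continuous per person")   # ≈ 856 W
print(f"54 GJ/yr ≈ {doubled:.0f} W continuous per person")  # ≈ 1712 W
```

By this arithmetic, the proposed cap works out to roughly 856 watts of continuous energy use per person, and that covers not just household electricity but transport, industry, and everything else, which helps convey why the target is described above as outlandish relative to current reality.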

The authors, however, provide a solution, arguing that their research shows a complete overhaul of “the current political-economic regime” would allow countries to meet needs at sustainable energy levels. Some of their recommendations include universal basic services, minimum and maximum income thresholds, and higher taxes on wealth and inheritance.

These policy recommendations are not supported by the research and directly contradict a body of literature arguing that economic growth, not government redistribution, is our way forward. Vogel et al. argue against the necessity of economic growth and even go so far as to support degrowth policies, on the grounds that their model finds no link between economic growth and either maximizing human need satisfaction or minimizing energy use.

In short, their proposed solution would punish affluent countries and favor a collective misery in which any market-driven environmental improvements are crushed under the promise of equality and sustainable energy use.

Conversely, Laurence Siegel in Fewer, Richer, Greener: Prospects for Humanity in an Age of Abundance and the 2020 Environmental Performance Index (EPI) argue that economic prosperity allows countries to invest in new technologies and policies that improve not only environmental health but also the well-being of the people. Thus, if we want to continue to improve our relationship with the environment and human progress, we should be more supportive of economic growth and the entrepreneurship that drives it.

If the above relationship between economic prosperity, environmental health, and human well-being is the case, how can these authors claim the opposite? The most likely conclusion is that the authors allow an ideological bias to drive their research, a claim that is supported by their normative descriptions of affluent countries as examples of planned obsolescence, overproduction, and overconsumption as well as the authors’ obvious demonization of profit-making.

As Vogel et al. demonstrate, environmental issues can be exploited through the drama and quasi-religious nature of the movement. Unfortunately, academics such as Vogel et al. have learned to use these tools to stretch their limited findings into a full-blown rallying cry for their own preferred policies; in this case, socialism on a global scale.

An unlikely sermon subject for Sunday

Mark Malvasi:

My uncle made book for a living. That is, he took money from those who wagered on sporting events, presidential elections, anything whereby they thought they could make a fast and easy dollar. I suppose then it was inevitable that, as a young man, I felt a certain affinity for the thought of Blaise Pascal (1623-1662). Pascal, of course, gambled on stakes much higher than my uncle ever imagined. At the same time, my uncle knew something that Pascal never understood or, in any event, never admitted. You can’t beat the house.

Pascal’s mind was among the finest of the seventeenth century. He was a prodigy, perhaps a genius, who, at fifteen, published a distinguished essay on conic sections. He invented one of the first calculating machines, which he called La Pascaline, and his experiments with the vacuums that nature abhors led to the creation of the barometer. Pascal was also a first-rate mathematician whose fascination with, and success at, the gaming table enabled him to contribute to the development of probability theory. To test his hypotheses, he devised the roulette wheel.

On November 23, 1654, at the age of thirty-one, Pascal underwent an emotional conversion that stirred him to abandon his worldly metier and to become an apologist for Christianity. He is best remembered today as a religious thinker, which he was, and a mystic, which he was not. Like the nineteenth-century Danish theologian Søren Kierkegaard, Pascal approached God with “fear and trembling.” A mystic seeks and expects union with God. Pascal, by contrast, despaired of ever finding Him. His conversion did not bring him clarity of vision. God remained distant and unfathomable; the will of God was inscrutable and His design for the cosmos mysterious. “For in fact,” Pascal asked, “what is man in nature?” He answered his question, writing:

A Nothing in comparison with the Infinite, an All in comparison with the Nothing, a mean between nothing and everything. Since he is infinitely removed from comprehending the extremes, the end of things and their beginning are hopelessly hidden from him in an impenetrable secret; he is equally incapable of seeing the Nothing from which he is made, and the Infinite in which he is swallowed up.

Yet, alone and without God, humanity was lost, frightened, and miserable in a vast and desolate universe.

To calm his anxiety that God was, at best, remote and, at worst, illusory, Pascal conceived his famous wager. He urged skeptics, atheists, and free-thinkers to live as if they believed in God. Critics then and since have denounced what seemed to be Pascal’s sneering disdain in urging people to affirm that God was real and existence meaningful. It was disingenuous, if not cynical, of Pascal to play the odds and to bet on the reality of God and eternal life when he suspected both were false. The critique, although carefully aimed, misses the target. It is no small irony given Pascal’s attacks on the Jesuits that, like Ignatius Loyola, he rejected predestination, convinced that men and women, through their own efforts, could earn God’s saving grace. Good habits and sincere piety, even in the absence of real belief, thus became indispensable to salvation. “Custom is our nature,” Pascal declared. “He who is accustomed to the faith believes it, can no longer fear hell, and believes in nothing else.” As Augustine taught, the routine practice of faith might in time lead to genuine faith.

Difficulties arise not from Pascal’s intentions but from his premises. Pascal argued that a man, perhaps in utter desperation, must speculate that God exists. If he wins, he wins big and for all eternity. If he loses, he loses almost nothing, since he will be in no worse condition than before. A prudent man thus has no alternative but to roll the dice or to turn over the next card. He’s gambling with house money. But in reality, in history, those who have denied God have often won glory, wealth, and power; according to scripture, they have gained the whole world. Satan took Jesus to a mountain and there “showed him all the kingdoms of the world and the glory of them; and he said to him, ‘All these I will give you, if you will fall down and worship me.’” (Matthew 4:8-9) It is equally mistaken that a man loses nothing by hazarding that God is real. A man who worships God may sacrifice all he has, all he is, and all he loves in the vindication of his faith. Consider Job.
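The structure of the wager, and of the objection just raised, can be made explicit as a small expected-value sketch. The payoff numbers below are illustrative placeholders, not anything Pascal specified; the point is the asymmetry, and how the objection changes one cell of the table:

```python
def wager(p_god: float, win: float, loss_if_believe: float):
    """Expected payoffs of believing vs. not believing, in Pascal's framing.

    p_god           -- probability that God exists (any p > 0 works for Pascal)
    win             -- payoff of belief if God exists (Pascal: infinite; here a
                       large finite stand-in)
    loss_if_believe -- cost of belief if God does not exist (Pascal: ~nothing;
                       the objection above: possibly everything -- consider Job)
    """
    ev_believe = p_god * win - (1 - p_god) * loss_if_believe
    ev_disbelieve = -p_god * win  # disbelief forfeits the great payoff
    return ev_believe, ev_disbelieve

# Under Pascal's premises -- tiny probability, huge prize, negligible cost --
# belief dominates, exactly as the wager claims.
b, d = wager(p_god=0.001, win=1e9, loss_if_believe=1.0)
print(b > d)  # True
```

The objection in the paragraph above amounts to saying that `loss_if_believe` is not negligible and that disbelief has historically paid worldly rewards, so it is the cells of the payoff table, not the arithmetic, that decide the argument.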

Pascal’s tragedy originated in his embrace of Jansenism, which introduced Calvinist doctrines and attitudes into the Catholic world of seventeenth-century France and Western Europe. The Jansenists had revived the Manichean dualism, which characterized humanity as divided between good and evil. For the Jansenists, every soul was a battleground, its fate determined by whichever conflicting impulse was strongest. The Jansenists insisted, therefore, that virtue must be imposed on rebellious and perverse human beings. Only an exacting and solemn authority could direct individuals toward rectitude and purity. The Jansenists also prescribed such discipline for the churches they controlled and the local governments in France over which they exercised some influence. The flesh must be compelled to yield to the spirit. It takes no great leap of historical imagination to see that the Jansenist admiration for order, management, restraint, bureaucracy, and surveillance could be made to attend the requirements of the totalitarian state. The Jansenists determined to administer the “greatness and misery of man” (“grandeur et misère de l’homme”), which was the foremost theme of Pascal’s work, through compulsion.

Jansenism, asserted Friedrich Heer, endowed Pascal with “an enormous capacity for hatred and over-simplification.” Stressing the enthusiasm and certainty that governed the residents of Port Royal, the spiritual and theological capital of the Jansenist movement, Heer doubtless exaggerates the charges against Pascal. He ignores not the complexity of Pascal’s thought, but the complexity of the man himself. Pascal was both austere and worldly, both rational and intuitive. When he partitioned the mind into l’esprit géométrique and l’esprit de finesse, he was mapping the course that a single mind—his own—could take. Pascal may have felt the zeal of a convert, but he never seems to have acquired the conviction that he possessed absolute truth or a sure method by which to attain it. For Pascal, God alone provided the antidote to the twin maladies of doubt and insecurity.

To alleviate his own misgivings, Pascal set out to compose a systematic defense of Christianity. The Pensées contain the remnants of the greater work that he never lived to complete. If these fragments and aphorisms suggest the character of the volume that Pascal meant to write, then it seems the Pensées would have been less an apologia for Christianity than the spiritual autobiography of a thinker attempting to explain to his intellect how his faith was possible.

In the Pensées, Pascal intimated that skepticism may transcend reason, and the doubts that reason awakens, leading not to certainty but to affirmation. By acknowledging the limits of reason, the thoughtful man, he hoped, could accept the mystery of life without also yielding to its absurdity. “The last proceeding of reason,” he wrote, “is to recognize that there is an infinity of things which are beyond it. It is but feeble if it does not see so far as to know this. But if natural things are beyond it, what will be said of supernatural?” Yet, perhaps at this moment of vital insight, Pascal exhibited some of the odium that Friedrich Heer had detected in his thought and character. Like many intensely passionate and astute natures, Pascal disdained the society in which he lived—a disdain that reinforced his displeasure with his fellow human beings and, at times, with life itself. Most men, he assumed, were intellectually lazy and emotionally tepid. Desultory, incurious, and stupid, they were incapable of profound thought, searching doubt, or vibrant faith. The majority preferred not to bother about any subject, whether intellectual or theological, that would jolt them out of their passivity, lassitude, and indifference. Pascal’s disillusioned analysis of human nature may, as Heer suggests, have issued from the Jansenist view that human beings are both helpless and degraded. He could not avoid exposing the rancor, the insincerity, the conceit, the dishonesty, the self-deception, the cowardice, and the pettiness that circumscribed and disfigured the lives of most ordinary men, and made him despise them.

For Pascal, as for Kierkegaard and other, later existentialist philosophers and theologians, unending dread may well have been the cost of existence. “The eternal silence of these infinite spaces frightens me,” he proclaimed. There is at times the echo of a terrible nihilism that reverberates through the otherwise silent spaces of Pascal’s infinite universe, as he gazed into the abyss. T. S. Eliot wrote that Pascal’s despair is “perfectly objective,” corresponding “exactly to the facts” and so “cannot be dismissed as mental disease.” In the end, Pascal concluded, the rational proof of God’s existence, such as Descartes had tried to construct with the ontological argument, was useless and unconvincing to those disinclined to believe. Essential questions about the meaning and purpose of human existence could never be resolved through the application of reason or logic. In fact, for Pascal, they could not be resolved at all. They could only be felt in all of their contradiction and paradox. The experience of such utter confusion and despair alone made faith possible and necessary, but offered no assurance that it would come.

Voltaire judged Pascal to be a restless soul and a sick mind. Pascal agreed, confirming Voltaire’s assessment long before he had rendered it. During his final illness, Pascal often refused the care of his physician, saying: “Sickness is the natural state of Christians.” He believed that human beings had been created to suffer. Misery was the condition of life in this world. His was a hard doctrine.

But to what end did people suffer? What did their suffering accomplish? Did it exalt the spirit? Were they to suffer unto truth or, as was more likely, did they suffer because the flesh was evil and needed to be punished? Pascal had gambled for ultimate stakes. When he rolled the dice, they came up snake eyes, not once, not the last time, but every time. His tragedy, and potentially ours, is that he could discover no purpose in his encounters with creation, his fellow human beings, life itself. Philosophy, science, and reason offered no assurance of truth, and were of little comfort against anguish and hopelessness. Some could even use elaborate rational arguments to defy the will of God and to excuse sin, as had the Jesuits whom Pascal denounced in The Provincial Letters.

Love was equally vain and worthless. It prompted only deception and contempt for truth. Human beings are so flawed and imperfect that they are wretched and detestable. Loving themselves and desiring others to love them, they conceal their transgressions and deformities. Since no one is inviolate, no one deserves to be loved just as, were strict justice to prevail, no one deserves to be saved. Man, Pascal complained:

cannot prevent this object that he loves from being full of faults and wants. He wants to be great, and he sees himself as small. He wants to be happy, and he sees himself miserable. He wants to be perfect, and he sees himself full of imperfections. He wants to be the object of love and esteem among men, and he sees that his faults merit only their hatred and contempt. This embarrassment in which he finds himself produces in him the most unrighteous and criminal passion that can be imagined; for he conceives a mortal enmity against the truth which reproves him, and which convinces him of his faults. He would annihilate it, but, unable to destroy it in its essence, he destroys it as far as possible in his own knowledge and in that of others; that is to say, he devotes all his attention to hiding his faults both from others and from himself, and he cannot endure that others should point them out to him, or that they should see them.

All “disguise, falsehood, and hypocrisy,” men are ignorant, brazen, and delusional. Preferring lies to truth, they should not be angry at others for pointing out their shortcomings. “It is but right that they should know us for what we are,” Pascal insisted, “and should despise us.”

Elsewhere Pascal acclaimed the dignity of man. He was a reed, but “a thinking reed,” more noble than the insensible universe that would destroy him. But the damage had been done. In the centuries to come, the same revulsion for humanity that Pascal had articulated, the same regimentation and tyranny that the Jansenists had endorsed, transformed life on earth into a living hell. In the early twentieth century, the Roman Catholic philosopher Gabriel Marcel came face to face with the tragedy of the human condition. Shattered by his experiences in the Great War, during which he had served with the French Red Cross identifying the dead and accounting for the missing, Marcel sought an alternative to grief and desolation.

He contended that in the modern world a person was no longer a person, but “an agglomeration of functions.” According to this functional definition, human beings were valued solely for the work they did and the goods they produced. Death became “objectively and functionally, the scrapping of what has ceased to be of use and must be written off as a total loss.” Such a vision of life deprived people of their spirituality and their faith, and robbed them of any joy that they might feel. Consumed by rancor, malice, and ingratitude, they suffered an “intolerable unease,” as they descended into the void that engulfed them.

Love was the answer. If people could overcome selfishness and egocentricity, if they could love one another, Marcel was confident that they could fulfill themselves as human beings. Such involvement with, and such fidelity to, others afforded a glimpse of the transcendent and was, in Marcel’s view, the most persuasive argument for the existence of God. Faith consoled and inspired the downtrodden, the persecuted, the oppressed, and the brokenhearted. It cultivated and enhanced all human relationships. For if people refused any longer to treat others merely as objects performing a function, if they came at last to recognize that all persons, however deficient, imperfect, errant, or sinful, mattered to God, then those persons were also more likely to matter to them.

I have come to the conclusion that each of us is capable of doing the right thing or the wrong thing at any one time. Your ratio of right decisions to wrong decisions shows the type of person you are, and whether or not your life will be successful (as in preventing controllable bad things from happening to you).


The latest sports controversy

Michael Smith first wrote:

“I say put mental health first, because if you don’t, then you’re not going to enjoy your sport and you’re not going to succeed as much as you want to. It’s OK sometimes to even sit out the big competitions to focus on yourself because it shows how strong of a competitor and a person that you really are, rather than just battle through it.”

Says Simone Biles.

What I’m about to say may not be very popular, but if you have read my stuff for any period of time, you know I tend to say what I am thinking.

I feel sorry for her, but more sorry for her teammates. Her statement just seems to be the antithesis of the traditional American spirit.

Biles is not a kid; she is 24. She is an adult. She indicated in an interview last night that she was feeling fear: fear of injury, fear of making a mistake, fear of letting her teammates down.

So, she didn’t saddle up. Her mental toughness is gone. I’ve seen this before in people; they become so paralyzed with fear that they can’t even act. It is a form of PTSD, or the old version known as “shell shock” – but the Biles situation isn’t a life-threatening one. This is a sport.

But it isn’t just Biles – maybe she is the most famous example of it, but I have known young adults not to show up for work, calling out sick and claiming they had such a stressful week that they needed a “mental health day.”

Somehow I can’t imagine a Chinese or Russian athlete walking away from a competition. And you know, those CIA and U.S. Army advertisements celebrating mental weakness and wokeness as desirable characteristics for intelligence employees and soldiers didn’t fill me with confidence.

Biles should not be disparaged for doing what she thought she needed to do for herself, but she damn sure should not be celebrated for it either. She let down her team and an entire country.

I am left to wonder if this is an example of the new America in the hands of the Biles generation: that they just don’t think they should “battle through” when things get tough.

May God have mercy on our souls if that is what we have wrought.

Then Smith wrote:

Many have criticized my comments about Simone Biles. That’s fair – but I have a personal reason for my opinion.

In the late ’90s, the business I was part of failed. I was unemployed and dead broke, loaded with debt, with a wife and three kids to feed and literally no food in the house. I can imagine that kind of stress is at least equivalent to what Biles feels. Yet I didn’t quit.

I couldn’t quit. My family depended on me.

I couldn’t find a job in my field at a comparable level, so I turned to skills I had learned working in construction during college, doing anything I could to make money to feed my kids. I didn’t give up, and I wasn’t embarrassed to take any job I could find.

I clawed my way back over the past 30 years.

That’s real life. I know other people share similar stories. Rare are people who have never faced setbacks in life. Some buckle down and get things done; some people quit – on their families, their teams and themselves.

That has real and immediate consequences, so people will have to pardon me if I am harsh toward an athlete who quits for “mental health” reasons. She has been supported all her life by other people because she can do things nobody else can – and now she can’t (or won’t) do those things.

I don’t know her, and I have said she has the right to do whatever she wants to do and for whatever reason (or no reason at all), but again, she doesn’t just get to escape the consequences – people forming opinions of her, or the real impact on her family and her teammates.

We all are entitled to our opinions, but now that she is out of the individual competition as well, her legacy will forever be that she quit.

She doesn’t deserve denigration – but by the same token, she sure as hell shouldn’t be celebrated for quitting. Many are celebrating her for “living her truth” and “taking care of herself.”

I can’t do it. I won’t do it. I’m sorry for her problems, but she performs in a sport that is basically entertainment, as all sports are to one degree or another. All sports, especially the Olympics, are luxury appendages of a prosperous world. They are nice, but not necessary, so maybe this doesn’t deserve the attention it is getting.

Whether Simone Biles ever competes again will not change my world one iota. The fact remains that the behavior of elite athletes in several sports is making me not care about things I once enjoyed and supported (my wife, daughter and I volunteered and worked in the 2002 Salt Lake Games).

A friend once told me that society will forgive anything except going broke. The fact is that I’ll forever carry the stigma of being broke and bankrupt, but I will never be called a quitter.


The real purpose of today’s liberalism

Nathaniel Blake:

They want to make your life worse. They are the various diversity, equity, and inclusion activists and apparatchiks whose obsessions with race, sex, and gender now govern much of American life. They are the nation’s scolds, afraid that someone, somewhere, is having fun in a way that offends the ever-shifting demands of diversity and inclusion.

The latest victims of their killjoy spree are in Virginia, where prom kings and queens and daddy-daughter dances are about to be banned in public schools. By law, all Virginia public schools are required to implement policies that are “consistent with but may be more comprehensive than the model policies developed by the Department of Education.” These model policies are a disaster, requiring schools to “accept a student’s assertion of their gender identity without requiring any particular substantiating evidence, including diagnosis, treatment, or legal documents.”

Thus, on nothing more than a child’s say-so, schools must allow males into female locker rooms and showers, and house boys with girls on overnight trips. School employees are even told to keep a student’s steps toward transition secret from parents deemed insufficiently “supportive,” and to consider calling Child Protective Services on them. But the malice of the trans-kids movement may be most evident in the smaller cruelties of prohibiting prom queens and daddy-daughter dances.

This directive is meant to ensure the inclusion of students who identify as transgender, but it is an inclusion by elimination, achieved by banning sex-specific celebrations and honors. Any sex-specific distinctions, it is feared, might hurt the feelings of those with gender dysphoria.

But by this logic, even a generic gender-neutral parent-child dance should be cut for making children with dead, absent, or terrible parents feel left out. Indeed, dances in general should be cancelled because they might make socially awkward or physically disabled students feel bad. And so on the reasoning goes, cancelling everything that might make someone, somewhere, feel excluded.

This mindset allows misery to take happiness hostage, and it is particularly pernicious for sex and gender. We are embodied, and the reality of biological sex is fundamental to our being. It is also essential to human existence, for new persons are conceived through the union of male and female. That some people are uncomfortable with the sexed reality of physical embodiment is tragic (that cultural and political leaders are trying to instill and encourage such discomfort is wicked), but that is no reason to require everyone else to officially ignore the basic experiential reality of being male and female.

Yet this is precisely what Virginia is doing to the children in its public schools. The cancellations of dances and other events are emblematic of a deeper erasure of identity, in which young men and women have the sexed reality of bodily existence officially erased in the name of inclusion. We are born male and female, but Virginia has decided that helping children grow into men and women is wrong, and that fundamental relational identities such as “father” and “daughter” must be publicly eliminated.

The irony in this is that those who preach most fervently about celebrating diversity seem terrified of acknowledging human differences. If, for instance, they really believe that “trans women are women,” then what is diverse about them? Only by acknowledging the difference between biological women and biologically male trans women can there be any diversity to celebrate. But admitting these differences threatens the entire ideological project.

The difficulty the ideologues face is that their mantra of diversity, equity, and inclusion is borderline incoherent. Diversity means difference, which intrinsically imperils equity and inclusion, for differences are by nature unequal in some way and exclusive of something. To be one thing is to not be another. To prefer one thing is to disfavor another. And so on.

Identifying and responding to important differences are always fundamental social and political tasks, as is finding commonalities that might unite us. These duties require wisdom and discernment, and the advocates and acolytes of diversity, equity, and inclusion are not up to the job. They efface essential differences and magnify those that they should minimize. And they lack a unifying principle that could unite different people and groups. Hence their flailing attempts to reconcile the tensions inherent in their sloganeering.

The Virginia public school system is a case in point, with Loudoun County alone providing a plethora of egregious examples that have provoked a major parental backlash even in this wealthy, solid-blue area. For instance, Loudoun administrators have illegally punished a teacher for stating the truth about biological sex, and they have become racial obsessives who have spent tens of thousands on critical race theory training that denounces colorblindness.

But the distinction between male and female is literally fundamental to human existence, whereas the construction of racial identities has little to no basis in biological reality. Effacing the former while emphasizing the latter is the opposite of what educators should be doing.

This folly arises because they are trying to remake society with a half-baked, incoherent ideology that is enforced by the shifting demands of the woke internet mob. And those people will always find something else to be unhappy about and another ideological envelope to push. Thus, the same ideology that cancels daddy-daughter dances in Virginia is putting men into female prisons in California, with predictably horrifying results.

These policies are the products of unhappy people who have realized that combining ideology and claims of victimhood gives them power, which they can use to hurt others. This is why so much of our public discourse, especially from the Left, amounts to little more than accusations that “the thing you like is bad, and you should feel bad for liking it.”

The truth is that a daddy-daughter dance hurts no one, except those already determined to be miserable. Banning the dance helps no one, except for those eager to punish those who are happy.

The past isn’t what it used to be

Jonah Goldberg:

Joe Biden loves to say, “America is back.” He used it to announce his incoming national security team last November. “It’s a team that reflects the fact that America is back, ready to lead the world, not retreat from it.”

Last February, there were a slew of headlines about his first big foreign policy speech along the lines of this from the Associated Press:

Biden declares ‘America is back’ in welcome words to allies.

In that speech, Biden told diplomats at the State Department, “when you speak, you speak for me. And so—so [this] is the message I want the world to hear today: America is back. America is back. Diplomacy is back at the center of our foreign policy.”

That phrase—as well as those Biden-tells-allies-America-is-back headlines—keeps coming to mind every time I read about the inexorable advance of the Taliban in Afghanistan. For the Afghans, America was “here,” and now it’s leaving. I wonder how “America is back” must sound to the people feeling abandoned by America in general, and the guy saying it in particular.

I’m not trying to pull on heartstrings, so I won’t trot out the girls who will be thrown back into a kind of domestic bondage or the translators and aides who rightly fear mass executions may be heading their way. All I’ll say is that their plight does pull on my heartstrings.

But let’s get back to this “America is back” stuff. For Biden, it seems to have two meanings. One is his narrow argument that we are rejoining all of the multilateral partnerships and alliances that Trump pulled out of or denigrated. Fair enough. I can’t say this fills me with joy, even though I disliked most of that stuff from Trump (the two obvious exceptions being getting out of the Paris Accord and the Iran deal). I think diplomacy often gets a bad rap. But I also think diplomacy is often seen as an end rather than a means. We want diplomats to accomplish things, not to get along with each other just for the sake of getting along. For too long, Democrats have cottoned to a foreign policy that says it’s better to be wrong in a big group than to be right alone.

But there’s another meaning to “America is back.” It’s an unsubtle dig at Trump and a subtle bit of liberal nostalgia all at once. It’s kind of a progressive version of “Make America Great Again.” It rests on the assumption that one group of liberal politicians speaks for the real America, and now that those politicians are back in power, the real America is back, too. But the problem is, there is no one real America. There are some 330 million Americans and they, collectively and individually, cannot be shoe-horned into a single vision regardless of what labels you yoke to the effort.

Liberals were right to point out that there was a lot of coding in “Make America Great Again.” I think they sometimes overthought what Trump meant by it, because I don’t think he put a lot of thought into it. He heard a slogan, liked the sound of it, and turned it into a rallying cry—just as he did with “America first,” “silent majority,” and “fake news.” Still, when, exactly, was America great in Trump’s vision? The consensus seems to be the 1950s, a time when a lot of good things were certainly happening, but a lot of bad things were going on that we wouldn’t want to restore.

Liberal nostalgia is a funny thing. Conservative nostalgia I understand, because I’m a conservative and I’m prone to nostalgia (even though nostalgia can be a corrupting thing, which is why Robert Nisbet called it “the rust of memory”). Conservatives tend to be nostalgic for how they think people lived. Liberals tend to be nostalgic about times when they had power.

Consider the New Deal. Being nostalgic for the New Deal certainly isn’t about how people lived, not primarily. America was in a deep depression throughout the New Deal. Breadlines and men holding signs saying “will work for food” are probably the most iconic images of that time. Who wants to return to that? And yet, liberals will not banish it from their collective memory as something like the high water mark of American history. That’s why they keep pushing for new New Deals and slapping the label on new programs that consist of spending money we don’t have.

The only thing that competes with the New Deal in the liberal imagination is the 1960s in general and the civil rights movement and Great Society in particular. I’m reminded of a Washington Post interview with Howard Dean in 2003 in which he explained his nostalgia for that era:

“Medicare had passed. Head Start had passed. The Civil Rights Act, the Voting Rights Act, the first African American justice [was appointed to] the United States Supreme Court. We felt like we were all in it together, that we all had responsibility for this country. … [We felt] that if one person was left behind, then America wasn’t as strong or as good as it could be or as it should be. That’s the kind of country that I want back.”

“We felt the possibilities were unlimited then,” he continued. “We were making such enormous progress. It resonates with a lot of people my age. People my age really felt that way.”

That’s not how people his age felt back then. It’s how a certain group of liberals felt because they were winning. The 1960s and the 1930s were times of massive civic strife marked by race riots, domestic bombings, assassinations, and anti-war protests. But liberals were in charge, felt like history was on their side, and they had a lot of “wins” as Donald Trump might say.

The current obsession with the “new Jim Crow” seems like a perfect example of how liberal nostalgia distorts and corrupts. As I write today, I’m not a fan of the arguments coming out of the GOP or the Democrats. But the simple fact is that we don’t live in the 1960s—or 1890s—anymore. Whatever the future holds, it will not be a replay of that past. And that’s overwhelmingly for the good.

I always find it funny that the same people who ridicule “excessive” fidelity to the timeless principles of the Founding as archaic are often also the same ones who worship at the altar of the New Deal and the Great Society. The Founders didn’t know about mobile phones and the internet! Well, neither did the New Dealers or the Johnson administration. But that doesn’t matter because the part they really liked and yearn to restore is timeless: people in Washington deciding how Americans everywhere else should live and work.

I don’t know how the White House’s new collaboration with Facebook to combat “misinformation” will actually play out and I’m not fully up to speed on what the administration really intends to do. Though—given press secretary Jen Psaki’s comment that “you shouldn’t be banned from one platform and not others,” etc.—it doesn’t sound good. But I think David French’s gut check is exactly right: “Moderation is a platform decision, not a White House decision. Trying to force more moderation is as constitutionally problematic as trying to force less moderation.”

The principle at the heart of that speaks not just to social media regulation, but to all of the competing efforts from right and left to throw aside the rules in a thirsty search to rule.

Listeners of The Remnant know that I often find myself suffering from a peculiar form of nostalgia, for want of a better word. The title of my podcast comes from an essay by Albert Jay Nock, who was one of the “superfluous men” of the long Progressive Era that stretched—with a brief, and partial, parenthesis under the sainted Calvin Coolidge—from the end of the Teddy Roosevelt administration to the end of the Franklin Roosevelt administration. I don’t agree with Nock, or the other superfluous men, on everything—they were a diverse lot. But the thing that connected them all—hence their superfluousness—was how they felt that they were standing on the sidelines as the major combatants at home and abroad competed over how best to be wrong, how to stir up populist anger for their agendas, and, most of all, how to use the state to impose their vision on the “masses.” The remnant was the sliver that wanted no part of any of it.

“Taking his inspiration from those Russians who seemed superfluous to their autocratic nineteenth-century society and sought inspiration in the private sphere, even to the point of writing largely for their desk drawers,” writes Robert Crunden, Nock’s biographer, “Nock made the essential point: ransack the past for your values, establish a coherent worldview, depend neither on society nor on government insofar as circumstances permitted, keep your tastes simple and inexpensive, and do what you have to do to remain true to yourself.”

Or as the great superfluous man of the Soviet Empire, Alexander Solzhenitsyn, put it, “You can resolve to live your life with integrity. Let your credo be this: Let the lie come into the world, let it even triumph. But not through me.”

I share this—yet again—as a kind of omnibus response to all of my critics these days and the ones yet to come. I’m lucky that I don’t have to write for my desk drawer, though I am reliably informed — daily — that many people would prefer I did. But I am going to continue to write for the remnant as I see it and those I hope to convince to swell its ranks, and not for those who think that to be against what “they” are doing I must endorse what “we” are doing. Our politics may be a binary system of competing asininities these days, but just because one side of a coin is wrong, that doesn’t mean the other side is right.

“A little rebellion, now and then …”

Selwyn Duke:

It was said during our Mideast military adventures, and has been considered a truism of war, that you can’t really win a conflict without “boots on the ground.” For it’s difficult to completely subdue a people from afar. It may not be too different with battles for civilization.

I stated in 2012, addressing a long-developing reality, that the culture war was over as the Left had achieved social dominance. “What is occurring now is a pacification effort,” I wrote — one designed to stamp out the “conservative” guerrilla-group diehards.

Other than its intensification, the only thing that has changed about this effort in the last decade is that it has a new name: “cancel culture.” With GoogTwitFace (Big Tech) having upped its bias and dropped its mask and corporate America joining academia, the media and entertainment on the Dark Side, these entities act as a malevolent monolith silencing dissident voices from Maine to Maui. But it would be naïve to think the Left, which craves power and wants total control, will be satisfied with its current soft authoritarianism.

This brings us to two developments that could cause the raising of eyebrows if not militias. Consider: If you heard about a Third World country in which the leadership was purging the military of political opponents, would you assume it was just an exercise in ideological nepotism? Or would you suppose the leaders wanted a military of devoted fellow travelers who would, when asked, unflinchingly turn their guns on domestic opponents of the regime who couldn’t be cowed by other methods?

Now, should the assumption be different just because the military purge occurs in a developed country?

Just such an event has been taking place in the U.S. for at least a decade. It began under Barack Obama, who not only tried to socially re-engineer the military but also engaged in a widely noted purge of top military brass.

President Trump didn’t (couldn’t?) do enough to reverse this process, and now it has been kicked into high gear. Having largely corrupted the armed forces’ upper echelons, the Left now aims for rank-and-file ideological conformity. Thus do we hear about how we must stamp out the imaginary boogeyman du jour, “white supremacy,” from the military. Preposterously, Secretary of Defense Lloyd Austin even issued, in early February, a 60-day stand-down order to address the alleged internal threat it poses.

Of course, white supremacists are about as common as straight, happily married women at a NOW convention; why, I’m well into middle age and I don’t know that I’ve ever met one. This isn’t to say there aren’t liberals delusional enough to believe the threat is real; that they’re detached from reality is partially why leftists are so dangerous.

Yet it’s clear there’s a different motivation among the Machiavellian leftists. It hasn’t escaped the Left’s notice that the military traditionally leaned Republican. Moreover, even if this has changed somewhat, having armed forces that are obedient to the ruling party to the point of wickedness isn’t possible with dissidents in the ranks. (Besides, “fragging” is a real thing.)

So you need a purge. You do this by conjuring up a boogeyman — in our case “white supremacy” — and then characterizing it as a widespread, existential threat. This now means defining Trump support, patriotism, opposition to illegal migration and, really, any deviation from the Left’s agenda at all as reflecting white supremacy.

It’s an old tactic: Portray already persecuted minorities or political opponents as the persecutors so you can leverage even more control over them. It’s how you create your own Enabling Act moment.

Pre-election polls showing that military members favored Joe Biden over Donald Trump indicate how the armed forces have already been partially transformed (this is true even if the polls were manipulated, and some of what they reflect is general societal “leftward” drift). Yet controlling the military is only part of the equation. You must also own the other boots on the ground: the police.

As soon as the talk of dismantling/defunding/“re-imagining” law enforcement began last year, I pointed out that while much of the movement was driven by blind passion, there’s only one rational reason to want to nix the police. “Certain leftists want to eliminate the police,” I wrote June 7, “because they want to become the police.”

Power-mongers attack those whose power they crave. Leftists want centralized control over local police just as they now have control over the intelligence agencies. They especially want this because law enforcement is generally, it appears, even more conservative than the military (its members are older, for one thing).

In this vein, it hasn’t eluded leftists that certain sheriffs are engaging in nullification efforts, having vowed not to enforce some new anti-gun and/or COVID-related laws. Remember that sheriffs are elected by an area’s local population and, with most counties being conservative (Trump won 83 percent of counties, or about 2,600, in 2016), such nullification isn’t surprising; moreover, expect more resistance to radical leftism in a good portion of this 83 percent of the nation.

So while the Left is quickly gaining monolithic federal control by virtue of large population centers that vote (and steal votes) heavily for Democrats, controlling Middle America with its more patriotic police is a different matter.

That is, unless the Left can institute federal police. Ergo, the “re-imagining” of law enforcement.

Once your sheriff is an Antifa/BLM-sympathizing ideologue installed by D.C. (District of Communism) and hailing from 1,000 miles away — with no local community ties — he’ll happily “discipline” the white supremacists lurking around every corner.

If the Left can co-opt the military and police, it will have seized our country’s last two remaining (relatively) “conservative” institutions. It will also have what’s necessary to quash that impediment to total coast-to-coast hegemony: America’s framework and tradition of state and local control.

Leftists know that the Left-Right divide is intensifying and that more “conservative” states — such as Florida, South Dakota and South Carolina — are increasingly beating their own path. They know that increased nullification of federal dictates lies ahead (heck, leftists wrote the book on it with their violation of federal immigration and drug laws). And they know that as their philosophical soulmate Mao put it, “Power grows out of the barrel of a gun.”

There’s no question that certain leftists have thought about using boots on the ground to conclude their pacification effort. Remember that Bill Clinton reportedly once said “I loathe the military” back when leftist protesters were calling Vietnam-era soldiers “baby killers,” and that today’s socialist rabble spew venom at police. But they don’t in principle hate either institution.

And just as they’ve flipped from hating to liking the intelligence services because they now control them, so would they love the military and police — and use them with zeal — upon seizing them.

Also note that so-called “leftism” is not an ideology (how could it be? Its “principles” change continually). Rather, it currently represents movement toward moral disorder. And leftists are morally disordered people, the worst of them being vice-ridden, amoral and driven by base appetites such as power lust. As I wrote in “The Time for Talking with the Left is Long, Long Past,” perhaps the best way to prepare yourself for contending with them is to “pretend you’re dealing with Satan.”

Vanguard leftists are above nothing and beneath contempt. If you read the worst possible intentions into whatever they do, you won’t too often go wrong.

An apparently necessary reminder

Richard M. Ebeling:

The recent string of multiple-victim incidents of gun violence and police shootings of black Americans has once again resulted in renewed calls for restrictions on gun ownership. President Biden has said that executive instructions to various branches of the Federal government will attempt to reduce the frequency and possibility of such violence.

Some of his proposals, however, are merely using the gun control argument as a cover for more government redistributive intervention within the society. Thus, when the White House released a statement on April 7, 2021 detailing its plans in this direction, one of them called for a $5 billion investment over eight years to support “community violence intervention programs,” with a key part of it being “to help connect individuals to job training and job opportunities.” The Department of Health and Human Services will also be directed to “educate” state governments in using Medicaid funding to better subsidize such interventionist projects.

In other words, if only we expand notoriously wasteful and ineffective government job training programs, gun violence magically will be reduced. If only “unemployed” gun-using criminals can be taught a nonviolent job skill, they will stop robbing convenience stores and stop killing people in gang-related drive-by shootings! Plus, once the national mandated minimum wage is raised to $15 an hour, there will be long lines, obviously, of prospective employers eagerly waiting to hire former street thugs with their newly certified government-provided entry-level employment “skills.” Who knew it could be so simple?

But the meat of the Biden gun control policies all centers on assigning various types of firearms to categories that can rationalize greater prohibition of access and ownership. The fact is, however, that the number of Americans thinking the country needs stricter gun controls has been decreasing. According to a recent Gallup opinion survey, in 2018, 67 percent of survey respondents supported more stringent gun laws, but in 2020, that number had fallen to 57 percent — a roughly 15 percent relative decline in those holding this opinion.

And in a survey in early 2021, Gallup reported that of those most concerned about current government gun policy, 42 percent said that current laws are sufficient, 41 percent replied they should be stricter, and 8 percent called for them to be less strict. So 50 percent think that gun regulations should be left as they are or actually reduced. Hardly a clamoring supermajority wanting the government to dramatically weaken a relatively wide right to bear arms. More like the same and usual vocal minority who think that “bad things” can be legislated away by political paternalists given enough governmental power over people’s lives.

Also, according to those queried by Gallup, 42 percent said that they had a gun in their home, 55 percent said they did not, and 3 percent had no opinion. It is not too much of a stretch of the imagination to think that many among the 3 percent who had no opinion in fact might be simply not wanting to admit that they do have one or more firearms in their home. Nor is it likely going very far out on a limb to presume that at least some of those who replied that they do not have a gun in their home probably were not being completely honest, particularly if they are suspicious of government or have a firearm that is not properly licensed in the state in which they live.

But, nonetheless, among those Americans wanting a heavier government hand over gun access and ownership, a good number probably view the Second Amendment and its guarantee of the right of the individual to bear arms as something practically anachronistic. It may seem to be a throwback to those earlier days of the Wild West, when many people, far from the law and order provided by the town sheriff and circuit judge, had to protect their families and land from cattle rustlers and outlaw bands. Such people are wrong.

Locks, bars on windows, and alarm systems are all useful devices to prevent unwanted intruders from making entrance into our homes and places of work. But what happens if an innocent victim is confronted with an invader who succeeds in entering his home, for example, and the safety of his family and possessions is now threatened? What if the invader confronts these innocent occupants and threatens some form of violence, including life-threatening force? What are the victims to do?

Critics of the Second Amendment and private gun ownership never seem to have any reasonable answer. Silent prayer might be suggested, but if this were to be a formal recommendation by the government it might be accused of violating the separation of church and state. No, better to not get the anti-religion lobby on your back, especially if it’s in an election year.

Even in an era promoting “politically correct” notions of equality among the sexes and an infinite number of self-defining genders, it nonetheless remains a fact that on average an adult male tends to be physically stronger than an adult woman, and most especially if there is more than one man confronting a single woman. A good number of years ago, economist Morgan Reynolds wrote a book on the economics of crime. The following is from one of the criminal cases he discussed. It seems that four men broke into a house in Washington, D.C., looking for a man named “Slim.” When the occupant said that he didn’t know where Slim was, they decided to kill him, instead. One of the defendants later testified,

“I got a butcher knife out of the kitchen. We tied him up and led him to the bathroom. And we all stabbed him good. Then, as we started to leave, I heard somebody at the door. Lois [the dead man’s girlfriend] came in…. We took her back to the bathroom and showed her his body. She started to beg, ‘don’t kill me, I ain’t gonna tell nobody. Just don’t kill me.’ She said we all could have sex with her if we wouldn’t kill her. After we finished with her, Jack Bumps told her, ‘I ain’t takin’ no chances. I’m gonna kill you anyway.’ He put a pillow over her head, and we stabbed her till she stopped wiggling. Then we set fire to the sheets in the bedroom and went out to buy us some liquor.”

Would either of these two victims have been saved if the man had had a gun easily reachable by him in the house or if the woman had had a gun in her purse? There is no way of knowing. What is for certain is that neither was any match for the four men who attacked and killed them with a butcher knife. Even Lois’s begging and submitting to sexual violation did not save her. How many people might be saved from physical harm, psychological trauma, or death if they had the means to protect themselves with a firearm?

Equally important, how many people might never have to be confronted with an attack or murder if potential perpetrators were warded off from initiating violence because of the uncertainty that an intended victim might have the means to defend him- or herself from thieves, rapists, and murderers? A gun can be a great equalizer for the weak and the defenseless, especially if an intended victim doesn’t have to waste precious seconds fumbling with the key to a mandatory trigger lock.

But what is an ordinary person to do when he finds out that it is the government that is the perpetrator of violence and aggression against him and his fellow citizens? How do you resist the power of the state? Tens of millions of people were murdered by governments in the 20th century. They were killed because of the language they spoke or the religion they practiced. Or because those in political control classified them as belonging to an “inferior race” or to a “social class” that marked them as an “enemy of the people.” Furthermore, the vast, vast majority of these tens of millions of victims were murdered while offering little or no resistance. Fear, terror, and a sense of complete powerlessness surely have been behind the ability of governments to treat their victims as unresisting lambs brought to the slaughter.

Part of the ability of government to commit these cruel and evil acts has been the inability of the victims to resist because they lacked arms for self-defense. However, when the intended victims have had even limited access to the means of self-defense, it has shocked governments and made them pay a price to continue their brutal work.

Many have been surprised by the lack of resistance by the European Jews who were killed by the millions in the Nazi concentration and death camps during the Second World War. For the most part, with a seemingly peculiar fatalism, they calmly went to their deaths with bullets to the back of the head or in gas chambers. Yet when some of the people were able to gain access to weapons, they did resist, even when they knew the end was most likely to be the same. The following is from historian John Toland’s biography of Adolf Hitler (1992), in reference to the resistance of the Jews in the Warsaw Ghetto in 1943:

“Of the 380,000 Jews crowded into the Warsaw ghetto, all but 70,000 had been deported to the killing centers in an operation devoid of resistance. By this time, however, those left behind had come to the realization that deportation meant death. With this in mind, Jewish political parties within the ghetto finally resolved their differences and banded together to resist further shipments with force . . .

“At three in the morning of April 9, 1943, more than 2,000 Waffen SS infantrymen – accompanied by tanks, flame throwers and dynamite squads – invaded the ghetto, expecting an easy conquest, only to be met by determined fire from 1,500 fighters armed with weapons smuggled into the ghetto over a long period: several light machine guns, hand grenades, a hundred or so rifles and carbines, several hundred pistols and revolvers, and Molotov cocktails. Himmler had expected the action to take three days but by nightfall his forces had to withdraw.

“The one-sided battle continued day after day to the bewilderment of the SS commander, General Jürgen Stroop, who could not understand why ‘this trash and sub-humanity’ refused to abandon a hopeless cause. He reported that, although his men had initially captured ‘considerable numbers of Jews, who are cowards by nature,’ it was becoming more and more difficult. ‘Over and over again new battle groups consisting of twenty or thirty Jewish men, accompanied by a corresponding number of women, kindled new resistance.’ The women, he noted, had the disconcerting habit of suddenly hurling grenades they had hidden in their bloomers . . .

“The Jews, he reported, remained in the burning buildings until the last possible moment before jumping from the upper stories to the street. ‘With their bones broken, they still tried to crawl across the street into buildings that had not yet been set on fire…. Despite the danger of being burned alive the Jews and bandits often preferred to return into the flames rather than risk being caught by us.’ … For exactly four weeks the little Jewish army had held off superior, well-armed forces until almost the last man was killed or wounded.”

In the end the Germans had to commit thousands of military personnel and in fact destroy an entire part of Warsaw to bring the Jewish ghetto resistance to an end.

What if not only the Jewish population but the majority of all the “undesirable” individuals and groups in Germany and the occupied countries of Europe had been armed, with the Nazi government unable to know who had weapons, what types, and with what quantity of ammunition? It would be an interesting study in World War II history to compare private gun ownership in various parts of Europe and the degree and intensity of resistance by the local population to German occupation.

In the early years of the Bolshevik takeover in Russia there were numerous revolts by the peasantry against Communist policies to collectivize the land or seize their crops as in-kind taxes. What made this resistance possible for several years was the fact that in the countryside the vast majority of the rural population owned, and knew how to use, hunting rifles and other weapons of various kinds. In the end, in the face of armed resistance, Lenin had to reverse his 1918 policy of “war communism,” with its near-total collectivization of the Russian economy, and introduce his “New Economic Policy” (NEP) in 1922, restoring small- and medium-sized enterprises to private hands and returning nationalized land to the peasantry. In no other way could he stop the countryside revolts that threatened to overthrow the Marxist regime, or restore some kind of economic rationality to Russian society.

The firearms acquired during the Second World War as part of the partisan movement against the German invasion of the Soviet Union enabled Lithuanian and Ukrainian nationalist guerrillas to continue active, armed resistance against Soviet reoccupation of their countries in the forests of Lithuania and western Ukraine well into the early 1950s. The Soviets also discovered what a determined and armed population could do when they invaded Afghanistan in 1979 and had to withdraw ignominiously ten years later, in 1989, in de facto defeat at the hands of the mujahideen. About 15,000 Soviet military personnel were killed in the conflict, along with an estimated 2 million Afghans.

It is hard to imagine how the people of the 13 American colonies could have ever obtained their independence from Great Britain at the end of the 18th century if the local population had not been “armed and dangerous.” It is worth recalling Patrick Henry’s words in arguing for resistance against British control before the king’s armed forces could disarm the colonists:

“They tell us . . . that we are weak – unable to cope with so formidable an adversary. But when shall we be stronger? . . . Will it be when we are totally disarmed, and when a British guard shall be stationed in every house? … Three million people, armed in the holy cause of liberty . . . are invincible by any force which our enemy can send against us.”

The taking up of arms is a last resort, not a first, against the intrusions and oppressions of government. Once started, revolutions and rebellions can have consequences no one can foretell, and final outcomes are sometimes worse than the grievance against which resistance was first offered. However, there are times, “in the course of human events,” when men must risk the final measure to preserve or restore the liberty that government threatens or has taken away. The likelihood that government will feel secure in undertaking infringements on the freedoms of Americans would be diminished if it knew that any systematic invasion of people’s life, liberty, and property might meet armed resistance by both the victim and those in the surrounding areas who came to his aid because of the concern that their own liberty might be the next to be violated.

Though it may seem harsh and insensitive, when I read the advocates of gun control pointing to incidents of private acts of violence against groups of innocent others, I think to myself:

How many more tens of millions of innocent men, women and children were killed around the world in the last century by governments? And how many of those men, women and children, victims of government-armed violence, might have been saved if their families and neighbors had possessed the right to bear arms against political aggressors? How many men, women and children have been saved because their families have had weapons for self-defense against private violators of life and property? And how many could have been saved from private aggressors if more families had owned guns?

Nor should Americans be intimidated by the argument that virtually all other “civilized” countries either prohibit or severely restrict the ownership and use of firearms in general, and handguns in particular. America has been a free and prosperous land precisely because, as a nation, we have chosen for far longer to follow political and economic avenues different from those followed by other countries around the world.

As a people, we have swum against the tide of collectivism, socialism, and welfare statism to a greater degree, for the most part, than have our Western European cousins. As a result, in many areas of life we have remained freer, especially in our market activities, than they. The fact that other peoples in other lands chose to follow foolish paths leading to disastrous outcomes does not mean that we should follow in their footsteps.

America was born in revolt against the ideas of the “Old World”: the politics of monarchy, the economics of mercantilism, and the culture of hereditary class and caste. America heralded the politics of representative, constitutional government, the economics of the free market, and the culture of individualism under equality before the law. It made America great.

If in more recent times there has been an “American disease,” it has been our all-too-willing receptivity to the European virus of political paternalism, welfare redistribution, economic regulation and planning, and the passive acceptance of government control over social affairs.

We need not and indeed should not fall victim to one more of the collectivist ailments practiced more intensely in other parts of the world: the disarming of the people under the dangerous notion that the private citizenry cannot be trusted and should not be allowed to have the means of self-defense against potential private and political aggressors in society.

Let us continue to stand apart and not fall prey to the false idea that our European cousins are somehow more enlightened or advanced than we are on matters of gun ownership and control. They are not. Terrorist attacks in a number of European countries over the last few years demonstrate that merely banning or restricting gun ownership does not deter those determined to commit violent acts; they acquire the needed firearms anyway, or find ways to carry out mass murder with knives, axes, homemade bombs, or motor vehicles that run down dozens of people on crowded city streets.

Instead let us remember and stay loyal to the sentiment of James Madison, the father of the U.S. Constitution, who praised his fellow countrymen when he said, “Americans [have] the right and advantage of being armed – unlike citizens of other countries whose governments are afraid to trust the people with arms.”

Let us remain worthy of Madison’s confidence in the American people and defend the Second Amendment of the Constitution upon which part of that confidence was based.

Doomsday rock

An online discussion about music of the 1980s included a few references to songs about that fun topic of the imminent nuclear holocaust.

It should be pointed out that popular music has on occasion used social unrest, up to and including the Apocalypse, as a theme or inspiration …

… even before the ’80s.

The oeuvre of Doom Rock really got going in the 1980s, though, during the presidential terms of Ronald Reagan, who was simultaneously viewed by the American left as both stupid and evil (which you’d think would be incompatible concepts, but logic has never been a strong suit of political discussions) and doubtlessly bound to blow up the planet.

So because musical artists are usually left of center and get, shall we say, inspired by (more polite than “ripping off”) others’ works, an entire subgenre of rock was created.

For those who don’t know German:

Social commentary has been a part of popular music at least since the 1960s. This particular musical trend dovetailed with what movie studios and TV networks were producing.

(One thing “Special Bulletin” and “Countdown to Looking Glass” have in common is really bad writing for and acting by those who were supposed to be portraying reporters and TV news anchors. Anyone who has watched coverage of such disasters as the 1989 San Francisco earthquake, 9/11 or severe storm damage knows that professionals do not emote on camera. The only way to get effective journalist portrayals is to use actual journalists, such as Eric Sevareid in his brief appearance in “Countdown to Looking Glass” and Sander Vanocur and Bree Walker in 1994’s “Without Warning.”)

You may notice, by the way, that the nuclear holocaust predicted for the 1980s did not take place. For that matter, within three years of Reagan’s leaving office the Soviet Union was no more and the entire Warsaw Pact collapsed. But defeating your enemy and being on the right side of history apparently doesn’t make good pop music.