Category: History


I have noted in this space numerous examples about how Hollywood’s lack of creativity leads to lame remakes.

The latest example comes from Stephen Green:

Hours after news broke that NBCUniversal will re-reboot “Battlestar Galactica,” an idea colder than a Cylon’s heart, we learn that ’60s sitcom “Hogan’s Heroes” is getting the sequel treatment from series co-creator Al Ruddy.

The original premise was fun, in a lighthearted ’60s way. Despite valid concerns of “Too soon!” and genuine Nazi atrocities committed mostly against Soviet prisoners, the show worked well enough to run for 168 primetime episodes — and win a bunch of awards in the process. I grew up watching the reruns almost endlessly. Colonel Robert Hogan (Bob Crane) and his heroes were, quickly described, a white guy (Hogan), a black guy (Ivan Dixon as Kinchloe), a nerdy guy (Larry Hovis as Carter), a British guy (Richard Dawson as Newkirk), and a French guy (Robert Clary as LeBeau). Together they derailed German munitions trains, snuck spies or vital information to safety, and generally aided the Allied cause from one of the least likely places imaginable.

The two main German characters, camp commander Colonel Klink (Werner Klemperer, a German-born Jewish actor!) and oafish guard Sergeant Schultz (John Banner), were played for laughs. They were presented as not-terribly-competent German soldiers trying to do their duty as best they could, but mostly trying not to get on the wrong side of any actual Nazis. The only regular Nazi character, Howard Caine’s Major Hochstetter, appeared in maybe a third of the shows, and was outsmarted by Hogan and his crew at every turn.

The ’60s being the ’60s, there was of course Klink’s improbably attractive secretary, Hilda (or was it Gretchen?), played by Sigrid Valdis.

Hilda was one of Klink’s secretaries.

Helga was the other. Bob Crane, who played Hogan, and Sigrid Valdis, who played Hilda, married during the series’ last season.

Like Mel Brooks’s “Get Smart,” which aired during the same years, “Hogan’s Heroes” was really a spy spoof — a genre which flourished in the years after Sean Connery made James Bond into a box office star.

So what about the new show? Well, we don’t know much yet. We do know not to call it a reboot, because it isn’t. In the new show the descendants of the original heroes are scattered all over the world in the present day, but somehow wind up together on a global treasure hunt.

Is this supposed to be “Hogan’s Heroes” or …?

Hell, you’re probably going to be disappointed no matter what. Because as near as I can tell, the new show is the flimsiest excuse for a sequel since “Return to the Blue Lagoon.” Other than featuring an international cast of various accents and colors (plus various sexualities, sexes, and at least three different genders, I’d wager), the new “Hogan’s” has about as much to do with the old “Hogan’s” as Long Island Iced Tea has in common with iced tea.

The new show isn’t a cynical attempt at rebooting a classic. It isn’t even a cynical attempt at making a sequel. The new “Hogan’s Heroes” seems more like a cynical attempt at stretching a beloved brand thin enough to cover something almost entirely unrelated. Boomers are probably getting too old now to care about this stuff, so I think what’s going on here is an attempt to tug at Gen X nostalgia for the reruns we watched as kids. Sheesh, we couldn’t even get a “Family Ties II: Family Tighter.”

But that’s what passes in Hollywood today for originality, so maybe I’ll give it a look when it comes out. Especially if Hilda’s great-granddaughter turns out to be even half as attractive as she was.

So much for those who thought a sitcom set in a German POW camp couldn’t possibly be redone … assuming it is redone.

I don’t remember watching when the series was originally on CBS. I did, however, watch it every chance I got when it was in reruns. “Hogan’s Heroes” was inspired by a black comedy movie, “Stalag 17,” also set in a German POW camp, but, as Green notes, with a few 007 touches.

The most notable thing about the series is its casting. The roles of Corporal LeBeau and of every major German character were played by Jewish actors. Robert Clary survived a concentration camp. The family of Werner Klemperer, who played Col. Klink, came to the U.S. in 1935. John Banner, who played Sgt. Schultz, was from what now is Ukraine; he was acting in Switzerland when Germany annexed Austria, and decided that would be a good time to head to the U.S. Leon Askin, who played Gen. Burkhalter, was from Austria. (Banner and Askin were both sergeants in the Army during World War II.) Howard Caine, who played Gestapo Major Hochstetter, was an American.

Klemperer said he would only take the role if the Nazis were portrayed as bumbling idiots. That was what the producers had in mind, except for the evil German characters, who usually ended up dead.

My two favorite episodes were when Sgt. Carter did a more-than-passable imitation of Adolf Hitler …

… and when Hogan’s Heroes, well, ended the war:


Sept. 11, 2001 started out as a beautiful day, in Wisconsin, New York City and Washington, D.C.

I remember almost everything about the entire day. Sept. 11, 2001 is to my generation what Nov. 22, 1963 was to my parents and Dec. 7, 1941 was to my grandparents.

I had dropped off our oldest son, Michael, at Ripon Children’s Learning Center. As I was coming out, the mother of one of Michael’s group told me to find a good radio station; she had heard as she was getting out with her son that a plane had hit the World Trade Center.

I got in my car and turned on the radio in time to hear, seemingly live, a plane hit the WTC. But it wasn’t the first plane; it was the second plane hitting the other tower.

As you can imagine, my drive to Fond du Lac took unusually long that day. I tried to call Jannan, who was working at Ripon College, but she didn’t answer because she was in a meeting. I had been at Marian University as their PR director for just a couple months, so I didn’t know for sure who the media might want to talk to, but once I got there I found a couple professors and called KFIZ and WFDL in Fond du Lac and set up live interviews.

The entire day was like reading a novel, except that there was no novel to put down and no nightmare from which to wake up. A third plane hit the Pentagon? A fourth plane crashed somewhere else? The government was grounding every plane in the country and closing every airport?

I had a TV in my office, and later that morning I heard that one of the towers had collapsed. So as I was talking to Jannan on the phone, NBC showed a tower collapsing, and I assumed that was video of the first tower collapse. But it wasn’t; it was the second tower collapse, and that was the second time that replay-but-it’s-not thing had happened that day.

Marian’s president and my boss (a native of a Queens neighborhood who grew up with many firefighter and police officer families, and who by the way had a personality similar to Rudy Giuliani) had a brief discussion about whether or not to cancel afternoon or evening classes, but they decided (correctly) to hold classes as scheduled. The obvious reasons were (1) that we had more than 1,000 students on campus, and what were they going to do if they didn’t have classes, and (2) it was certainly more appropriate to have our professors leading a discussion over what had happened than anything else that could have been done.

I was at Marian until after 7 p.m. I’m sure Marian had a memorial service, but I don’t remember it. While I was in Fond du Lac, our church was having a memorial service with our new rector (who hadn’t officially started yet) and our interim priest. I was in a long line at a gas station, getting gas because the yellow low fuel light on my car was on, not because of panic over gas prices, although I recall that one Fond du Lac gas station had increased their prices that day to the ridiculous $2.299 per gallon. (I think my gas was around $1.50 a gallon that day.)

Two things I remember about that specific day: It was an absolutely spectacular day. But when the sun set, it seemed really, really dark, as if there was no light at all outside, from stars, streetlights or anything else.

For the next few days, since Michael was at the TV-watching age, we would watch the ongoing 9/11 coverage in our kitchen while Michael was watching the 1-year-old-appropriate stuff or videos in our living room. That Sunday, one of the people who was at church was Adrian Karsten of ESPN. He was supposed to be at a football game working for ESPN, of course, but there was no college football Saturday (though high school football was played that Friday night), and there was no NFL football Sunday. Our organist played “God Bless America” after Mass, and I recall Adrian clapping with tears down his face; I believe he knew some people who had died or been injured.

Later that day was Marian’s Heritage Festival of the Arts. We had record attendance since there was nothing going on, it was another beautiful day, and I’m guessing after five consecutive days of nonstop 9/11 coverage, people wanted to get out of their houses.

In the decade since then, a comment of New York City Mayor Rudy Giuliani has stuck in my head. He was asked a year or so later whether the U.S. was more or less safe since 9/11, and I believe his answer was that we were more safe because we knew more than on Sept. 10, 2001. That and the fact that we haven’t been subject to another major terrorist attack since then is the good news.

Osama bin Laden (who I hope is enjoying Na’ar, Islam’s hell) and others in Al Qaeda apparently thought that the U.S. (despite the fact that citizens from more than 90 countries died on 9/11) would be intimidated by the 9/11 attacks and cower on this side of the Atlantic Ocean, allowing Al Qaeda to operate with impunity in the Middle East and elsewhere. (Bin Laden is no longer available for comment.) If you asked an American who paid even the slightest attention to world affairs where a terrorist attack would be most likely before 9/11, that American would have replied either “New York,” the world’s financial capital, or “Washington,” the center of the government that dominates the free world. A terrorist attack farther into the U.S., even in a much smaller area than New York or Washington, would have delivered a more chilling message, that nowhere in the U.S. was safe. Al Qaeda didn’t think to do that, or couldn’t do that. The rest of the Middle East also did not turn on the U.S. or on Israel (more so than already is the case with Israel), as bin Laden apparently expected.

The bad news is all of the other changes that have taken place that are not for the better. Bloomberg Businessweek asks:

So was it worth it? Has the money spent by the U.S. to protect itself from terrorism been a sound investment? If the benchmark is the absence of another attack on the American homeland, then the answer is indisputably yes. For the first few years after Sept. 11, there was political near-unanimity that this was all that mattered. In 2005, after the bombings of the London subway system, President Bush sought to reassure Americans by declaring that “we’re spending unprecedented resources to protect our nation.” Any expenditure in the name of fighting terrorism was justified.

A decade later, though, it’s clear this approach is no longer sustainable. Even if the U.S. is a safer nation than it was on Sept. 11, it’s a stretch to say that it’s a stronger one. And in retrospect, the threat posed by terrorism may have been significantly less daunting than Western publics and policymakers imagined it to be. …

Politicians and pundits frequently said that al Qaeda posed an “existential threat” to the U.S. But governments can’t defend against existential threats—they can only overspend against them. And national intelligence was very late in understanding al Qaeda’s true capabilities. At its peak, al Qaeda’s ranks of hardened operatives numbered in the low hundreds—and that was before the U.S. and its allies launched a global military campaign to dismantle the network. “We made some bad assumptions right after Sept. 11 that shaped how we approached the war on terror,” says Brian Fishman, a counterterrorism research fellow at the New America Foundation. “We thought al Qaeda would run over the Middle East—they were going to take over governments and control armies. In hindsight, it’s clear that was never going to be the case. Al Qaeda was not as good as we gave them credit for.”

Yet for a decade, the government’s approach to counterterrorism has been premised in part on the idea that not only would al Qaeda attack inside the U.S. again, but its next strike would be even bigger—possibly involving unconventional weapons or even a nuclear bomb. Washington has appropriated tens of billions trying to protect against every conceivable kind of attack, no matter the scale or likelihood. To cite one example, the U.S. spends $1 billion a year to defend against domestic attacks involving improvised-explosive devices, the makeshift bombs favored by insurgents in Afghanistan. “In hindsight, the idea that post-Sept. 11 terrorism was different from pre-9/11 terrorism was wrong,” says Brian A. Jackson, a senior physical scientist at RAND. “If you honestly believed the followup to 9/11 would be a nuclear weapon, then for intellectual consistency you had to say, ‘We’ve got to prevent everything.’ We pushed for perfection, and in counterterrorism, that runs up the tab pretty fast.”

Nowhere has that profligacy been more evident than in the area of homeland security. “Things done in haste are not done particularly well,” says Jackson. As Daveed Gartenstein-Ross writes in his new book, Bin Laden’s Legacy, the creation of a homeland security apparatus has been marked by waste, bureaucracy, and cost overruns. Gartenstein-Ross cites the Transportation Security Administration’s rush to hire 60,000 airport screeners after Sept. 11, which was originally budgeted at $104 million; in the end it cost the government $867 million. The homeland security budget has also proved to be a pork barrel bonanza: In perhaps the most egregious example, the Kentucky Charitable Gaming Dept. received $36,000 to prevent terrorists from raising money at bingo halls. “If you look at the past decade and what it’s cost us, I’d say the rate of return on investment has been poor,” Gartenstein-Ross says.

Of course, much of that analysis has the 20/20 vision of hindsight. It is interesting to note as well that, for all the campaign rhetoric from candidate Barack Obama that we needed to change our foreign policy approach, President Obama changed almost nothing, including our Afghanistan and Iraq involvements. It is also interesting to note that the supposed change away from President George W. Bush’s us-or-them foreign policy approach hasn’t changed the world’s view, including particularly the Middle East’s view, of the U.S. Someone years from now will have to determine whether homeland security, military and intelligence improvements prevented Al Qaeda from another 9/11 attack, or if Al Qaeda wasn’t capable of more than just one 9/11-style U.S. attack.

Hindsight makes one realize how much of the 9/11 attacks could have been prevented, or at least their worst effects lessened. The book by two New York Times reporters, 102 Minutes: The Untold Story of the Fight to Survive Inside the Twin Towers, points out that eight years after the 1993 World Trade Center bombing, New York City firefighters and police officers still could not communicate with each other, which led to most of the police and fire deaths in the WTC collapses. Even worse, the book revealed that the buildings did not meet New York City fire codes when they were designed because they didn’t have to, since they were under the jurisdiction of the Port Authority of New York and New Jersey. And more than one account shows that, had certain people at the FBI and elsewhere been listened to by their bosses, the 9/11 attacks wouldn’t have caught our intelligence community dumbfounded. (It does not speak well of our government to note that no one appears to have paid any kind of political price for the 9/11 attacks.)

I think, as Bloomberg BusinessWeek argued, our approach to homeland security (a term I loathe) has overdone much and missed other threats. Our approach to airline security — which really seems like the old error of generals’ fighting the previous war — has made air travel worse but not safer. (Unless you truly believe that 84-year-old women and babies are terrorist threats.) The incontrovertible fact is that every 9/11 hijacker fit into one gender, one ethnic group and a similar age range. Only two reasons exist to not profile airline travelers — political correctness and the assumption that anyone is capable of hijacking an airplane, killing the pilots and flying it into a skyscraper or important national building. Meanwhile, while the U.S. spends about $1 billion each year trying to prevent Improvised Explosive Device attacks, what is this country doing about something that would be even more disruptive, yet potentially easier to do — an Electromagnetic Pulse attack, which would fry every computer within the range of the device?

We have at least started to take steps like drilling our own continent’s oil and developing every potential source of electric power, ecofriendly or not, to make us less dependent on Middle East oil. (The Middle East, by the way, supplies only one-fourth of our imported oil. We can become less dependent on Middle East oil; we cannot become less dependent on energy.) But the government’s response to 9/11 has followed like B follows A the approach our culture has taken to risk of any sort, as if covering ourselves in bubblewrap, or even better cowering in our homes, will make the bogeyman go away. Are we really safer because of the Patriot Act?

American politics was quite nasty in the 1990s. For a brief while after 9/11, we had impossible-to-imagine moments like this:

And then within the following year, the political beatings resumed. Bush’s statement, “I ask your continued participation and confidence in the American economy,” was deliberately misconstrued as Bush saying that Americans should go out and shop. Americans were exhorted to sacrifice for a war unlike any war we’ve ever faced by those who wouldn’t have to deal with the sacrifices of, for instance, gas prices far beyond $5 per gallon, or mandatory national service (a bad idea that rears its ugly head in times of anything approaching national crisis), or substantially higher taxes.

Then again, none of this should be a surprise. Other parts of the world hate Americans because we are more economically and politically free than most of the world. We have graduated from using those of different skin color from the majority as slaves, and we have progressed beyond assigning different societal rights to each gender. We tolerate different political views and religions. To the extent the 9/11 masterminds could be considered Muslims at all, they supported — and radical Muslims support — none of the values that are based on our certain inalienable rights. The war between our world, flawed though it is, and a world based on sharia law is a war we had better win.

In one important sense, 9/11 changed us less than it revealed us. America can be both deeply flawed and a special place, because human beings are both deeply flawed and nonetheless special in God’s eyes. Jesus Christ is quoted in Luke 12:48 as saying that “to whomsoever much is given, of him shall be much required.” As much as Americans don’t want to be the policeman of the world, or the nation most responsible for protecting freedom worldwide, there it is.

Long live rock … and the classics

I may prefer the 1980s in entertainment, but I have pointed out here before that every generation of music has included badly done popular music, or music that never should have been recorded.

In the same vein, to be blunt, every generation has produced ideas that are stupefying in their stupidity, mind-numbingly moronic.

And so today let us consider Nebal Maysaud:

My fellow musicians of color: it is time to accept that we are in an abusive relationship with classical music.

In my previous articles, I laid out my experiences and reasoning for coming to this conclusion. I started with “Am I Not a Minority?” to explain the everyday racism people of color experience and how it manifests on an institutional level. If you haven’t read it already, I encourage you to explore how institutions uphold their power by choosing which minorities to give access to.

The few scraps given to minorities are overwhelmingly white, occupied by white cisgender women or LGBT+ individuals. The few PoC who are given access to institutional space are most often light-skinned and non-Black, while also exoticised and tokenised.

And that led me to my second article, “Escaping the Mold of Oriental Fantasy,” a personal history of isolation and colonization, of how Western classical music participates in the act of destroying culture and replaces it with its own white supremacist narrative.

Finally, I shared my attempts at reviving my culture and my tradition, along with the barriers I faced on this journey. My third article, “I’m Learning Middle Eastern Music the Wrong Way,” chronicles the difficulties (and the near impossibility) of engaging with my own cultural musical practices in a proper, authentic way.

From three angles I shared my attempts at being an authentic composer. These articles bring to light the many ways in which the dreams of low-income people of color are obstructed in the Western classical tradition.

It’s not uncommon to love your abuser. I know the experience, and can understand how hard it is to leave. Despite all that classical music has done to me, I still can’t help but marvel at the religious splendor of Bach’s works for organ. Nor can I help but weep at Tchaikovsky’s raw expressive power.

I will forever love my favorite composers. It is possible to be critical about the way classical music is treated and to adore the individual works which inspire you at the same time. I am not making a judgment call on specific works in the canon, but instead on their function in modern classical music institutions.


And there is still the question of what to do about the skills these composers taught us.

I would like to return to the analogy of the abusive relationship.

Many of us have learned a lot from our abusers. Some abusers are even our parents. Their abuse can follow you wherever you go, and escaping them entirely may be impossible. Whether we like it or not, we are forever changed by our abuse.

This abuse can appear as a scar. We will need each other to heal from the trauma. But we also need to survive and nurture the spirit which requires us to create.

While most composers of color are responding to a calling, that calling is to create artwork in our own voices, not to be beholden to the social construct of Western classical music.

We can do that using the tools we learned as classical composers without contributing to our own abuse. As I shared in my previous article, we can get to a better understanding of our own cultural traditions little by little if we just start exploring.

In order to leave our abusive relationship, we need a community.

Western classical music depends on people of color to uphold its facade as a modern, progressive institution so that it can remain powerful. By controlling the ways in which composers are financed, it can feel like our only opportunities for financial success as composers are by playing the game of these institutions.

It’s time for us to recognize that engaging with these institutions, that contributing to the belief that our participation in composer diversity initiatives is doing anything to reshape the institution of classical music, and that classical music is an agent of cultural change instead of a placeholder to prevent composers of color from forming our own cultures, is ultimately furthering colonization and prevents us from creating artwork capable of real, genuine expression.


Writing for an audience of rich white people is no longer a priority of mine. Instead, I want to create music for my community. Instead of contributing to white culture and helping them erase my own narrative, I want to use my ability to create art to keep my culture alive.

As long as people of color are making art, culture stays alive.

This mission is entirely against the nature of white supremacy, which seeks to replace non-white cultures with their own fantasies. Therefore, I will not find support in this endeavor.

Click on the link if you want to read the rest of that garbage, to which there is this perfect response in the comments …

As a black man, I believe it is troubling to compare one’s love of classical music to an abusive relationship. Classical music gives me joy; the same is the case with jazz, Latin music, the music of Sinatra, Nat King Cole and Shirley Bassey, to mention a few. When it comes to music, no one should issue a prescription of what others (white or of color) should listen to. I’ve always been of the belief that if you don’t like it, then don’t listen to it. With all due respect, this perspective of classical music is simply arrogant. Where does it stop? What should we eliminate next? Western attire? Western books? Western food, technology, philosophy? Western medicine? Paintings, sculptures? One’s opinions should not become the norm for an entire people, whether they are white or of color.

… as well as:

Oh, my God. This is so ridiculous. Everything is eventually going to be white supremacy, isn’t it? Isn’t this a great way to make ppl want more POC to enter classical music? It’s blatant now. Identity politics is a part of a larger agenda to destroy western society. It has attacked every single cultural institution, almost all of which have happily opened their arms to non-white ppl and have even prioritized their success in the field. There are groups helping POC to gain professional orchestral jobs. How’s that white supremacy?

The west is absolutely the least racist and most tolerant society on the planet and, most likely, in all of human history. It’s opened itself to outsiders of all different backgrounds, so much so that in some nations the very demographics are shifting to a white minority. How’s that racist? The west was the one that abolished slavery worldwide and enforced it, many white ppl losing their lives for it. Hundreds of thousands of whites died in the American civil war to end slavery. How is all that racist? How long can you hound someone for a mistake that they didn’t even make, that their predecessors made? Westerners aren’t allowed to celebrate the good of their ancestors, so why tf should they be condemned for the crimes of their ancestors? That’s illogical and a double standard. And it shows the true intentions behind identity politics.

There’s not much need for identity politics in the west anymore, everyone is equal under law, and that’s why IP has become absolutely corrosive to society. Look how divided we are now. That wasn’t the case until identity politics became a dominating force. It is nothing but authoritarian and totalitarian. It is never satiated. And it’s a losing ideology, and if you can’t see it, then you are blinded. The track it will go is pitting everyone against everyone else and everything against everything else. It is toxic. Once whites are fully shoved off into a corner, where they will certainly fall back on uniting finally along racial lines, finally your beloved white racism, identity politics will then pit the next two groups against each other on claims of who had it worse, and then again and again until everyone is 100% divided. It will eat itself and destroy western society.

Why not go and tackle REAL RACISM where it really is elsewhere in the world, because it certainly is rampant in the world; slavery still exists in the world! But it’s not really about opposing racism, is it? It’s about opposing western society (which many ppl of all different backgrounds are and can become a part of, it isn’t exclusionary racially). That’s why my LGBT community say NOTHING about the twelve countries that still execute LGBT ppl. Because it’s not white countries doing it, they’re all Islamic countries. Yet they will endlessly demean and attack western society despite western society being the only place on the planet where ppl like me can marry whom they want and become just as successful as anyone else. Instead, the LGBT community even covers up and excuses Islamic countries killing their own gays and lesbians and trans ppl.

It’s become too blatant, and that’s why trump is in office. Ppl were for the supposed betterment of the lives of POC and LGBT in our nations. Ppl were very on board, but they’ve seen that it isn’t really about that and it’ll never end, they see that it is only becoming more and more authoritarian and totalitarian. I mean, any criticism is called racism and banned. That’s fascistic. That’s very anti-individual and freedom of thought. And that’s why ppl are turning against it and the left in droves. I’m an alienated liberal. I’m gay, and I have to oppose my own community often because of their radical, dangerous ideology.

No doubt my comment will be removed for some racist violation, even though I’m mixed and my argument supports racial harmony and is against racialism, which is no different than the Klan or white supremacists. It’s just the other side of the coin. But go ahead and practice the authoritarianism that is inherently a part of modern identity politics. Go ahead, destroy peace, harmony, and beauty some more.

Then there is this cheeriness from Damon Linker:

Rock music isn’t dead, but it’s barely hanging on.

This is true in at least two senses.

Though popular music sales in general have plummeted since their peak around the turn of the millennium, certain genres continue to generate commercial excitement: pop, rap, hip-hop, country. But rock — amplified and often distorted electric guitars, bass, drums, melodic if frequently abrasive lead vocals, with songs usually penned exclusively by the members of the band — barely registers on the charts. There are still important rock musicians making music in a range of styles — Canada’s Big Wreck excels at sophisticated progressive hard rock, for example, while the more subdued American band Dawes artfully expands on the soulful songwriting that thrived in California during the 1970s. But these groups often toil in relative obscurity, selling a few thousand records at a time, performing to modest-sized crowds in clubs and theaters.

But there’s another sense in which rock is very nearly dead: Just about every rock legend you can think of is going to die within the next decade or so.

Yes, we’ve lost some already. On top of the icons who died horribly young decades ago — Brian Jones, Jimi Hendrix, Janis Joplin, Jim Morrison, Elvis Presley, John Lennon — there’s the litany of legends felled by illness, drugs, and just plain old age in more recent years: George Harrison, Ray Charles, Michael Jackson, Lou Reed, David Bowie, Glenn Frey, Prince, Leonard Cohen, Tom Petty.

Those losses have been painful. But it’s nothing compared with the tidal wave of obituaries to come. The grief and nostalgia will wash over us all. Yes, the Boomers left alive will take it hardest — these were their heroes and generational compatriots. But rock remained the biggest game in town through the 1990s, which implicates GenXers like myself, no less than plenty of millennials.

All of which means there’s going to be an awful lot of mourning going on.

Behold the killing fields that lie before us: Bob Dylan (78 years old); Paul McCartney (77); Paul Simon (77) and Art Garfunkel (77); Carole King (77); Brian Wilson (77); Mick Jagger (76) and Keith Richards (75); Joni Mitchell (75); Jimmy Page (75) and Robert Plant (71); Ray Davies (75); Roger Daltrey (75) and Pete Townshend (74); Roger Waters (75) and David Gilmour (73); Rod Stewart (74); Eric Clapton (74); Debbie Harry (74); Neil Young (73); Van Morrison (73); Bryan Ferry (73); Elton John (72); Don Henley (72); James Taylor (71); Jackson Browne (70); Billy Joel (70); and Bruce Springsteen (69, but turning 70 next month).

A few of these legends might manage to live into their 90s, despite all the … wear and tear to which they’ve subjected their bodies over the decades. But most of them will not.

This will force us not only to endure their passing, but to confront our own mortality as well.

From the beginning, rock music has been an expression of defiance, an assertion of youthful vitality and excess and libido against the ravages of time and maturity. This impulse sometimes (frequently?) veered into foolishness. Think of the early rock anthem in which the singer proclaimed, “I hope I die before I get old.” As a gesture, this was a quintessential statement of rock bravado, but I doubt very much its author (The Who’s Pete Townshend) regrets having survived into old age.

It’s one thing for a young musician to insist it’s better to burn out than to fade away. But does this defiance commit the artist to a life of self-destruction, his authenticity tied to his active courting of annihilation? Only a delusional teenager convinced of his own invincibility, or a nihilist, could embrace such an ideal. For most rock stars, the bravado was an act, or it became one as the months stretched into years and then decades. The defiance tended to become sublimated into art, with the struggle against limits and constraints — the longing to break on through to the other side — merging with creative ambition to produce something of lasting worth. The rock star became another in our civilization’s long line of geniuses raging against the dying of the light.

Rock music was always a popular art made and consumed by ordinary, imperfect people. The artists themselves were often self-taught, absorbing influences from anywhere and everywhere, blending styles in new ways, pushing against their limitations as musicians and singers, taking up and assimilating technological innovations as quickly as they appeared. Many aspired to art — in composition, record production, and performance — but to reach it they had to ascend up and out of the muck from which they started.

Before rock emerged from rhythm and blues in the late 1950s, and again since it began its long withdrawing roar in the late 1990s, the norm for popular music has been songwriting and record production conducted on the model of an assembly line. This is usually called the “Brill Building” approach to making music, named after the building in midtown Manhattan where leading music industry offices and studios were located in the pre-rock era. Professional songwriters toiled away in small cubicles, crafting future hits for singers who made records closely overseen by a team of producers and corporate drones. Today, something remarkably similar happens in pop and hip-hop, with song files zipping around the globe to a small number of highly successful songwriters and producers who add hooks and production flourishes in order to generate a team-built product that can only be described as pristine, if soulless, perfection.

This is music created by committee and consensus, actively seeking the largest possible audience as an end in itself. Rock (especially as practiced by the most creatively ambitious bands of the mid-1960s: The Beatles, The Rolling Stones, The Kinks, and the Beach Boys) shattered this way of doing things, and for a few decades, a new model of the rock auteur prevailed. As critic Steven Hyden recounts in his delightful book Twilight of the Gods: A Journey to the End of Classic Rock, rock bands and individual rock stars were given an enormous amount of creative freedom, and the best of them used every bit of it. They wrote their own music and lyrics, crafted their own arrangements, experimented with wildly ambitious production techniques, and oversaw the design of their album covers, the launching of marketing campaigns, and the conjuring of increasingly theatrical and decadent concert tours.

This doesn’t mean there was no corporate oversight or outside influence on rock musicians. Record companies and professional producers and engineers were usually at the helm, making sure to protect their reputations and investments. Yet to an astonishing degree, the artists got their way. Songs and albums were treated by all — the musicians themselves, but also the record companies, critics, and of course the fans — as Statements. For a time, the capitalist juggernaut made possible and sustained the creation of popular art that sometimes achieved a new form of human excellence. That it didn’t last shouldn’t keep us from appreciating how remarkable it was while it did.

As with all monumental acts of creativity, the artists were driven by an aspiration to transcend their own finitude, to create something of lasting value, something enduring that would live beyond those who created it. That striving for immortality expressed itself in so many ways — in the deafening volume and garish sensory overload of rock concerts, in the death-defying excess of the parties and the drugs, in the adulation of groupies eager to bed the demigods who adorned their bedroom walls, in the unabashed literary aspirations of the singer-songwriters, in mind-blowing experiments with song forms marked by seemingly inhuman rhythmic and harmonic complexity, in the orchestral sweep, ambition, and (yes) frequent pretension of concept albums and rock operas. All of it was a testament to the all-too-human longing to outlast the present — to live on past our finite days. To grasp and never let go of immortality.

It was all a lie, but it was a beautiful one. The rock stars’ days are numbered. They are going to die, as will we all. No one gets out alive. When we mourn the passing of the legends and the tragic greatness of what they’ve left behind for us to enjoy in the time we have left, we will also be mourning for ourselves.

First, as long as people are listening to music — rock, classical or something else — that music isn’t going to die. The classical and classic rock artists prove that.

Classical music didn’t die out (Maysard’s wishes notwithstanding) when Beethoven, Mozart and Bach died, any more than country music died out when Hank Williams and Johnny Cash died.

These sound like rock bands to me.

They may not be to your taste. For all I know, these bands are as corporatized and homogenized as the music previously described here. Of course, music of every kind in every era has been criticized by someone who didn’t like it, for reasons both valid and spurious.

Woodstock? Sorry. Can’t make it.

This weekend is the 50th anniversary of the Woodstock music festival.

About which Steven Hayward writes:

Forget asking about citizenship status on the next Census. I’ve always wanted to have the Census ask: “Were you at Woodstock in 1969?” The event was such an icon for the appalling baby boomer generation (to which I sadly belong) that I estimate that you’d get 5 million Yes responses to the question. Maybe that many people believe they were there by astral projection during an acid trip or something.

There was an attempt to organize a 50-year anniversary festival at Woodstock for this weekend, but the effort fizzled. One practical problem, I imagine, is that no vendor could be found to produce enough LSD suppositories.

It is a good time to go back and take in the nonsense written about Woodstock by the mainstream media at the time, and reflect how nothing has changed when it comes to media idiocy and superficiality.

Woodstock set off a fresh round of self-congratulation about the idealism of the young generation.  The absence of destructive chaos was taken as evidence of the moral superiority of the counterculture’s rejection of middle class materialism.  It was, in Abbie Hoffman’s words, “the birth of the Woodstock Nation and the death of the American dinosaur.”  “This festival will show,” Woodstock organizer Michael Lang said, “that what this generation is about is valid …  This is not just about music, but a conglomeration of everything involved in the new culture.” The New York Times thought Woodstock was “essentially a phenomenon of innocence,” while Time magazine chirped that Woodstock

may well rank as one of the significant political and sociological events of the age. . . [T]he revolution it preaches, implicitly or explicitly, is essentially moral; it is the proclamation of a new set of values … With a surprising ease and a cool sense of authority, the children of plenty have voiced an intention to live by a different ethical standard than their parents accepted.  The pleasure principle has been elevated over the Puritan ethic of work.  To do one’s own thing is a greater duty than to be a useful citizen.  Personal freedom in the midst of squalor is more liberating than social conformity with the trappings of wealth.  Now that youth takes abundance for granted, it can afford to reject materialism.

“To do one’s own thing is a greater duty than to be a useful citizen”?? Yup—that pretty much sums up the ethos of modern liberalism. Or as Harry Jaffa put it more bluntly, the core principle of modern liberalism is “every man his own tyrant.”

The New Left was not thrilled with the spin surrounding Woodstock because it suggested that the revolution of youth was far less political than cultural. After all, the New Left had struggled to get a mere 10,000 to come to Chicago the summer before. “Our frivolity maddened the Left,” one concertgoer remarked. “We did not even collect pennies for SANE [Society for the Abolition of Nuclear Energy].” Abbie Hoffman had been booed when he attempted to offer some political remarks: The Who’s Pete Townshend whacked Hoffman with his guitar to get him off the stage. But the ever-protean ideological Left managed to adapt. Leftist writer Andrew Kopkind wrote that Woodstock represented

a new culture of opposition. It grows out of the disintegration of old forms, the vinyl and aerosol institutions that carry all the inane and destructive values of privatism, competition, commercialism, profitability and elitism. . .  For people who had never glimpsed the intense communitarian closeness of a militant struggle—People’s Park or Paris in the month of May or Cuba—Woodstock must always be their model of how good we will all feel after the revolution … [P]olitical radicals have to see the cultural revolution as a sea in which they can swim.

A surprisingly sympathetic account of Woodstock in National Review noted that Woodstock was “a moment of glorious innocence, and such moments happen only by accident, and then not often. . .  [T]hese accidental bursts of aimless solidarity do not last forever.” In fact the purported innocence and new moral world of Woodstock would prove as evanescent as the summer showers that cooled off the concertgoers at Max Yasgur’s farm. A few months later the attempted sequel to Woodstock at California’s Altamont Pass ended violently when the Hells Angels hired as stage security proved they were not yet ready to be part of the Age of Aquarius. The Hells Angels beat a concertgoer to death just a few feet in front of Mick Jagger, who was in the middle of singing “Sympathy for the Devil.”  In contrast to the encomiums to Woodstock, there was little media commentary suggesting that Altamont showed a dark side of the counterculture.

Good riddance to the whole scene I say.

P.S. I do recall a line from Jay Leno back when there was a 30th anniversary concert at Woodstock: “They had to fly in five helicopters of food. And that was just for David Crosby.” Heh.

A more current statement about Crosby would be that they would have to make accommodations for his second liver.

One of the comments about Hayward’s piece:

Two friends were discussing the summer of ’69. One mentioned he attended and enjoyed Woodstock. The other said he didn’t go as he was kinda busy at the time. “Oh, doing what?”…. “Vietnam.” I swear there was then an audible Pacman death sound effect.

And …

I almost made it to Woodstock. I got within around 50 miles from the farm but detoured and entered West Point on July 3. It was very hot and humid in Beast Barracks but they let us Plebes eat a real meal the day of the moon landing. Yeah, it was a great summer.

Be that as it may (I didn’t go; I was 4), I do not write today to denigrate Woodstock. Rather, there is one aspect I find slightly outrageous and considerably more humorous — the bands that did not go to Woodstock, and why they didn’t.

The list begins with Chicago, which certainly would have fit …

… but didn’t get the chance because promoter Bill Graham booked the group into one of his clubs. That made them unavailable, and Graham substituted the group he was promoting, Santana. As bass player/singer Peter Cetera later put it, “We were sort of peeved at him for pulling that one.”

Graham did make it up to the group later.

The rest of the list starts with Ultimate Classic Rock:

Jethro Tull

Reason: Fear of Naked Ladies

“I asked our manager Terry Ellis, ‘Well, who else is going to be there?’ And he listed a large number of groups who were reputedly going to play, and that it was going to be a hippie festival,” Jethro Tull’s Ian Anderson once told SongFacts, “and I said, ‘Will there be lots of naked ladies? And will there be taking drugs and drinking lots of beer, and fooling around in the mud?’ Because rain was forecast. And he said, ‘Oh, yeah.’ So I said, ‘Right. I don’t want to go.’ Because I don’t like hippies, and I’m usually rather put off by naked ladies unless the time is right.”

Jeff Beck Group

Reason: They Broke Up

Jeff Beck and an all-star band that featured Rod Stewart, Nicky Hopkins, Aynsley Dunbar and Ronnie Wood were actually scheduled to play — only to split up just before Woodstock. Seems Beck simply disappeared on a plane back home, according to Rod Stewart in his autobiography ‘Rod,’ because he was worried about a possible marital infidelity. Not that Stewart was that concerned about missing out. “Ah, well,” he writes. “Seen one outdoor festival you’ve seen them all.”

Led Zeppelin

Reason: They Had A New Jersey Show

Led Zeppelin was invited, of course. But manager Peter Grant apparently decided that headlining their own concert was preferable. Instead, the band headed off to the Asbury Park Convention Hall in New Jersey, south of Woodstock, for two of the festival’s four days. Grant, in ‘Led Zeppelin: The Concert File,’ said, “I said no because at Woodstock we’d have just been another band on the bill.”

Iron Butterfly

Reason: They Wanted a Helicopter

Riding the popularity of ‘In-A-Gadda-Da-Vida,’ Iron Butterfly decidedly overreached with its pre-appearance demands — supposedly asking for such niceties as a helicopter ride in from a New York airport, an immediate start time on stage upon arrival, complete payment upon completion of set and an immediate return helicopter ride for airport departure. The story is they were told that promoters were considering it, but ultimately it seems nobody ever called Iron Butterfly back. “Apparently the agent had a real attitude,” festival co-creator Michael Lang has said, “and we were up to our eyeballs in problems.”

The Beatles

Reason: Yoko?

Bob Dylan

Reason: Sick Kid

Another huge star, another raft of innuendo. Bob Dylan reportedly said no because one of his kids fell ill. There was also a rumor that he had become annoyed with the gathering hippies around his home, which stood near the town of Woodstock. Whatever the reason, it didn’t keep him from playing another huge festival — and just two weeks later — at the Isle of Wight. Dylan reportedly left for England aboard the Queen Elizabeth 2 on August 15, 1969, the day the original Woodstock Festival started. He later moved away from upstate New York, complaining that his house was being besieged by “druggies.”

The Rolling Stones

Reason: Filming a Forgotten Movie

The Rolling Stones declined because Mick Jagger was in Australia that summer, filming a forgotten movie called ‘Ned Kelly.’ You don’t remember ‘Ned Kelly’? It’s the poorly received 1970 Tony Richardson-directed biopic of a 19th-century Australian bushranger. Also, Keith Richards’ girlfriend Anita Pallenberg had just given birth to son Marlon that week in London.

Joni Mitchell

Reason: Silly Scheduling Issue

Joni Mitchell reportedly wanted to play Woodstock, but was dissuaded from making the trip by manager David Geffen, who wanted her fresh for an appearance on ‘The Dick Cavett Show.’ In a twist, she would end up performing on that TV program with other participants in the Woodstock festival — David Crosby and Stephen Stills of Crosby, Stills and Nash, and Jefferson Airplane. Worse still, she’d be forced to write ‘Woodstock,’ one of her better-known songs, based on boyfriend Graham Nash’s account of the event.

The Doors

Reason: Thought Monterey Was Better

The Doors apparently gave Woodstock strong consideration, only to decline the invitation, and not because of a scheduling conflict. Robby Krieger would later say, “We never played at Woodstock because we were stupid and turned it down. We thought it would be a second-class repeat of the Monterey Pop Festival.” John Densmore had other ideas: he was actually at the festival, and appears side stage during Joe Cocker’s set in the concert film.

Roy Rogers

Reason: Hated the Idea

The revelation that old-timey TV cowboy Roy Rogers had actually been invited, as well, remains something of a shock. Apparently, as Michael Lang relayed in an interview for the expanded Woodstock DVD, the idea was for Rogers to close out the festival with a rendition of ‘Happy Trails.’ It didn’t happen, of course, but only because “his manager didn’t think it was such a great idea.” Hard to argue with that, isn’t it?

Rogers may sound crazy, but remember that Sha Na Na was there.

11 Points contributes a few more, including alternate explanations:

Frank Zappa and the Mothers of Invention – too much mud

Zappa turned down the gig last minute because he heard rain was coming and didn’t want to play around all that mud. (Bad for the festival, good for one of his future children who no doubt would’ve gotten a name like Runny Soil Zappa or Muddlicious Orthopedic June Caralarm Zappa.) …

The Doors – fear of getting shot by someone in the crowd

Apparently, by 1969, Jim Morrison had such a raging case of agoraphobia that he refused to play outdoors because of a genuine belief that it would give snipers too good of a shot. Really. And, at that point, he still wasn’t The Saint so he couldn’t just roam around in disguise.

The Beatles – Yoko wasn’t invited too

One of the biggest questions in music history is “Why weren’t the Beatles at Woodstock?” It’s up there with “Who was so vain that they probably thought this song was about them?”, “Did Rob Base just say he ‘can’t stand sex’?” and “Did New Kids On The Block really think people wouldn’t notice that Hangin’ Tough and You Got It (The Right Stuff) are the same basic song?” And there are three theories why the Beatles didn’t end up as a part of the festival…

(1) John couldn’t get a visa to come to the U.S. because of his drug arrests. (And Nixon didn’t like him.) (2) Other than their B-Sharps-inspiring rooftop concert in January of 1969, they hadn’t played a show together since 1966. (3) John agreed to play but only if Yoko’s Plastic Ono Band also got an invite… and the Woodstock organizers said hell no.

I’d say #1 is the most boring theory, #3 is the most entertaining theory… and #2 is probably the most accurate theory. …

Eric Clapton – in England with Steve Winwood working really hard on getting their new band off the ground

Woodstock caught Clapton at an awkward time. The Yardbirds were long dead, Cream was recently dead, and Clapton decided to pour all of his effort into launching his new supergroup, Blind Faith. So, rather than play Woodstock, Clapton and his Blind Faith bandmate Steve Winwood decided to have a retreat to really work on their music. It didn’t work — Blind Faith would barely last another few months.

I will draw a parallel to this in a few years when LeBron James and Dwyane Wade skip the 2012 Olympics to practice working together over the summer, after they realize that the results produced by their superteam are less than the sum of its parts.

Woodstock Story adds:

Procol Harum were invited but declined because the festival fell at the end of a long tour and because of the impending birth of band member Robin Trower’s child.

The Moody Blues were included on the original Wallkill poster as performers, but decided to back out after being booked in Paris the same weekend. …

Tommy James and the Shondells declined the invitation. Tommy James would later say, “We could have just kicked ourselves. We were in Hawaii, and my secretary called and said, ‘Yeah, listen, there’s this pig farmer in upstate New York that wants you to play in his field.’ That’s how it was put to me. So we passed.” (Liner notes to “Tommy James and the Shondells: Anthology”).

Arthur Lee and Love declined the invitation, though Mojo Magazine later attributed their absence from the Woodstock festival to inner turmoil within the band.

Free was asked to perform and declined.

Spirit declined and instead launched a promotional tour.

Mind Garage declined because they thought the festival would be no huge deal and they had a higher paying gig elsewhere. …

Joni Mitchell’s agent recommended that she appear on the Dick Cavett show rather than at the Woodstock festival. It is also believed that Mitchell was discouraged from performing at another festival after a particularly nasty crowd at the Atlantic City Pop Festival made her cry. …

Lighthouse, the Canadian band, was booked to play, but backed out for fear that Woodstock would be a bad scene.

Rock Pasta adds another:

The Byrds

Like the majority of the bands who passed on Woodstock, The Byrds turned down their invitation to play, assuming that Woodstock would be no different from any of the other music festivals that summer. And like most bands who turned it down, they regretted their decision. Financial reasons were also cited for declining the invitation. Bassist John York recalls,

“We were flying to a gig and Roger [McGuinn] came up to us and said that a guy was putting on a festival in upstate New York. But at that point they weren’t paying all of the bands. He asked us if we wanted to do it and we said, ‘No’. We had no idea what it was going to be. We were burned out and tired of the festival scene. […] So all of us said, ‘No, we want a rest’ and missed the best festival of all.”


100 years ago today

Today in 1919, the Green Bay Packers were founded.

Tom Oates:

When you grow up in Wisconsin, it’s not if you become a Green Bay Packers fan, it’s when.

For me, the when came the day after Christmas in 1960.

That was when the Packers, two seasons removed from a 1-10-1 record that was the low point in the franchise’s 100-year history, lost to the Philadelphia Eagles in the NFL championship game at Franklin Field in Philadelphia. It was the only playoff game a Vince Lombardi-coached team ever lost and it was the very first football game I remember watching on television.

I was only 8 at the time and even though the Packers lost to the Eagles after Chuck Bednarik, the NFL’s last two-way regular, tackled Jim Taylor inside the 10-yard line on the final play, I still have vivid memories of the game.

Norm Van Brocklin hitting Tommy McDonald on a corner route to give the Eagles a 7-6 lead. Bednarik and Tom Brookshier hitting Paul Hornung and knocking the Packers star out of the game with a pinched nerve in his neck. Max McGee defying Lombardi’s orders and running 35 yards from punt formation, setting up his own go-ahead touchdown catch in the fourth quarter. Ted Dean taking the ensuing kickoff back 58 yards, putting the Eagles in position for the game-winning touchdown. And finally, Bednarik dropping Taylor at the 8, preserving the Eagles’ 17-13 victory by sitting on the Packers fullback until time expired.

That’s all it took — one game — and I was hooked for life. An unbreakable bond with the Packers was formed that day.

Of course, my story is similar to millions of others who grew up in Wisconsin and fell in love with the most unique franchise in professional sports, a state treasure that has survived — and thrived — in the NFL’s smallest city. Only my story has a slight twist.

You see, I lived in the Chicago area until 1959, when my dad packed up the family and moved us to Appleton, some 30 miles from Lambeau Field (then known as City Stadium). Talk about serendipitous: We arrived in Packerland two months before Lombardi coached his first game for the franchise he would make famous by winning an unprecedented five NFL titles in seven years.

By the end of Lombardi’s second season, the Packers were in the NFL title game and I was captivated by their players, their coach, their winning ways. So, it seems, was everyone else in Wisconsin. And, thanks to the wisdom of NFL commissioner Pete Rozelle, football fans across the nation also adopted the small-town team with the rich history as their own.

It was Rozelle who married the NFL and network television in 1961, leading to six decades of wedded bliss in which the league became the colossus of American sports. With legends such as Lombardi, Hornung, Taylor, Bart Starr, Ray Nitschke and Willie Davis helping the Packers win five NFL championships (and the first two Super Bowls) from 1961 through 1967, the Packers were the first dynasty of the television era and Green Bay became known, justifiably, as Titletown.

Almost 60 years later, with the tradition carried on by superstars such as Brett Favre, Reggie White and Aaron Rodgers, the Packers remain one of the NFL’s most-storied franchises and Lambeau Field one of its most-cherished shrines.

Indeed, the Packers are the universal language of Wisconsin. No matter what divides us socially, politically or geographically, residents of the state always have the Packers in common. From one end of Wisconsin to the other, the Packers are a sure-fire conversation starter, a source of great angst at times, great joy at other times and great pride forever.

Other major sports entities in the state have had their days in the sun but the Packers are a clear-cut No. 1 in Wisconsin. The reason is simple. The Brewers, Bucks and Badgers have all had stretches where they garner national attention and sell out their stadiums and arenas, but the Packers are the only team in the state that commands our attention whether they go 12-4 or 4-12.

Proof of that lies in two of the most magical words in Wisconsin: season tickets.

Starting with Lombardi’s second season in 1960, the Packers have sold out every game they’ve played at Lambeau Field despite its capacity rising from 32,154 when it opened in 1957 to its present-day 81,441. Even during the dismal 24-season stretch from 1968 through 1991 when the Packers were a dysfunctional organization and their on-field fortunes predictably sagged, the fans kept showing up — at Lambeau and, until 1994, at Milwaukee County Stadium. Packers fans kept believing right on up to the time Favre, White, Mike Holmgren, Ron Wolf and Bob Harlan joined forces and showed the franchise how to win again.

Perhaps the most amazing sign of the fans’ devotion is the Packers’ season-ticket waiting list, which has kept growing even though the stadium and the ticket prices have, too. A year ago, there were more than 135,000 names on the list. With the stadium’s capacity essentially maxed out and season tickets being passed from generation to generation, someone at the bottom of that list might get tickets in, oh, 100 years or so.

Another sign of the unmatched loyalty of Packers fans is the stock sales that have bailed the franchise out of various financial difficulties. There have been five sales of Packers stock over the years, the first in 1923, the most recent in 2011. Though Packers stock carries no monetary value and only extremely limited voting power, there were 361,169 proud stockholders as of 2018.

Therein lies the reason for the unwavering devotion of Packers fans all over Wisconsin. While billionaire owners in all professional sports treat their franchises like toys, the Packers are community-owned. Everyone has a stake. And there is an intimacy with the franchise that could never happen in major metropolitan areas. With only 105,000 people in Green Bay, fans often run into their heroes at the grocery store or the gas pump.

Like so many in Wisconsin, I learned this at a young age. The first expansion — an additional 6,519 seats — at then-City Stadium took place in 1961. My father drove to Green Bay and secured eight season tickets from the new supply, another example of good timing because the waiting list was started that same year. Thus began the Sunday football memories of my youth.

Watching 13 future Hall of Famers play for Lombardi. Getting autographs outside the locker rooms when both were at the south end of the stadium (a new home locker room on the north end opened in 1963). Tailgating with a large contingent of Appleton people in Don Terrien’s parking lot across Valley View Road from the stadium (the Packers bought the property in 2007 and it’s now part of Lot 9). The 13-10 playoff victory over the Baltimore Colts in 1965 when Don Chandler tied the game with a disputed late field goal (sorry, I didn’t have a good view from Section 28, row 47) and won it with another field goal in overtime. The NFL title game a week later when the Packers beat the Cleveland Browns (Jim Brown’s last NFL game). The Ice Bowl victory over the Dallas Cowboys for the 1967 NFL title, the coldest and most-famous game in league history (OK, so I left at halftime).

Those remain some of the fondest memories of my youth. If you grew up in Wisconsin, you undoubtedly have your own. No matter how different our Packers experiences are, however, they all end up in the same place, a life-long love affair with the greatest franchise in sports.

It’s funny for me to realize that every Packers Super Bowl win has been during my lifetime. I have told the story here of picking up a book called, I think, Greatest Sports Legends in my elementary school library and reading with amazement the description of the Packers’ winning the first two Super Bowls (when I was 1½ and 2½ years old, respectively), given my father’s autumnal watching of and swearing at the perpetually poorly performing Packers. (Except for 1972, when the Pack won the NFC Central, only to get literally stuffed by Washington in the playoffs.)

It took 20 years after that, a stretch that included the 1982 playoff team and a few .500 seasons but mostly seasons ranging from mediocre to abysmal, for the Packers to start getting it right. (The nadir of Wisconsin football was 1988, when the pACKers were 4–12, but the BADgers were 1–10.) The genesis was 1987, when Bob Harlan was on track to become the Packers’ president and was genuinely bothered by the perception that the Packers didn’t care about winning because they sold out games regardless of record.

Harlan focused on the business end of the franchise and broke the previous mold of combined general manager/coaches by hiring Tom Braatz as GM with complete football authority. Braatz produced only one winning team, so Harlan fired him in 1991 and hired Ron Wolf. Wolf hired Mike Holmgren to coach and traded for quarterback Brett Favre, and you know how that turned out.

And then Ted Thompson replaced Mike Sherman, and Thompson hired Mike McCarthy and drafted Aaron Rodgers, and you know how that turned out.

And now the Packers are in the Brian Gutekunst/Matt LaFleur era, and we will all see how that turns out.


Another -30-

The Wisconsin State Journal reports on the death of one of its own:

Retired Wisconsin State Journal state editor and columnist Steve Hopkins, who died Friday at 90, is being remembered by friends and family as a lyrical writer, dogged reporter, thoughtful editor and avid lover of the outdoors.

“He was really a legendary part of the State Journal,” said Ron Seely, who was hired by Hopkins in 1978. “A lot of people will be sad to see that he passed and will remember the pleasure of reading his columns.”

Hopkins joined the State Journal in September 1957 and retired in February 1994. During his more than 35 years at the paper, he was a copy boy, reporter, feature writer, state editor and columnist.

Seely, who worked for Hopkins for more than 15 years, said Hopkins’ love for the outdoors was probably second only to his “love for the written word.” Those two loves were combined effortlessly in his weekly outdoor column in which he would travel to different places throughout Wisconsin, describe what he saw and include a little life lesson for readers.

The column was widely popular because of his vivid descriptions, witty humor and lyrical phrasing, said Susan Lampert Smith, who also had Hopkins as an editor when she was a reporter at the State Journal.

“He took readers on walks with him,” Lampert Smith said.

In a 1993 column, Hopkins told readers that his heroes were not cowboys, but rather “the great walkers of our time.” He wrote that like Henry David Thoreau and John Muir, he walked “for pure pleasure, enjoying the freedom of movement and the relaxation of the mind it produced.”

“It was hot, humid and still. Mosquitoes and horse flies lurked in the shadows along the side of the road, hiding behind the Queen Anne’s lace, waiting to hop a ride,” Hopkins wrote in the column about a walk through the Arboretum in August 1993.

“There was not a breeze to stir the cattails along the marshy edge of Lake Wingra, nor was there as much as a ripple on the smooth surface of the lake. The sun burned like a fiery dagger through the openings in the trees overhead. The walker, lost in thought, is only vaguely aware of all of this.”

Although Hopkins loved to get lost in thought while meandering through the woods, he was also a dogged reporter, who loved breaking news and believed in the value of providing “straightforward, honest accounts” of the news as it happened, Seely said.

Lampert Smith called Hopkins an “old-school newspaper guy.” Seely noted that he insisted on being called “a newspaperman.”

“I think he was sort of in love with the idea of a hard-bitten newspaper reporter who would cover a fire, come in and bang out a story, then cover a homicide,” Seely said.

When Hopkins was Seely’s editor, Seely remembers him saying, “Just write it straight, Seely.”

George Hesselberg, who was a general assignment and police reporter when Hopkins was an editor, said Hopkins was always ready to chat about anything, and never gave anyone that “just don’t bother me” look.

“You could approach him about any possible subject in the world,” Hesselberg said.

Hopkins was down to earth, with a droll sense of humor and a quiet chuckle, Seely said.

And he brought his love of melodic writing to his editing. Hesselberg remembers how careful and observant Hopkins was when editing his prose.

Lampert Smith said Hopkins would sit down with her and explain why a sentence worked or didn’t work, and tweak the punctuation.

After retiring, Hopkins built a cabin in the hills near the Kickapoo River and published a couple of books of his columns, some of which won awards.

“At 90, he was still editing the newspaper from his recliner,” his children wrote in his obituary. “He’d be editing this if he could.”

Seely said he can still picture Hopkins wearing an old, beat-up fedora, a plaid shirt, a pair of chinos, old boots and a wool vest.

When he writes, Seely said, his words “bear the stamp” of Hopkins.

“I do still think about him when I write,” Seely said. “I think, ‘What would Steve think of this?’”

Hopkins was preceded in death by his wife, Frances Zopfi Hopkins; an infant daughter, Christine Mae Hopkins; his infant grandson, Alex Steven Hopkins Anderson; and his parents, Walter and Beulah Hopkins.

He is survived by three children, Peter Hopkins, Katy Anderson and Jayne Kubler, and six grandchildren.

Another lost star of my childhood


David Hedison, a film, television, and theater actor known for his role as Captain Lee Crane in the sci-fi adventure television series “Voyage to the Bottom of the Sea” and as the crazed scientist turned human insect in the first iteration of the film “The Fly,” died on July 18. He was 92, and the family said in a statement that he “died peacefully” with his daughters at his side.

“Even in our deep sadness, we are comforted by the memory of our wonderful father. He loved us all dearly and expressed that love every day. He was adored by so many, all of whom benefited from his warm and generous heart. Our dad brought joy and humor wherever he went and did so with great style,” said the family in a statement.

David Hedison, born Al Hedison, was from Providence, R.I., and studied at Brown University, where he grew fond of the theater, becoming a part of the university’s theater production group, the “Sock and Buskin Players.” He then moved to New York, studying with Sanford Meisner at the Neighborhood Playhouse as well as Lee Strasberg of the Actors Studio. In the 1950s, he appeared in “Much Ado About Nothing” and “A Month in the Country,” working with Uta Hagen and Michael Redgrave on productions by Clifford Odets and Christopher Fry, among others.

Shortly after “A Month in the Country,” Hedison first hit the big screen with his role in the 1957 film “The Enemy Below” and in the 1958 film “Son of Robin Hood.” He also played André Delambre in “The Fly” (1958), which became a cult phenomenon and sparked a 1986 remake starring Jeff Goldblum. Hedison then signed with Twentieth Century Fox in 1959 and changed his first name to David, his given middle name. In 1964, he hit his big television break as Captain Lee Crane in producer Irwin Allen’s “Voyage to the Bottom of the Sea,” which ran until 1968.

He also joined Roger Moore in the 1973 James Bond film “Live and Let Die” as well as Timothy Dalton in 1989 with “Licence to Kill,” becoming the first actor to play CIA agent Felix Leiter twice. In the 1980s and 1990s, he worked on shows such as “Another World,” “T.J. Hooker,” “Dynasty,” “The Love Boat,” “Who’s the Boss?” and “The Colbys.”

According to family members, Hedison joked during his final days that “instead of RIP he preferred SRO ‘Standing Room Only.’” They said that he was “tall and strikingly handsome,” and “a true actor through and through.”

Hedison’s wife, Bridget, a production associate on “Dynasty” and an assistant to the producer on “The Colbys,” died in 2016. He is survived by two daughters: Serena and Alexandra, an actress and director who is married to Jodie Foster.

Donations may be made to The Actors Fund.

Hedison was in one of my favorite World War II films …

… and my favorite James Bond movie …

… and one of my favorite weekend shows, “Voyage to the Bottom of the Sea” …

… along with other entertainment:

“Voyage” was based on the movie of the same name, created by Irwin Allen, the “master of disaster” for a variety of ’70s disaster movies. (Hedison was offered the captain role in the movie, but turned it down, though Allen got him for the series.) “Voyage” featured a submarine unlike any other in the world, created by a brilliant admiral who hired Hedison’s character away from the Navy to be its captain.

As is always the case except for anthologies, the strength of “Voyage” was its characters and their relationships. The first-season black-and-white episodes are mostly Cold War-related, and quite good. Then came color, and while Cold War episodes continued …

… monsters and aliens showed up as well, some of which were more believable than others.

The irony of “Killers of the Deep” is that it used stock footage from “The Enemy Below,” in which Hedison had appeared, and featured Michael Ansara, an actor from the original “Voyage” movie. It would have been hilarious if they had figured out a way to get Hedison’s “Enemy” character into the same scene as the Seaview’s captain.


50 years ago

George F. Will:

Thirty months after setting the goal of sending a mission 239,000 miles to the moon, and returning safely, President John Kennedy cited a story the Irish author Frank O’Connor told about his boyhood. Facing the challenge of a high wall, O’Connor and his playmates tossed their caps over it. Said Kennedy, “They had no choice but to follow them. This nation has tossed its cap over the wall of space.” Kennedy said this on Nov. 21, 1963, in San Antonio. The next day: Dallas.

To understand America’s euphoria about the moon landing 50 years ago, remember 51 years ago: 1968 was one of America’s worst years — the Tet Offensive in Vietnam, Martin Luther King Jr. and Robert Kennedy assassinated, urban riots. President Kennedy’s May 25, 1961, vow to reach the moon before 1970 came 43 days after Soviet cosmonaut Yuri Gagarin became the first person to enter outer space and orbit the Earth, and 38 days after the Bay of Pigs debacle. When Kennedy audaciously pointed to the moon, America had only sent a single astronaut on a 15-minute suborbital flight.

Kennedy’s goal was reckless, and exhilarating, leadership. Given existing knowledge and technologies, it was impossible. But Kennedy said the space program would “serve to organize and measure the best of our energies and skills.” It did. The thrilling story of collaborative science and individual daring is told well in HBO’s twelve-part “From the Earth to the Moon” and PBS’s three-part “Chasing the Moon,” and in the companion volume of that title by Robert Stone and Alan Andres, who write:

The American effort to get to the moon was the largest peacetime government initiative in the nation’s history. At its peak in the mid-1960s, nearly 2% of the American workforce was engaged in the effort to some degree. It employed more than 400,000 individuals, most of them working for 20,000 different private companies and 200 universities. 

The “space race” began as a Cold War competition, military and political. Even before Sputnik, the first orbiting satellite, jolted Americans’ complacency in 1957 (ten days after President Dwight Eisenhower sent paratroopers to Little Rock’s Central High School), national security was at stake in the race for rockets with ever-greater thrusts to deliver thermonuclear warheads with ever-greater accuracy.

By 1969, however, the Soviet Union was out of the race to the moon, a capitulation that anticipated the Soviets’ expiring gasp, two decades later, when confronted by the technological challenge of Ronald Reagan’s Strategic Defense Initiative. By mid-1967, a majority of Americans no longer thought a moon landing was worth the expense.

But it triggered a final flaring of post-war confidence and pride. “The Eagle has landed” came as defiant last words of affirmation, at the end of a decade that, Stone and Andres note, had begun with harbingers of a coming culture of dark irony and satire: Joseph Heller’s novel Catch-22 (1961) and Stanley Kubrick’s film Dr. Strangelove (1964). …

Stone and Andres say Apollo 11 was hurled upward by engines burning “15 tons of liquid oxygen and kerosene per second, producing energy equal to the combined power of 85 Hoover Dams.” People spoke jauntily of “the conquest of space.” Well.

The universe, 99.9 (and about 58 other nines) percent of which is already outside Earth’s atmosphere, is expanding (into we know not what) at 46 miles per second per megaparsec. (One megaparsec is approximately 3.26 million light years.) Astronomers are studying light that has taken perhaps twelve billion years to reach their instruments. This cooling cinder called Earth, spinning in the darkness at the back of beyond, is a minor speck of residue from the Big Bang, which lasted less than a billionth of a trillionth of a trillionth of a second 13.8 billion years ago. The estimated number of stars — they come and go — is 100 followed by 22 zeros. The visible universe (which is hardly all of it) contains more than 150 billion galaxies, each with billions of stars. But if there were only three bees in America, the air would be more crowded with bees than space is with stars. The distances, and the violently unheavenly conditions in “the heavens,” tell us that our devices will roam our immediate cosmic neighborhood, but in spite of Apollo 11’s still-dazzling achievement, we are not really going anywhere.