11/18/14

Thanksgiving, American Style... (whatever that means...)

“All earlier pluralist societies destroyed themselves because no one took care of the common good. They abounded in communities but could not sustain community, let alone create it.”
--Peter Drucker (Austrian born immigrant to the United States)

One of the most cherished traditions in American education is the presentation of a pleasant reconstruction of the “First Thanksgiving” – a celebration at Plymouth, Massachusetts, supposed to have taken place in 1621, prompted by a good harvest in the aftermath of the 1620-21 Winter, during which somewhere on the order of 50% of the immigrant population of Puritans had died of cold and malnutrition.  The good harvest is attributed (particularly in stories told to elementary school children) to the good relationship between the Puritans and the local aboriginal communities, the Wampanoag in particular, with local agricultural technology portrayed as superior in every way to the European settlers’ fumbling attempts to scratch out subsistence from the American wilderness.

There are a few things glaringly wrong with the way the story is typically presented, of course.  For one thing, colonizers in Virginia two years earlier, though unsuccessful in establishing a permanent community, had made a “Day of Thanksgiving” a part of their very charter.  The celebration of harvest festivals, in fact, is fairly universal, being a part of virtually every culture ever studied.  So what was being done at Plymouth was not really original.

Moreover, the particular celebrations in Massachusetts, though certainly inspired by thanks for survival, and aided strongly by the assistance of the locals, serve only to put in bas relief the cultural duplicity that has been part of the American character since the founding of our country.

The Puritan Pilgrims, you see, were not inventing a new holiday.   They were mimicking one – having abandoned Great Britain in search of a place in which they could live according to the strictures of their own rigid and exclusive religious views (sound familiar?  Sort of like Sharia Law, perhaps?), the Puritans did not immediately set out for “The New World” – no, first they made a stopover in Leiden, in what is today the Netherlands. 

And while in Leiden, they were on hand for more than a decade of what is a profoundly Dutch holiday.  Known as 3 Oktober Feest or simply 3 Oktober, this Leiden festival celebrates the anniversary of the 1573-1574 Siege of Leiden during the Eighty Years’ War, when the Spanish attempted to capture the city.  Conditions were so bad at the height of the siege that thousands of citizens starved; when William of Orange entered the city on October 3rd, 1574, he fed the people of the town haring en wittebrood (herring and white bread sandwiches).  Today, these sandwiches are handed out for free at De Waag (the weigh house); lots of beer is obviously also available, along with pretty much every festival attraction you can think of.  Think "Oktoberfest" except with an actual excuse.  Plus herring.

The Pilgrims placed their own stamp on this tradition.  They were not tied for any particular reason to the herring that saved the Dutch, but they were immensely grateful for the venison, turkey, maize, turnips, squash, beans, fish, berries and what-not afforded by their agriculturally superior hosts (and we use that term loosely, given the land-grab the newcomers would make over the next few decades), and so they created the trappings we in this country now associate with “Thanksgiving” – though the advent of tofurkey would have to wait a few centuries, and the vast improvement of pecan pie over pumpkin pie would, sadly, have to wait for Mrs. Myrtle Maintenance to come along.  But we digress.

In point of fact, the closest direct descendants (both genetically and culturally) of those earliest Puritan residents of Massachusetts are not, strictly speaking, American.  They are, in direct contradiction of the conceit American patriots like to cling to regarding the whole story, the Tory sympathizers who fled to Canada in the wake of the American Revolution, bringing the Americanized version of the holiday with them and superimposing it upon Canada’s own historical tradition of Thanksgiving.

Most students in the U.S., if they hear at all that Canada has a “Thanksgiving Day” on the second Monday of October every year, assume it is in mimicry of the American version of the holiday, and that it is in October because it is just too cold after that to celebrate a harvest festival.

The reality, though, is that the Canadian holiday came first.

In 1578, the third voyage of Martin Frobisher from England in search of the Northwest Passage set out with the intention of establishing a small settlement in the present-day Canadian Territory of Nunavut.  His fleet of 15 vessels was buffeted by terrible storms and scattered in icy waters, and virtually all hands on all ships lost all hope.  Mayster Wolfall, appointed by Her Majesty’s Council to be the minister and preacher, “made unto them a godly sermon, exhorting them especially to be thankefull to God for their strange and miraculous deliverance in those so dangerous places,” and the Canadian Thanksgiving was born.

Not to be outdone by the English Canadians, French settlers who arrived with Samuel de Champlain, from at least as early as 1604, also held huge feasts of thanksgiving on a more or less annual basis.

And, much as in the U.S., Canada’s thanksgiving festivities were held on a wide array of dates, sometimes in April, sometimes in June, sometimes in October, sometimes in November – ultimately settling on the current date (more or less – November and October have flip-flopped a couple of times) by the mid-19th Century.  In the U.S., Abraham Lincoln had proclaimed an official Thanksgiving Day in November, but the Confederacy (really, is there anything those people didn’t do wrong?!) refused to take part, so it was not until Reconstruction in the 1870s that we finally had a truly national Thanksgiving Day.  And it was not until Franklin Roosevelt signed a joint resolution of Congress on December 26, 1941, that the current placement of Thanksgiving on the fourth Thursday in November became official.

The one thing all this history makes abundantly clear is that Thanksgiving, so clearly and tautologically about being grateful, has always been multicultural.  But it has also been beset by racism, nativism, ignorance, and fear.

Yes, in 1621, the Pilgrims at Plymouth Rock were saved by Squanto and Chief Massasoit, but the natives soon learned to hate the names of bloodthirsty men like Myles Standish, and betrayal, murder and deceit became the norm of relations between Europeans and Native Americans for the next several centuries.  And even the cultural heritage of fellow European contributors to the Puritans’ welfare was not merely whitewashed, but outright scrubbed from the record.

Moreover, we glean from history only that the Puritans “sought to worship in their own way” – and yet, in modern America, we hear numerous cries from populist politicians decrying the incursion of alien cultures, most notably in calls to prevent the inclusion of Islam in codified American law.  These are the so-called “Sharia Law” debates, which, for those who actually look for any factual basis, consist of a whole lot of nothing.  There is nothing there to fear, other than the mere fact of the presence of people whose culture is not the same as the one in which the fearful were raised.  The idea that American democracy can be swept away by immigrants holds about as much weight as children’s fears that they will be pulled down the bathtub drain, or that the boogeyman will get them in the middle of the night.  There are plenty of anecdotal stories; the problem is, none of them are real.

This is the month in which a tremendous number of the trappings of American culture are on display – and yet the spirit in which those superficial elements are supposed to have been codified is on the verge of receiving a black eye.  Debate is currently raging over whether President Barack Obama can or cannot, should or should not, show leniency by issuing an executive order whose practical effect would be a dramatic reduction in the deportation of immigrants who have arrived in this country without documentation.

Great timing.  We are about to celebrate a holiday supposedly founded by people who, completely without any legal documentation or say-so from the local government, set up their little haven for illegal immigrants in the illegal immigrant town they called Plymouth.

We’re going to celebrate Thanksgiving Day this year, and we urge everyone else to do the same… but while being grateful for the blessings in our own lives, we are not going to begrudge anyone else seeking to be blessed, too.  So to those who oppose immigration, our message is really fairly simple:  grow up.  You are violating your own foundational mythology – attempting to be exclusively American is the surest way to be un-American.

Those who co-opt the purity and innocence of the foundational myths surrounding American culture, while concomitantly seeking to limit the benefits accrued by those earlier forays into pluralism and multicultural community, remind us of fire ants in the presence of a carefully tended crop of sugar beets, or maybe maize, or some other sweet fruit or veggie.  Yes, the ants certainly display all the characteristics of a species on whom Darwinian Natural Selection has shone the most favorable of lights… but when they ravage the crop (as they surely will), there are only a limited number of possible outcomes – either the farmer liquidates the field in a fury of pesticidal apocalypse, or he plants something else which the ants can’t live off of (say, tobacco), or he gives up, moves away, and plants nothing.  In none of those scenarios do the ants thrive.  The farmer doesn’t fare so well, either.

And while we tend to think of ants as “alien” or “invaders”… and even accuse fire ants of being foreign… the reality is, there are four distinct Solenopsis fire ant species in the Continental U.S., and two of them have been here longer even than the aboriginal “native Americans” – the farmer is the new guy on the block, not the ants.  In this story, we’re the bugs, not the humans.

Honeybees, in contrast, live alongside the farmer, pollinate her crops, and, if she is the sort who is careful and thoughtful in her interactions, will even give her some of their honey.  Too ham-fisted an analogy for you?  Think of it this way – when we work alongside immigrants, we can accrue all sorts of economic benefits.  Skilled labor or not, there is something they can provide (whether medical-degree-carrying new doctors, or totally unskilled grapefruit pickers, and all skills and life experiences in between) which we would not otherwise have had.  Either we accept them and live in harmony with them… or life goes sour for all of us.  It’s not either-or – it’s not “us or them” – it’s “how will we all live?”

Multiculturalism sometimes doesn’t work, as when we try to act more like the ants, but sometimes it does.  And when it does, life is sweeter for everyone.  Let’s be more like bees, and have a happy Thanksgiving.

Happy farming!

11/11/14

Bright Lights, Big City (Hot Time in the Old Town Tonight?)

“Many clever men like you have trusted to civilization.  Many clever Babylonians, many clever Egyptians, many clever men at the end of Rome.  Can you tell me, in a world that is flagrant with the failures of civilization, what there is particularly immortal about yours?”
--G.K. Chesterton, The Napoleon of Notting Hill (1904)

We at Myrtle’s place do not believe in the cynical philosophies of survivalists, nor do we believe in the fatalist philosophies of nihilists.  Neither, however, do we believe in the supremacy of human beings that is central to the dominant moral, social, and economic philosophies of the Judeo-Christian capitalist West.

People are great – don’t get us wrong.  It’s just that we aren’t the center of the universe, as you would be led to believe if you follow the teachings key to the ideological structures underpinning most of what we recognize as “civilization”.  There is a dichotomy in most systems of thought between “natural” and “man-made”.  We do not make that distinction.  As far as we are concerned… it’s all “natural” and is all subject to the immutable laws of physics.

In fact, in the World According to Myrtle, the inevitable truth of everything being interconnected is so obvious that we sometimes don’t think about the fact that not everyone shares this basic understanding.  Which is why we are so often astounded by things like global-warming-denialism, trust in petrochemical companies, and big grassy lawns.

One of the basic facts of life regarding how so-called “civilization” fits into the natural world is something called the urban heat island.  Though the concept has been around since the first decade of the 1800s, when an intrepid investigator named Luke Howard first described the phenomenon of cities being warmer than the surrounding countryside, a surprisingly large percentage of the modern population is utterly unfamiliar with the idea.

Basically, not only do cities generate more “man-made” heat (though, again, we think this term is preposterous) than do areas outside the urban center, cities also retain more heat, owing to 1) more materials with heat-retaining properties, such as asphalt, cement, insulating materials in houses and commercial properties, and vertical structures holding large volumes of non-externally-circulated air (i.e. “buildings”); 2) fewer reflective and freely radiating surfaces such as open fields and tall trees with upward-facing surfaces of high albedo (i.e., glossy leaves); 3) numerous heat-generating entities such as power plants, automobiles, street lamps, etc.; and 4) lots of houses with central heat in winter and air conditioners in summer (which, as few people stop to realize, put out more heat than cold – the heat just goes outside instead of in).


There are a number of secondary effects generated by heat islands, most of which people simply choose to ignore.  Monthly rainfall, for example, is much higher downwind from most cities, in large measure because the urban heat island effect causes a change in the wind flow around cities.  In Bryan-College Station, people sometimes humorously refer to the “Aggie Dome” which causes large storm systems to “magically” break up shortly before hitting the city proper, only to reform once they move south and east of town.  Guess what?  It’s not magic… it’s civilization.  The “Aggie Dome” is a locally obvious manifestation of a very well-known scientific principle.  It is a real thing, and it is a direct response of the environment to the activities of human beings.

Conservative blogosphere types frequently decry the possibility of macroscopic versions of this same phenomenon – to wit, they refuse to accept the possibility of anthropogenic climate change – but this strikes us at Myrtle’s place as not only wrong, but downright infantile.  Of course humans have an impact on the environment.  Everything has an impact on the environment, and the last time we checked, human beings are a subset of “everything”.  We promise we will update you the first time we encounter any evidence to the contrary.

The question is not “do we have an impact on the environment” but “how big an impact do we have?”  There is certainly plenty of room to discuss this question on the macro level (we have done so before, and will unquestionably do so again), but equally important, we think, is a discussion of the micro level, which generally goes unexamined.  Regardless of what is going on in the climate generally – and rest assured, that is certainly a massive question – what is going on in the microclimates of individual human habitations is just as important.

So, to begin with:  what causes the urban heat island?  Long story short, the principal cause is the fact that short-wave radiation absorbed during the day by asphalt, concrete and buildings of wood, glass, ceramic, and various other modern construction materials is released as long-wave radiation during the night, making cooling a much slower process in urban areas than in the pastoral surroundings.  Basically, stuff people build generally cools down more slowly than stuff Mother Nature built.  There are plenty of counterexamples, but basically the “slow-to-cool-down” stuff in nature is concentrated in the hands of Homo sapiens, because we feel more comfortable in houses and office buildings made out of such stuff, and such stuff is also easier to drive on and more resilient for storing still other stuff on.
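
For the numerically curious, here’s a toy sketch of that slower cooling.  The heat capacities and sky temperature below are assumed, order-of-magnitude stand-ins (not measurements of any real pavement or pasture); the point is only that a surface storing more heat per square meter sheds it more slowly overnight:

```python
# Toy model of nighttime radiative cooling (illustrative numbers only).
# Two idealized surfaces start at the same sunset temperature and lose
# heat by long-wave radiation; the paved surface stores more heat per
# square meter, so it is still warm at dawn.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def overnight_cooling(temp_k, heat_capacity, hours, sky_temp_k=275.0):
    """Step a surface temperature through the night in one-minute increments.

    heat_capacity is an assumed effective areal heat capacity in J/(m^2 K),
    a stand-in for "how much warmth the material has stored".
    """
    dt = 60.0  # seconds per step
    for _ in range(int(hours * 3600 / dt)):
        flux = SIGMA * (temp_k**4 - sky_temp_k**4)  # net long-wave loss, W/m^2
        temp_k -= flux * dt / heat_capacity
    return temp_k

sunset = 303.0  # about 86 F at sundown

# Assumed values: thick asphalt/concrete stores several times the heat
# of the thin active layer of a grassy field.
for name, capacity in [("asphalt", 8.0e5), ("open field", 2.0e5)]:
    dawn = overnight_cooling(sunset, capacity, hours=10)
    print(f"{name:>10}: {dawn - 273.15:5.1f} C at dawn")
```

Run it and the paved surface ends the night markedly warmer than the field – the urban heat island in a dozen lines.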

What are the impacts of this basic reality?  First, night time temperatures in the city are mostly higher than they are in the country.  The people first to notice this truth are gardeners… especially tomato gardeners.  Fruit set for virtually all vegetables, but for tomatoes in particular, depends upon sufficiently warm daytime temperatures for energy creation (critical for growth and development) coupled with night time temperatures low enough for the consolidation of sugars – cool nights slow the plant’s respiration, so fewer of the sugars manufactured during the day get burned off overnight – in order to create a “fruit” (aka a tomato) which has the proper nutritional value to ensure the germination of its progeny (aka a seed).


As finicky as tomatoes are, it becomes apparent quite quickly to connoisseurs that Fall tomatoes taste better than Summer tomatoes in the Brazos Valley, because while the daytime temperatures stay high enough well into the Fall… during the Spring and Summer, the night time temperatures get too high far too quickly.  Sure, the plants still set fruit… but the flavor is just, well… wrong.  And it’s wrong because it’s too hot at night.
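
If you want to put rough numbers on “too hot at night,” here is a little sketch using the commonly cited tomato fruit-set window of roughly 55–75 °F for night lows; the forecast lows below are invented for illustration:

```python
# Check invented night lows against an assumed tomato fruit-set window.
# The 55-75 F thresholds are the commonly cited rule of thumb, not gospel.

NIGHT_LOW_MIN_F = 55  # below this, blossoms tend to drop
NIGHT_LOW_MAX_F = 75  # above this, pollen viability (and flavor) suffer

def fruit_set_ok(night_low_f):
    """True if a night low falls inside the assumed fruit-set window."""
    return NIGHT_LOW_MIN_F <= night_low_f <= NIGHT_LOW_MAX_F

forecasts = {
    "July":    [78, 77, 75, 79],  # muggy summer nights (made-up values)
    "October": [58, 62, 55, 60],  # crisp fall nights (made-up values)
}

for month, lows in forecasts.items():
    good = sum(fruit_set_ok(low) for low in lows)
    print(f"{month}: {good} of {len(lows)} nights in the fruit-set window")
```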

There are other, more dramatic impacts, of course.  You will occasionally hear stories of large tornadoes hitting urban areas.  However… your everyday, ordinary garden-variety tornado almost never hits an urban area.  The trailer park on the edge of town, sure.  But town square?  No way! 

Why?

Because small thunderstorms almost never happen in urban areas.  The urban heat island causes lower-level temperature inversions that most cumulonimbus systems simply cannot penetrate – they hit the heat island and “poof!”  The system may (if it is strong enough) recreate itself once it moves past the interfering heat source (see:  “Dome, Aggie”), but otherwise it just disappears, to the disappointment of farmers downwind.  Only a very large storm system is likely to be able to penetrate the urban heat island, and as a consequence only a very large tornado is likely to impact an urban environment.

So… what should people do about this phenomenon that we have created?

The answer to this question depends upon the answer to a vast number of other questions, not least of which is “What do we want?”

If what we want is to control nature, then, hey, do whatever you want.  You’re not going to succeed, so you may as well go down swinging with whatever ridiculous philosophy you wanted to pursue in the first place.  If you’re going to be a failure, you should at least be a self-satisfied failure.

If, on the other hand, what we seek is a way to live in a more sensible, survivable equilibrium, then there are several steps we can take, some of which have plenty of empirical support, and others of which make good sense based on what we know about the laws of physics, in addition to several decades of meteorological evidence.

One of the first cities in North America to take this problem seriously was Atlanta, Georgia, which in the 1990s instituted several statutes related to building and development aimed at lowering the urban temperature.  Rooftops and roadways in Atlanta must be built of certain materials and certain colors, a change which increased the city’s albedo considerably.  White or grey replaced black and brown… and the citywide average temperature dropped by nearly 3° Fahrenheit over a decade.  Given that during that same decade the global average daily temperature rose… yeah, Atlanta was on to something.
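
The arithmetic behind an albedo change like that is simple enough to sketch; the albedo figures here are assumed textbook values, not anything from Atlanta’s actual statutes:

```python
# Back-of-the-envelope: how much less solar energy a lighter surface absorbs.
# Albedo values are assumed, typical textbook figures.

incoming = 1000.0    # rough peak solar irradiance on a clear day, W/m^2

dark_albedo = 0.08   # fresh asphalt reflects roughly 5-10% of sunlight
light_albedo = 0.35  # white/grey "cool" surfaces reflect roughly 30-40%

absorbed_dark = incoming * (1 - dark_albedo)    # 920 W/m^2 soaked up
absorbed_light = incoming * (1 - light_albedo)  # 650 W/m^2 soaked up

print(f"dark surface absorbs:  {absorbed_dark:6.0f} W/m^2")
print(f"light surface absorbs: {absorbed_light:6.0f} W/m^2")
print(f"difference:            {absorbed_dark - absorbed_light:6.0f} W/m^2")
```

Some 270 watts per square meter of midday sun that never becomes stored heat – multiply that by every roof and road in a city, and a citywide temperature drop stops looking surprising.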


In addition to cool roofs (about which we have written before – if you haven’t yet painted your roof white, hey, get on it!), cool road surfaces, permeable asphalt, increased greenways, and the replacement of lawns with herb gardens, shrubberies, and tall trees (capable of both absorbing heat and reflecting any unused solar energy so as to avoid heat pollution) are all effective mitigators of urban heat.  Replacing combustion engines with electric vehicles is another obvious step urban dwellers can take… and better still, getting rid of engines altogether (also known as “ride your bike to work, ya bum!”)

Sometimes we are accused by traditionalists of taking glee in the idea that Western civilization is going to crumble.  Note that we didn’t say in the possibility that Western civilization will crumble – that is an inevitability, as any serious student of history would know.

Well… we have to confess to being more or less guilty of this charge.  The thing is, the fall of one form of civilization has always led to the institution of another.  And we are not pessimists, focused on what we will be losing when the status quo comes to its inevitable conclusion.

No, we are optimists.  We see that the way of life to which the vast majority of our species has become accustomed is unsustainable, and cannot last.  And we look at the possibilities and realize… you know what?  We can do better.

In fact, we are quite sure that, even though we’ll be dragged there kicking and screaming, humanity will do better.

Happy farming!

9/28/14

Almost Time for Ghoulies, Ghosties, Gremlins and... Gardeners?

“The basis of optimism is sheer terror.”
--Oscar Wilde, The Picture of Dorian Gray

Scholars from a variety of fields including anthropology, theology, linguistics, history and literature will debate practically anything, including whether Halloween is merely a Christianized version of Samhain or evolved independently.

As is the case in many such scholarly disagreements, they’re all being silly – the important thing about Halloween is that it is fun.  We don’t mean to demean anyone’s deeply held convictions, of course, but if your deeply held convictions lead to a drab, dreary, colorless, somber life of drudgery… well, maybe you ought to consider shopping for a new set of beliefs, beliefs which include the idea that being human should be enjoyable, not miserable.  There are enough troubles that happen to us without our bringing unhappiness on ourselves.

Which makes Halloween a perfect holiday, when you think about it – ghosts and ghouls and creepy-crawlies represent all the horrors of our imagination… and Halloween places all of those creepy-crawlies in an ironic and humorous context.  We cringe and cower in fear on a typical Monday morning when our bosses, or supervisors, or teachers (or students), or customers, or suppliers, or whomever, load us down with a laundry list of complaints and problems with which we are ill-equipped to cope.  However, if we have a freaky automated rubber hand reach up at us out of a candy dish, our hearts pound with fear… but we laugh.

We laugh because we know the fear is not real, and that knowledge equips us to rise above our fears of those things which are real – it gives us the hope and courage necessary to face harsh realities, whether they be in the form of the coming of winter and desolation (oh so important to farmers for as long as there have been farmers, and even more important to hunter-gatherers before that), or more modern fears, whether of problems at work, school or in our personal lives.

Not all of the traditions associated with Halloween speak to this psychological reality, of course; some are simply festive.  Candy corn, caramel apples, bonfires, costumes of all sorts (frightening and – more and more frequently – not so frightening), silly songs, silly movies, and silly decorations form the centerpiece of what is really just one extended party marking the end of summer, and the beginning of something else.

Samhain was the Celtic New Year for a reason; harvest marked the dying of the old year, when the plants have given their bounty and are returning to the dust from whence they came.  There are equivalent harvest festivals in most of the world, and even in places where religious fanaticism drowns out the individual freedom to get freaky on Halloween, something takes its place.

The Puritans, for example, strongly discouraged recognition of All Hallows’ Eve (the evening before All Saints’ Day) – not because of any sort of direct satanic influence, as is frequently the case among extremist fundamentalist Christians in our own era, but because the holiday was seen as essentially Catholic.  (Though, to be fair, they considered Catholics little better than Satanists, but why quibble?  One flavor of crazy intolerance is as good as another.)

In spite of their innate problems with All Hallows’ Eve (and associated harvest festival traditions, Celtic or otherwise)… the Puritans brought us Thanksgiving, which is nothing more than a dressed-down Protestant version of Halloween.  Simple clothes and formal dining (or at least as formal as working-class families get) take the place of free-wheeling frivolity… but the basic message is the same:  the time has come to give thanks for the bounty we have received, that we may be prepared for what follows.

There is plenty of room for both ways of celebrating the harvest; most people never even stop to consider the subtle tension between the perspectives offered by these two intimately related holidays – the vast majority of Americans celebrate both Halloween and Thanksgiving without giving it a second thought.

Halloween is not a uniquely American holiday, of course, but what Americans have done with the day says a lot about why it is important to us.  Historian Nicholas Rogers writes that “some folklorists have detected its origins in the Roman feast of Pomona, the goddess of fruits and seeds, or in the festival of the dead called Parentalia,” but that “it is more typically linked to the Celtic festival of Samhain,” which comes from the Old Irish words meaning “summer’s end”.

The mish-mash of traditions in modern American Halloween festivities has stripped virtually all of the religious overtones from the holiday and replaced them with purely secular meanings and traditions – even where religious or mystical festivals such as Día de los Muertos are celebrated alongside Halloween, they are clearly seen as two separate entities.  You celebrate the one the night of October 31st, and the next day you move on to the festival honoring the dead.  Where there is overlap, it feels very much like a meeting of friends from different backgrounds at Yule, some of whom are celebrating Chanukah, while others are celebrating Christmas, Kwanzaa, or some other festival.  (Except maybe Festivus, because those people don’t get along with anybody.  But we digress.)

Much of the mish-mash of American Halloween is to be expected, based simply on the idea that the American melting pot is itself a mish-mash; we are the mongrels of the world, a mix of ethnicities, races, religions, cultures, languages and traditions so diverse we often lose track even within our own families within a generation or two of just exactly who we are.  It makes perfect sense, then, that what we do will not have the same degree of continuity you would find in places in the world where families have been in residence for hundreds or even thousands of years.

But the strongest influences on the holiday also tend to make it a set of traditions particularly prone to evolve over time.  The Celtic celebration of Halloween is not altogether easy to enumerate – yes, we know that “Samhain” was the fall harvest/New Year celebration… but exactly what early Celts did during this time is a matter of conjecture.  Many of the traditions passed down as “pagan” actually originated during the long and influential era of Celtic Christianity, and drawing distinctions between which Irish traditions date from the first millennium C.E. and which came from before then is an almost completely pointless exercise, both intellectually and philosophically.

Is your Jack O’Lantern carved from a turnip really more authentic than one carved in a Boston Squash or some other kind of pumpkin?  And even if it were, would it be any more fun?  Probably not.  Probably, you’d get to display your “authentic” Jack O’Lantern at the kind of party where no one else was much fun to be around, either.  (But hey, that’s just us.  We like candy corn, so what do we know?)

And ultimately, that is the American contribution – while many bemoan the crass commercialism of Halloween (and make no mistake, there is clearly a lot of that on display), this misses the point.  Commercialism is the manifestation of a vibrant truth, one which is not so negative:  if life is a game, then whoever throws the best parties wins.  We prefer the kinds of parties where everything is homemade… but make no mistake, the reason Halloween sells is that, whoever makes “the stuff,” it is stuff people want.  And even if life isn’t “a game,” learned optimists know that you frequently only get where you’re trying to go if you treat it like a game.

This is the optimistic American contribution to All Hallows’ Eve – some are frightened by the idea, because it smacks of the kind of licentiousness which at its worst brings us things like Detroit’s “Devil’s Night” – but that is just one extreme.  At the other end of the spectrum, this spirit of freedom (best represented by the tradition of wearing costumes, and freeing our identities from our workaday selves, which, after all, are just another disguise we wear, albeit on a regular basis) helps us escape the fears and troubles which all too easily overwhelm us.

We have nothing against other harvest festivals:  Jewish Sukkot, Turkmen Hasyl toýy, Persian Mehregan, Russian Dozhynki, Yoruban Ikore, and Korean Chuseok all have unique stories to tell, and each contributes in its way to the succor of the human spirit.  Some are more closely tied to the simple life which we advocate on a regular basis, and there is much to be said for celebrating traditions in a more agrarian manner, as a means to encourage people to return to the land… but in America, Halloween is what it is because people have become what they are.  As such, we approve.  Strongly.

We’re a little over a month away, but the supply of pumpkins and other strangely shaped winter squash and gourds has started making its way to the vegetable stands around town.  It’s almost time to break out the black and orange, string up some “ghosts” in the trees in the front yard, and hang “Witchy-Poo” on our front door.  Because, you know… fun!

Happy farming!

9/23/14

Ever Get Tired of Ragging on Ragweed?

“Naturam expellas furca, tamen usque revenit.”
(“You can drive nature out with a pitchfork; she will nevertheless come back.”)
--Horace

At any given time, roughly 25% of the population is suffering from allergic rhinitis, known more commonly by the inaccurate name “hay fever”.  There are, of course, some people who are allergic to hay, but since they are typically smart enough not to go on hayrides, or venture too close to a horse barn, they generally don’t have much of a problem.  And yet… there’s that nasty fact that airborne pollen from a whole host of non-hay sources gets under their skin (or, at least, into their nostrils).

Roughly half of all reported cases are the result of sensitivity to one particular culprit – ragweed.  It is difficult to know, of course, how many cases go unreported simply because, while irritating, the symptoms were not bad enough to lead to a doctor’s visit.  That having been said… ragweed season is an early Autumn affair in the Brazos Valley, and… yeah, it’s here.

We say “ragweed”… but this is actually not just one species, it is an entire genus of plants (Ambrosia), some of which are even grown on purpose, believe it or not.  Still, the two most commonly cited varieties are most definitely wild and unwelcome in the typical garden (though we know of some atypical gardeners who swear by them!) – common ragweed (Ambrosia artemisiifolia) and great ragweed (Ambrosia trifida).  There are around 50 species in total, found all over the Americas (and now running rampant as invasive species in Europe), and many of them are quite attractive, aside from the fact that they make so many people miserable.

Hay fever was first identified in 1819 by physician John Bostock; pollen was identified as the causal agent in 1859 by Charles Blackley; it was not until 1906 that Clemens von Pirquet identified the hypersensitivity of the human immune system as the mechanism by which the condition creates such misery.

All the years since then, of course, have seen the most typical of human reactions – burn the offending plants!  Kill them all!  And, sure enough, since the identification of the various Ambrosia species known as “ragweed” there have been an almost uncountable number of strategies attempted to eradicate the offending weeds.  Each has met with very limited success.  Some have only made things worse.

Pulling the plants up by hand, of course, presents a high degree of difficulty.  For one thing, they must be identified very young, before their tough root system grows sufficiently to tax one’s muscles mere minutes into what (by nature) must be a long day’s work.  To make matters worse, while the pollen is not toxic (remember, it’s your own immune system that generates the histamines that are making you ill), the leaves and stalks of the plant are mildly toxic, enough to irritate your skin and, after enough exposure, give you a nasty rash.

Mowing is somewhat more effective, provided it is done once the plants are tall enough to cut, but still young enough not to have bloomed.  And, of course, it must be repeated frequently.  And… it only gets those plants growing in a lawn or field – any plants growing in a garden plot, on a farmstead, or in an out-of-the-way ditch, culvert, wild space, vacant lot, etc. will be untouched.  And, since the pollen can stay in the air for days or even weeks at a time, and can be borne hundreds or sometimes even thousands of miles… mowing down your own ragweed does you no good whatsoever if you are downwind from someone else’s.

Not too many years ago, a commonly applied “solution” was to burn fields with ragweed; we’ll leave to your imagination the sum total of what exactly was wrong with this little stratagem.  Among other things, ambrosia smoke is its own form of toxin, irritating in ways the pollen could only dream of being.  Fortunately, this strategy is no longer recommended even by the most backwards of agricultural extension agents.

And then there is the chemical approach.  Vast quantities of herbicide have been applied to ragweed patches over the years.  The problem, of course, is that Ambrosia is particularly resistant to the vast majority of commercially available herbicides; pour buckets of Roundup on it, and it will thank you for the watering and go on its merry way.  Even through our sneezes, we can admire its tenacity.

Worse than the fact that herbicides don’t work, however, is that herbicides do kill the one thing that does work.  Ragweed populations can only be kept in check using natural means.  And there are, it turns out, plenty of animals who not only are not irritated by these plants, they thrive on them.  There is a long laundry list of Lepidoptera species which thrive on ragweed.  That’d be moths and butterflies, species who, while they like eating ragweed, do not like ingesting herbicides.  Oops.

A 1973 review of ragweed control techniques published in the Bulletin of the Torrey Botanical Club found that regardless of which control technique was used, after a few years there was no appreciable difference from simply leaving the ragweeds in place.  We could not find any more recent efficacy studies, but we suspect that given the increase in environmental degradation, fields with herbicidal controls are probably worse than simply leaving the plants in place, owing primarily to the decimation of foraging populations.

Proper population control of any “weed” (basically, any plant you find noxious for whatever reason) cannot take the form of eradication.  The sooner we rid ourselves of the notion that we can “do away with” things we don’t like in nature, the better.

No, “control” can only come in the form of management.  And management means creating balance.

We already noted one part of this equation:  tending to the foraging populations, namely moths and butterflies.  Helping those populations along by planting other food sources is one important step in limiting the wild stands of ragweed – plant enough milkweed in your garden, and not only will it take up space that might otherwise be used by a resourceful A. artemisiifolia plant, but it will succor enough flying critters to eat any nearby A. artemisiifolia plants that might otherwise have given you trouble – plants in your neighbor’s yard, for example, which you would never have been able to get at with a mower, a blowtorch, or a spray bottle full of poisons.

And we just hinted at another solution – increased biodiversity in your garden, making use of every available space to plant other things.  One of the biggest problems with lawns (and we have written before about how icky we find them) is that they limit biodiversity and create niches for invasive “weed” species.  We hate the term “weed” but in this case, its meaning is appropriate – a “weed” is a plant whose presence is indicative of a problem.  Note that the “weed” is not the problem, it is there because of the problem.

By increasing biodiversity (putting in a flower, herb, veggie, etc. bed, preferably a combination of all of the above) the niche for the offending plant is eliminated.  There may still be room for the odd individual or two… but there is no longer room for a large stand of invasives, and there will be a much greater population of foraging insects and other creatures who, after having sampled one species, move on to another.

Ragweed sensitivity in the general population is higher now than it has been in hundreds of years; there are a lot of reasons for that, all of them related to human activity.  Various forms of pollution have left our immune systems primed for hypersensitivity to pollen; additionally, once pollen sensitivity has been kicked off, all those chemical pollutants in the air are more keenly felt in our nasal passages, lungs, eyes, skin… it’s enough to make you sick.

But it’s not the ragweed’s fault.  It is the fault of the lack of balance in our relationship with nature.  We won’t begin to truly breathe easy until we give up on our stubborn attempt to tell Mother Nature what it is we want her to be doing.  Mother Nature knows exactly what she needs to be doing – in the meantime, try some lemon and local honey in hot water; ragweed won’t be in bloom forever, and a nice hot beverage is as good a way as any to while away the time until the elm trees are in bloom…

Happy farming!

9/21/14

The Circle of Life is Made of Leaves (and Big Logs)

“I like trees because they seem more resigned to the way they have to live than other things do.”
--Willa Cather, O Pioneers! (1913)

One of the unfortunate consequences of living in a drought-prone area is the occasional need to cut down dead or dying trees which should have been healthy for many years to come.  We at Myrtle’s place are in the middle of just such a project, with three exceptionally large elm trees all having died during the last couple of summers – if only they had been able to hold out one more year:  Summer 2014 was especially wet compared to the dry blast furnaces of the previous couple of summers.
[Photo: Felled logs make excellent garden bed borders.]

We have undertaken this sad task at precisely the time of year when we are ordinarily thinking a lot about our trees anyway, given that we are on the verge of the autumnal leaf-drop season.  Of course, a lot of leaves drop throughout the year, simply based on the wide variety of species of trees and bushes on our lot, but the largest number of our trees conform to the stereotypical fall foliage festival, and so, we are eagerly anticipating that day (probably in October, but one never knows for sure) when our tallest oaks start looking a little pale, and then overnight turn to gold, and then quickly to red, and then quickly to bare.

The first signs of the coming of fall actually come from our grape vines, which start going all brown and crinkly in late August every year, though they manage to limp along sometimes until cooler weather finally arrives shortly before Halloween.  Every year the timid among Brazos Valley gardeners wonder if they’ve done something wrong, if their muscadine vines are dying… and every year, they come back stronger than before, often because of and not just in spite of the abuse and neglect they received the year before.

We find a lot of surprises when the grape leaves start falling.  Some of these surprises are charming and amusing, like the birds’ nests tucked away in precarious nooks and crannies that are tantalizingly oh-so-close-and-oh-so-far-away from our cat’s reach.  Some are natural and important ecologically, but still give us the willies, like giant nests of wasps going about their business mere inches above our heads on a daily basis with us none the wiser… until the leaf cover falls.

We know, though, that the falling of the grape leaves is the surest sign that we are entering an important part of the leaf cycle of our garden.  Er… life cycle.  Sorry about that.

[Photo: Chickens in the foreground, oaks in the background.  The oaks are the tall ones who don't cluck.]

Anyway, as we were saying… we frequently joke that we operate on an oak leaf economic basis.  The bedding for our chickens comes from a three-foot-deep layer of oak leaves, for example, and as that stews and composts it becomes the rich humus which we use to fill our raised planter boxes for our various veggies.  Likewise, while we use cypress mulch for our pathways, when we mulch our herbs and vegetables we use the much cheaper and much more readily available oak leaf mulch – sometimes whether we plan to do so or not, particularly in our herb beds directly beneath the canopy of our largest water oak, which sheds so dramatically that there is simply no way we could keep the leaves out of the rosemary even if we wanted to, which, of course, we don’t.

There is almost unanimous support in the composting world for including leaves in any mix of organic matter being broken down for soil nutrition, and there are good reasons for that.  We have mentioned before that the healthiest soils are those which best reflect the natural growing conditions for whichever ecological niche you happen to occupy.  In most of the world, the biggest contributors to the native “compost” will be trees and shrubs, and usually it will be the deciduous varieties of each.

But… apart from their obvious bounty, why?  What makes leaves so important nutritionally for the myriad plants and animals that live on them?

The primary clue comes from the fact that leaves are so important to the plants which grew them in the first place.  Photosynthesis – the conversion of light to food – is the first and foremost function of leaves.  Their green color comes from the chlorophyll which forms the building blocks of their light-conversion-engines, and the spines and veins you can see in a leaf if you hold it up to the light show that they are very much more akin to animal life forms than we typically give them credit for being.  Trees and shrubs have circulatory systems, and it wouldn’t be too far off the mark to credit them with a central nervous system, albeit one that reacts somewhat differently from what you might see in the animal kingdom.

[Photo: Pretty much everything in our garden is growing in composted oak leaves, even the stuff in baskets.]

The process of leaf loss is technically referred to as abscission, and though it happens every year, it never ceases to amaze us.  In temperate, boreal, and seasonally dry climates, abscission allows the tree to allocate resources properly for the seasonal unavailability of one or more essential components of its usual life-cycle.  Basically, in cooler climates, or in regions far enough north that winter daylight is insufficient for enough photosynthesis to meet usual nutritional needs, or in areas where long monsoon rains are followed by many months of drought-like aridity, trees will shed their leaves and “shut down” – the metabolism of the entire organism slows to as close to nothing as possible, and the plant waits for conditions to become favorable again before sprouting new leaves.

Obviously, given this strategy, the tree or shrub would not want to discard leaves which still had any sort of nutritional value to the organism as a whole, and for that reason alone one might question whether fallen leaves have any organic value, but that would be an oversimplification.  There is still plenty left in the biochemical goodie bag of a fallen leaf, it’s just that it is not in a form readily accessible to the plant “as is”.

[Photo: It doesn't matter where you are on our property, the big oaks out front dominate the horizon.  Good.]

In addition to creating energy, leaves are often storehouses for energy as well.  Further, many of the defense mechanisms plants use to fend off foraging animal life – thorns made of tightly arrayed lignins, tannins, and other natural poison compounds – are comprised of nutritional components which have been allocated for the purpose of defense, and which are readily broken down in the compost heap (or the natural humus of a decomposing forest floor), but which would not be easily broken down by the tree itself when getting ready for a long winter’s nap.

It makes sense, then, that the animals which forage on green leaves would ignore fallen browns, reds and yellows, but it also makes sense that a whole new category of creatures would feast on the discard pile.  A wide variety of insects and worms (not to mention microbes), in addition to molds and other miscellany, go to town on the forgotten side of the forest salad bar.  And that’s before even considering the leaves that get hauled off to the chicken coop, where they get mixed in with… ahem! other organic materials.

Even if we didn’t use fallen leaves in our garden or our chicken coop, of course, it still goes without saying that a pile of raked up leaves is just plain fun to jump in.

Happy farming!

9/16/14

Hurry, Curry Favor With Fava Curry (five times fast...)

“Κυάμων ἀπέχεσθαι” (“Abstain from beans.”)
--Pythagoras of Samos

For a guy who was most famous for a theorem often recognized as one of the building blocks of modern mathematics, Pythagoras was a crackpot.  A total nutjob.  A lunatic.  Really out there, is what we’re getting at.  If you want an early look at a Jim Jones-esque cult in the ancient world, look no further than the Pythagoreans.

However, one of the beliefs espoused by this small society of differently rational individuals which often meets with the most ridicule by those first studying their ways turns out (as so many behaviors of that sort do) to have a reasonable basis.  Pythagoras’ importuning of his followers to abstain from beans was probably a specific reference to the fava (Vicia faba).  And it may have saved at least some of them from a fairly painful death from complications resulting from glucose-6-phosphate dehydrogenase deficiency, also known as favism.

This condition affects certain elements of Mediterranean and African populations; carriers of the G6PD-deficiency allele have a degree of resistance to malaria, which explains why the condition evolved, but it sadly renders them unable to eat fava beans without sometimes fatal reactions to the high levels of vicine and convicine in the beans.  High levels of tyramine also make favas dangerous for those who take MAOIs (monoamine oxidase inhibitors) as a treatment for depression or other psychiatric conditions.

Now that we’ve scared you... let’s talk about how wonderful fava beans are!  (Because, you know, they are!)  Somewhere around 10% of those with Mediterranean ancestry need to worry about favism; the rest of us need to worry about how to get enough favas in our gardens and on our plates, and Pythagoras can go jump in a lake.

Vicia faba, also known as fava, broad bean, and horse bean, is native to North Africa, Southwest and Southern Asia, and is widely cultivated elsewhere.  In the U.S., the vast majority of the fava beans in cultivation are not actually grown for human consumption, but rather as a cover crop for forage and for nitrogen fixation (a quality which makes favas perfect in the backyard garden – more on that in a minute).

The history of fava cultivation is long – as long, in fact, as any crop other than the earliest stands of wheat-like plants which represented the first human attempts at agriculture.  To this day, farmers in India, Iran, Egypt, Morocco, Sicily, Sudan, Greece, Ethiopia, Nepal, and even Peru and Colombia grow favas in basically the same way as farmers would have done six or seven thousand years ago.

Favas are a cool weather crop, which surprises those who are not familiar with how they are grown – one would not associate a plant from North Africa and the Mediterranean with “cool weather”, but there you go.  In northern climates, or in mountainous areas (Ethiopia or Iran, for example), favas are easy to grow in the summer, but elsewhere (Egypt?  Southeast Texas?) one would grow them in fall, winter, and early spring.  In fact, one of the best reasons to grow favas is their ability to overwinter; we have had luck with plants surviving the harshest freezes the Brazos Valley can dish out.  Snow has not fazed them, though last winter’s ice storm, when freezing rain coated them with a half-inch of ice, did wipe out a sizable portion of our crop – sizable, but not complete; several plants survived even that amount of cold-weather abuse.

In addition to the beans, whose culinary uses we’ll describe in greater detail in a moment, the plants themselves have a lot going for them.  Favas are one of the best beans available in terms of nitrogen-fixing qualities.  A plot which has been overwintered with favas is perfectly ready come spring to grow practically anything you want with no soil additives necessary – corn, tomatoes, whatever you will.

Further, a mature fava plant can stand anywhere from three to five feet high, straight upright, with leaves radiating outward only six to twelve inches and white or purple flowers (depending on variety) which provide the local bee population with all the winter forage their little hearts could desire – a showpiece in your winter garden that will be the envy of your neighborhood.  When everything else is dead and brown, favas are gloriously and optimistically green and vibrant.

As if that weren’t enough… the leaves and flowers are tasty additions to the salad bowl.  Many of the same nutritional qualities found in the beans are found in the leaves and flowers.  High in fiber, high in protein, with a complete panoply of B vitamins, C and K vitamins, Calcium, Iron, Magnesium, Manganese, Phosphorous, Potassium and Zinc, favas are also one of two classes of beans (velvet beans, aka Mucuna pruriens, being the other) which contain L-DOPA, the naturally occurring precursor to dopamine – basically a naturally occurring anti-depressant.  And if that weren’t good enough, L-DOPA is also a natriuretic agent, potentially hypotensive… i.e., it lowers blood pressure.

The beans are eaten in about as many different ways as there are cultures which have been exposed to them.  Historically, both the Romans and Greeks took young beans and parboiled them, occasionally eating them as a puree.  Mature beans are often fried and salted or spiced (this is popular everywhere from Latin America to Thailand).  Chinese cooking features a paste called la doubanjiang (“hot pepper beans”) and in Mexico habas con chile are eaten much like spiced peanuts are consumed in the U.S.

In Egypt (as opposed to the Levant and most of the rest of the world) favas are the primary ingredient of falafel.  Garbanzos (aka “chickpeas”) are the main ingredient in the falafel with which most people are familiar, but fava-based falafel has a richer, nuttier taste that Egyptian foodies rhapsodize about at great length as being superior – we would never turn up our nose at Levantine falafel, of course, but the Egyptians have a point.  This is somewhat akin to a debate about the relative aesthetic merits of a meadow versus a glen, but if the question were merely over nutritive content, the Egyptians with their favas win, hands down, no contest.  Garbanzos are healthy, but favas are the king of beans in that regard.

As such, it comes as no surprise that favas are the most widely consumed food in North Africa.  The Egyptians eat ful medames the same way Norwegians eat fish.  In the Sudan, mashed favas flavored with sesame oil and a bit of jibna (a sheep’s-milk cheese similar to feta) form the basis for the most common lunch entrée.  In Morocco, street vendors serve up a fava bean dip called bessara the way New Yorkers eat hot dogs.  Ethiopian shiro flour forms the basis of many of their dishes, and one of the main ingredients in the flour is favas.  And numerous dishes important to the Ethiopian Orthodox Church make heavy use of favas.

And speaking of religion… there is a long history of theological/spiritual/supernatural association with the fava.  Pythagoras and his followers were vegetarians owing to an intriguing form of animism in which they believed not only in reincarnation for human and animal spirits, but also for the spirit of beans – details are sketchy because reliable witnesses tended not to get invited to their parties, but that’s as believable an interpretation of what data is available as any other.

The Romans would make an offering of favas to the lemures (“house ghosts”) during the festival known as Lemuria.  In Ubykh culture (historically a people of the northwestern Caucasus who resettled in modern Turkey in the 19th century) there was a form of fortune-telling anthropologists refer to as “favamancy”, in which fava beans were thrown on the ground (similar to the throwing of bones in Norse culture).  In the Ubykh language, in fact, the word for “fortune teller” was literally “bean thrower”.  And in much of Italy, to this day, fava beans are traditionally planted on November 2nd, All Souls Day.

At Myrtle’s place, we do not ascribe any magical powers to the fava bean, nor do we believe they house the souls of long departed ancestors.  We do treat them with reverence, however, because they meet so many of the standards we have set for what makes a good garden plant:  they grow easily, and once sprouted do not require much in the way of maintenance.  They are hard to kill.  They produce tasty and nutritious fruit.  And by the time they die off each Spring, they have left the soil better than it was when they found it.  They are the best possible plant for the cool half of the year, in fact.

And in the Brazos Valley, October is when to plant them.  We can hardly wait.

Happy farming!

9/15/14

To Eat or Not to Eat... Or What to Eat or Not Eat... Or Something...

“If beef is your idea of ‘real food for real people’ you’d better live real close to a real good hospital.”
--Neal Barnard, M.D.

Meat has always been a problematic question for modern humans, even for those who have chosen not to think about the problems associated with the consumption of meat.  Leaving aside the ethical questions for a moment, and just focusing on health, there are a handful of advantages posed by meat consumption (particularly seafood, but also including red meat), juxtaposed with an ever mounting pile of disadvantages (particularly as related to red meat).  We aren’t doctors, but we do think it’s a subject worth revisiting from time to time, particularly because most people on both sides of the consumption aisle are (to put it mildly) not used to discussing the matter politely.

Lest you think we’re going to dogmatically say “don’t eat it,” we’d like to start with some interesting data points from a 1999 meta-analysis of data from five different countries, published in the American Journal of Clinical Nutrition: 

Dietary Style                 Mortality Ratio
Pescetarian (fish eater)      0.82
Vegetarian (lacto-ovo)        0.84
Occasional Meat Eater         0.84
Regular Meat Eater            1.00 (reference)
Vegan                         0.70 to 1.44 (wide range owing to limited data points)

Obviously, as is true of any population study, these findings do not mean there are absolute truths applicable to each and every individual’s eating habits… but the trend lines are clear.  As a general rule, one would expect a person whose diet includes no animal products beyond fish, the occasional egg, or dairy to outlive, on average, the person who has red meat at every meal.

It is interesting to note, of course, that the statistical differential between this optimal group and the occasional meat eaters is tiny (0.82 versus 0.84); there is a far greater difference between the regular meat eaters and the occasional meat eaters (defined as those who eat no more than two servings of red meat per week) than there is between the occasional meat eaters and the vegetarians and pescetarians.
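
To put numbers on that, here’s the arithmetic spelled out – a minimal sketch in Python, using the ratios from the table above (the category labels are our own shorthand):

    # Mortality ratios from the 1999 five-country meta-analysis cited above,
    # with regular meat eaters as the reference group (1.00)
    ratios = {
        "pescetarian": 0.82,
        "vegetarian (lacto-ovo)": 0.84,
        "occasional meat eater": 0.84,
        "regular meat eater": 1.00,
    }

    # How much of the benefit comes from merely cutting back...
    cutting_back = ratios["regular meat eater"] - ratios["occasional meat eater"]
    # ...versus giving up red meat entirely in favor of fish?
    cutting_out = ratios["occasional meat eater"] - ratios["pescetarian"]

    print(f"Cutting back: {cutting_back:.2f}")  # 0.16
    print(f"Cutting out:  {cutting_out:.2f}")   # 0.02

Roughly eight times as much of the measured difference comes from cutting back as from cutting out entirely – worth keeping in mind before anyone lectures anyone at the dinner table.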

Vegans, naturally, are in a category all their own, because nutritional intake varies more within their group than within any of the others – a careful vegan is better off than anyone.  A not-so-careful vegan?  May as well be playing in traffic.  We’ll get to why in a future post, more than likely, but the odds are that most vegans reading this blog already know more about eating a healthy vegan diet than we do.  We’re more concerned with elucidating for the omnivore crowd, for purely utilitarian reasons, so please don’t feel excluded.  And for any vegans who don’t know about the nutrients not typically available from plant sources (vitamin B12 chief among them), for heaven’s sake, go find yourself a vegan mentor who does.

Now then, back to meat…

We suspect that a great deal of the differential between the occasional meat eaters and the regular meat eaters has less to do with the dietary value of beef itself and more to do with a whole host of confounding factors – quantity consumed at any given meal, preparation methods, what else is eaten, and so on.  For example, an occasional consumer of beef is more likely to consume fatty fishes (that is, fishes high in omega-3 fatty acids) than is a regular beef eater; as it turns out, omega-3 fatty acids are essential for a host of bodily functions with a strong correlation to long-term health.  So… it’s not just that occasional beef eaters eat less beef; it’s that they also eat other things in greater proportion than do frequent beef eaters.

Likewise, the occasional beef-eater (especially one doing their best to minimize the ecological impact of cattle-raising methods – hello grass-fed, free-range; good-bye corn-fed, factory-farmed) is much more likely than the regular beef-eater to be getting a healthy dose of dark green vegetables and healthy starches (long-grain rice, quinoa, etc.), and much less likely to be gobbling fried foods and processed flour and sugar – it’s not just what they are eating, it’s also what they are not eating.

Then, too, the occasional beef-eater is more likely to be a gourmet, someone who takes taste seriously, and is therefore less likely to be eating lower-quality cuts of meat, and less likely still to be eating processed meats.

And, as it turns out, there are strong correlations between heavy consumption of processed meats (hot dogs, bologna, pepperoni, Spam, etc.) and several different cancers, as well as cardiovascular disease – and those correlations track the processing, not the red meat per se.  In other words, there is something about the way the meat is processed that makes it unhealthy.  Much the same can be said for processed flour and processed sugar… seems like maybe processing is a bad idea, no?

Lest you think this means there’s a green light for beef consumption, though, so long as you’re paying extra for the grass-fed good stuff, there are other considerations that require attention. 

Heterocyclic amines (HCAs) are chemical compounds containing at least one heterocyclic ring (a ring made of atoms of at least two different elements) and at least one amine (nitrogen-containing) group – long story short, it’s just a category of organic compounds.  A lot of them are not only beneficial, they are downright essential; niacin would be a good example. 

However, there are several HCAs which are classified as carcinogenic (cancer-causing), and they are created by the charring of flesh.  Like the kind you might find in, say, the famous “bark” (that tasty outside crust) on a particularly well-cooked brisket.

Let that sink in for a minute… the thing that marks beef as “really good” for most Texans, certainly, and we’re guessing for most people in other parts of the world… is carcinogenic.  Not “might be”, but “is”.

Now, can you cook red meat without charring it?  Yes, you can.  Does it still satisfy your meat cravings?  We can’t answer that for you.  And depending on the method one chooses, there may still be other health risks involved – meat cooked on a grill or over a flame that is not hot enough to char (and therefore not hot enough to create carcinogenic HCAs) may also not be hot enough to destroy flesh-borne pathogens (bacteria and viruses) unless you are patient about it.  Microwaves can kill those pathogens without charring the meat, though they heat unevenly, and heating food in, on, or near plastics raises its own chemical-leaching concerns.
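
For the patient cook, the numbers are actually encouraging – gentle, moist-heat methods can fully cook meat without ever approaching charring territory.  A minimal sketch in Python: the internal temperatures are the USDA’s published safe minimums, while the roughly 300°F surface figure for significant HCA formation is the commonly cited ballpark, not a hard line:

    # USDA safe minimum internal temperatures, in degrees Fahrenheit
    safe_internal_f = {
        "beef, pork, lamb (whole cuts)": 145,  # plus a 3-minute rest
        "ground meats": 160,
        "poultry": 165,
    }

    # A braise or a poach can never run hotter than boiling water,
    # while heavy HCA formation generally requires surface temperatures
    # upwards of roughly 300 F
    braise_max_f = 212
    hca_threshold_f = 300

    for cut, temp in safe_internal_f.items():
        # hot enough to safely cook everything on the list...
        assert braise_max_f > temp, f"cannot safely braise {cut}"
    # ...while staying far below the charring zone
    assert braise_max_f < hca_threshold_f

    print(f"{hca_threshold_f - braise_max_f} degrees F of headroom before charring")

What no amount of braising, boiling, microwaving, or even charring will reliably do, however, is destroy prions – which brings us to…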

Prion disease.  One form of which is known as Bovine Spongiform Encephalopathy (BSE).  Also known as “Mad Cow Disease”.  There are variations of this particularly nasty affliction for every kind of consumable mammalian flesh, including human flesh, if you’re into cannibalism.  And while some sources of beef are essentially free and clear of BSE risk (that would be local, free-range, grass-fed beef, which never eats animal protein), the vast majority of red meat sources for grocers, restaurants, etc. are not. 

The safeguards in place are laughable, given that they amount to little more than preventing the use of feed made from already-infected animals.  Sensible, of course, as far as it goes – allowing contaminated animals to be rendered into feed for uncontaminated animals would naturally spread the condition around.  The problem is, this approach ignores how the condition gets started in the first place.

Spongiform encephalopathy, whether of the bovine or another variety, is a condition wherein prions (misfolded proteins which propagate themselves by inducing normal proteins to misfold, and which contain no DNA or RNA at all) run amok in the host animal; they invariably attack the central nervous system, and are noticeable only by their effects.  Autopsies of diseased animals (including affected humans) find brains eaten away by millions of little Swiss-cheese bubbles.

And while on extremely rare occasions these prions arise more or less spontaneously in a genetically prone individual… more often than not, they are acquired by ingesting and digesting flesh – especially nerve tissue – from a creature whose DNA is similar to the affected animal’s own.  Hence the references to cannibalism.

Most beef (and pork… and chicken… and farm-raised fish) in the United States (and increasingly in the rest of the world) is “factory farmed” – that is, raised in cramped conditions and fed a slurry made from a mixture of corn, bone meal, and animal wastes (recycled poop, yum!) – which means most meat sources are, in fact, cannibal meat sources: animals that have eaten their own kind, or a kind awfully similar to their own.

Given these conditions, it’s not a question of if some new strain of encephalopathy will emerge; it’s a question of when it will make itself known. 

Now, there are a few factors limiting the likelihood of onset, and they should be almost as troubling as the event they are forestalling.  A good example is the famed “pink slime” of McDonald’s fame.  Various industrial processes, such as the “cold pasteurizing” (a euphemism for irradiation) of meat, or the use of ammonia baths, are good at removing bacteria and viruses – though not prions, which shrug off even radiation – and if those procedures don’t make you nervous, you are either very brave, or very drunk.

All of which, we are sure, has by now convinced you that it might be easier just to forgo that big platter of ribs you were planning on smoking this weekend, right?

No?

Well, at least let us convince you to spend a few extra dollars to make sure that, if you are going to continue to be a meat eater, you get your beef from a healthy source.  Archer-Daniels-Midland will do just fine without you throwing away years of your life to line their pockets.

And make sure you eat plenty of veggies along with your main dish of choice, no matter which longevity category you’ve decided to plant yourself in.  As we noted when first breaking down the statistics, it is quite likely what the unhealthy folks aren’t eating that puts them in the wrong categories; dark green veggies and fatty fishes top that list, so hop to!  We like you; we’d like to have you around reading our blog for a long, long time.


Happy farming!