“A Republic, if you can keep it.”
— Benjamin Franklin, in reply to a question about what sort of government the delegates to the 1787 Constitutional Convention had settled on.
February 2 is the day some people, primarily in North America, attempt to divine the next six weeks of weather by observing groundhogs who briefly exit from winter hibernation in their burrows. If it’s a sunny day, the groundhog will see his or her shadow and, counterintuitively, those watching the animal will pronounce six more weeks of wintry weather. On a cloudy day, with no shadows in sight, the prediction is for an early start of spring weather. People in some parts of Europe have a similar tradition involving different animals, such as badgers in Germany and hedgehogs in Britain.
Emerging briefly from hibernation in February 2014, a groundhog takes leaves to line its burrow nest or toilet chamber. Photo by Ladycamera.
This is all silliness, of course, with no proof of accuracy, but it is mostly harmless except for possibly obnoxious intrusions on the lives of peace-loving groundhogs. In ancient Rome, prognostication using animals took a more deadly turn. All sorts of animals – chickens, sheep, and goats among them – were confined until the day they were sacrificed for the purpose of having a kind of priest called a haruspex examine the dead animal’s entrails for signs of the future. This was deadly serious business, not only for the sacrificial animals, but for the generals and politicians who often did not make a move unless the signs from the entrails were auspicious.
There is no record proving the consistent accuracy of haruspicy (divination by the inspection of entrails), just as there is no record for the accuracy of groundhogs at predicting the weather based on the presence or absence of cloud cover on a particular day. Nonetheless, people have been wasting their time and efforts on these methods of divination for millennia. The ancient method, haruspicy, was a nasty business all around, while Groundhog Day observations cause little harm and are of no consequence.
The Danish National Symphony Orchestra performs a suite of themes from Ennio Morricone’s music for the 1968 Sergio Leone film Once Upon a Time in the West. Tuva Semmingsen performs the vocals that were sung by Edda Dell’Orso on the original soundtrack recording.
What about reading the signs of the times, such as looking at newspapers to follow developments in the republic called the United States of America? What about a Senate majority of Republicans who vote to exclude witnesses in the impeachment trial of a corrupt president? What about a Republican state legislator in Montana who maintains that the Constitution of the United States sanctions the shooting and imprisonment of Socialists, merely for being Socialists? What about the chortling lunatics cheering on Orange Julius as he threatens and demeans his opponents at his demented pep rallies? And what about those same cheering, jeering lunatics threatening violence if their Chosen One is removed from office either by impeachment or by the results of an election?
Those signs and others are easy enough to read for anyone paying attention to developments in order to honor the obligations of an informed citizen. There are those citizens, however, who are too lazy to pay attention. Very well; they should continue in their laziness and stay home on Election Day in nine months, rather than show up and vote for the incumbent president simply because the wolf is not yet at their door. And then there are those voters, more culpable in the decay of the republic than anyone else, who are interested only in the health of their financial portfolio, and who are deaf and blind to the cries and despair of anyone shut out of the bounty and suffering under the oppression of the oligarchy. The signs now point toward a Tyranny by Corporate Oligarchy, and if citizens continue to choose it by doing nothing, then after Election Day in November there will be no going back and we will have gotten the government we deserve.
For those who can’t get enough of the sound of the loss of the republic, here it is on the theremin. Katica Illényi performs with the Győr Philharmonic Orchestra in Budapest, Hungary.
Private companies have been making their electric scooters available for riders to share in cities around the United States and in Europe over the past two years, and the results are a mixed bag. Riders appear to appreciate the service, even if some of them don’t show that appreciation in how they ride or park the e-scooters. City governments appear to like that the service fills gaps in their often inadequate public mass transit services, even though they are learning that more regulation is required to rein in the e-scooter companies’ sometimes arrogant disregard for city ordinances and to curb inconsiderate riders whose behavior can be a public nuisance. Members of the public who have no personal need for the e-scooters are largely tolerant of their presence in their cities, but in many places they are finding their patience tested by the problems mentioned above.
The technology behind e-scooters and smartphones (or, in some places, simple cellular phones) makes the business model of sharing e-scooters in a city possible. An e-scooter rigged for sharing has a Global Positioning System (GPS) module and an inexpensive, basic cellular connection for transferring small amounts of data to communicate its exact position and condition. A lithium-ion battery provides power. A rider uses the company’s smartphone app to unlock the e-scooter and pay for the service. Some localities insist, as a condition for operating in their city, that e-scooter companies make the service available to people without a data connection, using a simple cellular phone. One of the ideas behind the service, after all, is to provide a low-cost transportation option for poor people.
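As a rough sketch of how the pieces described above fit together, here is a minimal Python model of an unlock request. The class, field, and function names are hypothetical illustrations, not any company’s actual API, and the battery threshold is an assumed value.

```python
from dataclasses import dataclass

@dataclass
class Scooter:
    """State a shared e-scooter reports over its basic cellular link."""
    scooter_id: str
    lat: float           # GPS latitude
    lon: float           # GPS longitude
    battery_pct: int     # lithium-ion battery charge, 0 to 100
    locked: bool = True

def try_unlock(scooter: Scooter, payment_authorized: bool,
               min_battery_pct: int = 15) -> bool:
    """Unlock only when payment has cleared and the battery holds
    enough charge to finish a typical ride (threshold is a guess)."""
    if payment_authorized and scooter.battery_pct >= min_battery_pct:
        scooter.locked = False
        return True
    return False
```

A rider’s app would call something like `try_unlock` after collecting payment; a phone-only fallback of the kind some cities require could trigger the same check from a text message gateway instead of a data connection.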
Lime e-scooters parked next to a subway entrance at Masaryk train station in Prague, Czech Republic. Photo by Martin2035.
The problems arise because, as with all private services that take advantage of the public commons, there are abuses. The private companies either do not seek out and pay for permission to park their e-scooters on public property or they may not hold up their end of agreements they have with cities that allow their operations. Since the e-scooters do not belong to them, some riders are unconcerned about how they use them or park them. Equipment abuse is the lookout of the company operating the service, but the abuse of the commons caused by careless parking is a public nuisance at best, a menace at worst. Crime problems have arisen mostly from overnight vandalism of the equipment and from the dangers to workers who must go out at night to find and maintain the equipment.
Bringing e-scooters into cities is a good idea on its surface, and they solve a mobility problem for some poor people or for commuters without cars who find using them more appealing than walking or biking. But with the problems their presence and use are causing by abuse of the commons, it would be better if cities improved their mass transit systems instead. For one thing, e-scooters are not as ecologically benign overall as people may assume, and certainly not in comparison to mass transit options. For another, solving the problems encountered during the initial rollout of e-scooter sharing programs would appear to take up public resources in the form of tighter regulation and consequent enforcement. Wouldn’t it be easier in that case to regulate a comparatively smaller number of mass transit units and operators rather than thousands or tens of thousands of e-scooter units and operators strewn all over a city?
E-scooter sharing programs may last only a year or two more if the current abuses continue, and that’s a shame because many decent people who appreciate the services and have a dearth of other options would probably like to see them continue. Unfortunately this business model appears to go against human nature in that where the commons are concerned, there are always enough bad faith users around to take unfair or inconsiderate advantage of the situation and eventually push the public at large to demand an end to it for everyone. In the words of James Madison, “If men were angels, no government would be necessary.” — Techly
Roger Ebert, the great movie critic who worked primarily in Chicago, Illinois, and over the course of his career garnered respect and influence internationally, believed movies were “like a machine that generates empathy”. By that he meant a well-made movie encourages viewers to lose themselves for a time and step into the shoes of others. There were more movies like that being made 50 years ago than there are now, in the current era of comic book special effects franchises.
Stanley Kubrick took this photo in 1949 for LOOK magazine. Mr. Kubrick was a staff photographer for the magazine from 1947 to 1950, and he then went on to direct many great movies, becoming a model for other filmmakers of the New Hollywood. The Chicago Theatre was one of many movie palaces built around the country in the 1920s, and after renovations in the 1980s, it remains a popular venue for film exhibitions and live performances.
Mr. Ebert became the film critic for the Chicago Sun-Times newspaper in 1967, about the same time as the emergence of New Hollywood filmmaking, an era lasting roughly from 1965 to 1985 when Hollywood studios financed character driven films made by directors like Mike Nichols, Bob Rafelson, and Francis Ford Coppola, who came from backgrounds in theater, television, or film school. Filmmakers in Old Hollywood often came up through the ranks, and many of them were refugees from Europe, escaping the fascist regimes spreading throughout the continent in the 1920s, ’30s, and early ’40s.
Old Hollywood was vertically integrated, meaning the studios controlled production and distribution and held talent under long term contracts. All that started to fall away in the 1950s when the federal government forced the studios to divest themselves of most of their wholly owned distribution channels, which had behaved as a cartel, and as television poached audience share from the movie industry. Some star actors and directors cut themselves loose from the major studio system, forming ad hoc film companies which sought limited input from the big studios. Finally, in order to compete with television, studios more frequently rolled the dice on big budget spectaculars such as Ben-Hur or Cleopatra, and those high stakes gambles either saved financially unstable studios or sank them nearly to insolvency.
By the late 1960s, the movie studios primarily served as film financiers and weren’t as heavily involved in production and distribution as they once were. Along with discarding the Hays Code of movie censorship, a relic of Old Hollywood, the changed paradigm of filmmaking allowed greater freedom and creative control for directors, actors, and writers. The result was the flowering of small to medium scale films that became the hallmark of the New Hollywood, films such as The Graduate and Bonnie and Clyde, both released in 1967, and continuing with other great films made for adult sensibilities through the 1970s.
Jack Nicholson had a breakout role as an alcoholic civil rights lawyer in the 1969 film Easy Rider, directed by Dennis Hopper, who also starred in the film along with co-writer Peter Fonda. In taking on multiple tasks in the making of Easy Rider, Mr. Hopper and Mr. Fonda were more typical of New Hollywood than they were of Old Hollywood, where vertical integration assigned discrete tasks to different individuals within the studio system, and auteurism was discouraged by studio bosses who were leery of the practice ever since Orson Welles made Citizen Kane in 1941.
Jack Nicholson was the actor who became the face of New Hollywood filmmaking, simply because he was in more hit movies than anyone else during that time. His face, voice, and acting style and choices personified the New Hollywood era. Starting with Easy Rider in 1969, Mr. Nicholson was in one successful movie nearly every year, and in some years more than one, through the 1970s and into the ’80s. He has of course been in many successful films since then, and what is remarkable in retrospect from today’s vantage point when big budget sequels and reboots of franchises are Hollywood’s major output is that he has never repeated himself nor acted in one of those kinds of movies.
Since the demise of New Hollywood filmmaking, Jack Nicholson has chosen to stay with character driven films, though the number available for his participation diminished over the years, as he related in a 1995 interview with Roger Ebert. Even Tim Burton’s 1989 film Batman, in which Mr. Nicholson played The Joker, can be seen as character driven despite its comic book origins and inclusion of special effects. It was the first film of its kind to take the source material seriously, and it was well-made by some exceptional talents.
In a later scene in Easy Rider, Jack Nicholson’s character, George Hanson, discusses the state of the country with Dennis Hopper’s character, Billy.
Unfortunately the endless variations on Batman in the 30 years since its release have grown wearisome. But the movie that started the push for a return to blockbuster filmmaking came out 14 years earlier, in 1975, when Steven Spielberg’s film Jaws appeared in theaters that summer and set box office records. Jaws was followed in the summer of 1977 by Star Wars, a film created and directed by George Lucas that started a media franchise which continues to this day. Those films, too, were well-made by exceptional talents. In the years since their release, however, those kinds of films and their lesser cousins have increasingly crowded out the kind of smaller, character driven movies Jack Nicholson and the New Hollywood were known for, the kind Roger Ebert described as generators of empathy. In times when we are in need of empathy generators perhaps more than ever, we are largely left to project ourselves onto special effects beclouded superheroes.
In 1947, as Jews leaving Europe were working toward establishing their independent state of Israel in Palestine, an anti-communist scare was gaining momentum in the United States, leading President Harry Truman to sign an executive order requiring loyalty oaths from federal workers suspected of communist sympathies and possibly conflicted allegiance. Over 70 years later, the state of Israel is well established with economic and military help from the United States, and the idea of a loyalty oath as an assurance that a government employee owes allegiance to America only, and not to any foreign power, has been turned on its head by state and federal laws assuring loyalty to Israel as well, or at least requiring a pledge not to criticize that nation’s increasingly aggressive policies toward Palestinians within and without its disputed borders.
2015 release of the 100 dollar bill, showing the design measures taken to foil counterfeiting. The portrait of Benjamin Franklin remains. Presentation by Sar Maroof.
These laws, which require a state employee or government contractor to sign a pledge not to engage in Boycott, Divestment, and Sanctions (BDS) actions against Israel, are so blatantly unconstitutional that it beggars belief they have not been challenged and struck down in the courts already. They are a return to the old days of anti-communist loyalty oaths, but with a bizarre twist. And it’s that twist which complicates matters, because any criticism of the pledges or of Israel bypasses reason and plain reading of the Constitution and goes straight to emotional howls of anti-Semitism. Most people know that’s coming, and since they don’t want to withstand it, they don’t speak up in the first place. The lobbyists for Israel then have their own way.
What has also complicated the relationship between the United States and Israel since the late 1940s is how support for Israel has taken on a polyglot nature in the intervening years, particularly with the rise of white evangelical Christians in American politics since the 1980s. In the 1940s, American support for Israel came largely from American Jews and from the large numbers of people who sympathized with the plight of European Jews after the tragedy of the Holocaust. There are other reasons having to do with the labyrinth of Middle Eastern politics and, of course, oil, but those are beyond the scope of this post.
Since the 1980s, as support for Israel’s increasingly hard line toward Palestinians and relations with its Arab neighbors dwindled among some American Jews, the slack was taken up by white evangelical Christians who looked at the modern state of Israel and saw the fulfillment of Biblical prophecy. They cared little about the multitude of practical complications, and they had an interested ear in the White House with Ronald Reagan. By the 1990s, a litmus test for election to political office in some parts of the country was support for Israel, right or wrong, and the test was administered not by American Jews, but by white evangelical Christians and, increasingly, by lobbying groups supported by the right wing in Israeli politics.
Lobbying in Congress by foreign powers is supposedly regulated by law, though in practice it goes on mostly unimpeded. In the 1980s, when Boycott, Divestment, and Sanctions against South Africa’s apartheid regime gained steam in this country and around the world, the South African government did not have anywhere near the lobbying clout in American politics of the Israeli lobby then, and certainly not as powerful as it has become since. South Africa did not have millions of Christian soldiers in this country who were willing to go onward for it no matter what. About all South Africa had were diamonds, and it turned out they were not enough to resist pressure from the rest of the world to reform its immoral system.
A scene early in the 1960 film Exodus, directed by Otto Preminger, with Sal Mineo and Jill Haworth arguing their different world views in 1947 aboard a refugee ship from Europe bound for Palestine. Paul Newman looks on. Indeed, those were the days.
Now times have changed for Israel, and it’s no longer the plucky underdog deserving sympathy; its policies of the last 40 to 50 years have tainted that image, turning it into a kind of South African apartheid regime, and if people in this country want to criticize it for that, or for anything else, then it’s none of this government’s business, no matter how many “Benjamins” change hands in the halls of Congress, or how many white evangelical Christians with fever dreams of a picturesque Holy Land as they imagine it from their family Bibles, a place for fulfillment of the Gospel that they probably suppose would be nicer if it weren’t inhabited by all those dusky modern Jews, no matter how many of those people angrily pull away their support from any politician who dares criticize Israel, and with it their fantasy.
In most of the country, daffodils (genus Narcissus) bloom in March and are among the first signs of spring. Some places might have blooms as early as February, and others not until April. In all places, the leaves pop up from the ground while freezing weather is still frequent, and inexperienced gardeners and curious onlookers worry that the plants have come up too early and will suffer damage from the cold. Not to worry. The daffodils have been through it all before and will be fine. Any damage they do incur from late winter weather usually comes from being bent down to the ground or snapped by the weight of a late snow or ice storm.
Deer, rabbits, and squirrels do not eat daffodil bulbs, foliage, or flowers since they are toxic. The plants spread in two ways: jumping from place to place by seed dispersal, and increasing into clumps as daughter bulbs divide from their parent bulbs, rather like offspring who have matured and set up housekeeping next door. Not all daffodils are noticeably fragrant, and as often happens with flowers it is the older, original varieties that are most fragrant, because plant hybridizers sometimes lose that aspect in pursuit of other traits such as size or color. Trade-offs.
Despite a substantial list of pests, fungi, and viruses that can adversely affect daffodils, in practice they should not gravely concern the gardener since the daffodils seem to cope well on their own. The worst condition affecting daffodils, particularly their bulbs, comes from poor drainage or excessive water, particularly in winter. Hardly anyone likes cold, wet feet, and daffodils are no exception. On account of the wet winter in most of the eastern half of the United States, daffodil displays may be subdued this March.
In the 1965 film Doctor Zhivago, a long winter finally turns to spring, heralded by a field of daffodils.
About the only thing an American gardener can say against daffodils is that they are not native to North America or to any part of the Western Hemisphere. Daffodils originate from southern Europe and northern Africa. That daffodils are not native here is an academic complaint, however, since the genie can hardly be stuffed back in the bottle at this point. Most of the people living now in the Western Hemisphere do not belong here, either, and it’s possible to argue they have done far more damage to the native habitat than anything innocent daffodils could have done. On the contrary, daffodils perform a great service everywhere because their trouble free disposition, loosening of hard soils, and cheerful announcement of spring give a greater portion to the gardener and non-gardener alike than they require in return.
“To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”
— Article I, Section 8, Clause 8 of the United States Constitution
Celebration of the unofficial holiday of Public Domain Day on January 1 is ordinarily bigger in Europe than in the United States, except for this year, when extraordinary circumstances brought it into the news. Because of the Copyright Term Extension Act (CTEA) passed by Congress in 1998, there was effectively a 20-year moratorium on works passing into the public domain in the United States, making this January a special occasion because of the backlog of works, those published in 1923, coming into the public domain all at once.
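The term arithmetic behind that moratorium can be sketched in a few lines of Python. This is a deliberate simplification covering only works published from 1923 to 1977, which under the CTEA generally receive 95 years of protection from publication; real copyright terms have many more special cases.

```python
def us_public_domain_year(publication_year: int) -> int:
    """Simplified CTEA rule: a work published between 1923 and 1977
    enters the US public domain on January 1 of the year after its
    95-year term from publication expires."""
    if not 1923 <= publication_year <= 1977:
        raise ValueError("simplified rule covers 1923-1977 only")
    return publication_year + 95 + 1

# Under the prior 75-year term, works from 1923 would have entered the
# public domain in 1999; the CTEA's 20 extra years pushed that to 2019.
```

So works published in 1923 waited until January 1, 2019, twenty years longer than under the previous law, and each January 1 thereafter releases another year’s worth of works.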
A European Public Domain Day poster for 2011 noting the artists and writers whose works would move into the public domain. Poster by derochoaleer.org.
Copyright has always been a double-edged sword in that, as the wording in the Constitution states, it protects the rights of authors, but unstated in Clause 8 is the protection for creative rights extended to corporations by later legislation; those rights were inferred by lawmakers rather than found in the text. This has been a matter of some controversy, as noted in the derogatory nickname for the CTEA as the Mickey Mouse Protection Act. It’s hard to parse out the rights of struggling authors from the rights of billionaire corporations that (who?) hire struggling authors and artists and place their works under the corporation’s copyright.
It’s good that writers and artists have their financial interests in their works protected for, as the Constitution states, “limited Times”. Those limited times extend beyond the lives of the creators, continuing to grant returns to the creators’ heirs or designated beneficiaries. But then exclusive rights end, as they should so that the public can more easily benefit from a work that has stood the test of time. The works of William Shakespeare and Mark Twain have certainly widened their circle of beneficiaries among readers and performers due to being in the public domain.
President Ronald Reagan with his wife, Nancy, greeted upon their arrival in Palm Springs, California, in December 1988 by Palm Springs Mayor Sonny Bono and his wife, Mary. Photo from the Ronald Reagan Presidential Library. Sonny Bono would later serve in the House of Representatives and, after his death in a skiing accident in 1998, would be succeeded in office by Mary Bono. With her support, Congress named the CTEA after Sonny Bono, even though he hadn’t had an especially strong attachment to the bill, having been merely one of twelve sponsors of a similar bill.
It seems the same rules pertaining to inherited artistic wealth could be and should be applied to inherited financial wealth. Why should the heirs of a monetary fortune be entitled to pad their nests in perpetuity with gains they did not secure themselves, or could not have secured without the advantage of great wealth? Heirs of artistic wealth, though they possess a legacy more worthwhile to the rest of humanity than money, are allowed to coast on it for only a generation or two before legal support is withdrawn and they have to make their own way in the world. Will the rules of inheritance, ingrained in humanity for as long as anyone can remember, ever change to reflect a more practical view of what a person is entitled to by birthright, the way it is in copyright law? Most likely not in the near term, but it’s important for the future to plant a seed now.
When Charles Dickens wrote A Christmas Carol and had it published in 1843, the Christmas goose was a traditional feast, and turkey was an uncommon replacement. Goose was relatively inexpensive and plentiful, and turkey was quite the opposite, in Europe at least, where it was not native. After Scrooge, the rich man, has metamorphosed into a warm, charitable human being, he makes a gift of a turkey to the family of his clerk, Bob Cratchit. At the time, a gift of a turkey for Christmas dinner was considered quite an upgrade over goose.
A mixed Greylag and Canada geese flock in a farm field in The Netherlands in February 2011. Photo by Uwactieve. During winter, geese often feed in farmers’ fields, gleaning grain fallen among the stubble of the harvest.
Now the tables have turned, so to speak. Turkeys raised on factory farms have become cheap to buy for Thanksgiving and Christmas dinners, but since they have been bred for size and other characteristics, such as being able to withstand close quarters, flavor has been lost in the breeding. Roast goose, meanwhile, has been largely neglected in Western culture over the past 100 years. At the same time, Canada goose (Branta canadensis) numbers have exploded, to the point they are now nuisances in many urban and suburban areas across North America and even western Europe, where they have been both introduced by people and settled by way of natural migration in the past several centuries.
Canada goose populations have followed a curve similar to that of white-tailed deer (Odocoileus virginianus), another once common North American animal that European settlers hunted to such low numbers by the early twentieth century that conservationists took measures to curtail hunting and preserve and protect both species. From that low point in the early twentieth century, Canada geese and white-tailed deer have rebounded to numbers higher perhaps than they were before Europeans migrated to North America. Both species have adapted so well to modern urban and suburban development, liking and even preferring some human-made habitats over undeveloped country, that many people now consider them pests, and even expanded hunting seasons cannot keep up with controlling their booming numbers.
Canada geese have found well-tended parks and golf courses with water features to be ideal habitats year round, making long migrations unnecessary. Photo by Marta Boroń.
Some municipalities in North America hire hunters to cull Canada geese and white-tailed deer, donating the meat to food banks. It’s an interesting development that after more than 175 years goose has once again become the roast meat at the center of holiday dinners for some poor folks like the Cratchits. They are perhaps eating some of the same Canada geese that have been pestering the rich folks on their golf courses, though naturally the municipalities paying to cull geese to help feed the poor would only do so on public lands, such as public golf courses and parks, and not on privately owned golf courses, since everyone knows rich people don’t believe in government assistance for anyone but themselves.
“11 When I was a child, I spake as a child, I understood as a child, I thought as a child: but when I became a man, I put away childish things. 12 For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known. 13 And now abideth faith, hope, charity, these three; but the greatest of these is charity.”
— The Apostle Paul in a letter to the Corinthians, 1 Corinthians 13:11-13, from the King James Version of the Bible.
DNA kits are very popular now, both for people ordering them for themselves and for those giving them as gifts. Sales of kits doubled in 2017 over all previous years, and have increased again in 2018 over 2017. Interest appears to derive mostly from curiosity about immediate ancestors, on which the kits do a good job of enlightening people, and secondarily about genetic health risks, on which the kits dealing with the subject deliver mixed results, needing confirmation from a medical professional.
One area of controversy with the results, at least for Americans, has come from links to African ancestry for European-Americans, and links to European ancestry for African-Americans. Most European-Americans, or white folks, get results that include some ancestry going back to Africa two hundred years or more, usually less than 10 percent of their total genetic makeup. Most African-Americans, or black folks, get results that include around 25 percent European ancestry.
Portraits of six generations of the Sternberg family in Jiří Sternberg’s study, Český Šternberk Castle, Czech Republic. Photo by takato marui. Tracing ancestry is easier in the purebred lines and close quarters of Old Europe than in the melange of ethnicities and transcontinental migrations of the New World.
Compared to the ethnic homogeneity of most Old World countries, Americans are mutts, and the melting pot was particularly active in the years of heavy immigration from Europe from the mid-nineteenth century to the early twentieth century. Africans brought into the country as slaves until the Civil War eventually made up a larger proportion of the total population through that period than they have since, averaging close to 20 percent of the total for the first hundred years of the republic, and settling to a range of 11 to 13 percent of the total population afterward. It shouldn’t therefore surprise white people taking DNA tests to discover they have at least a small percentage of relatively recent African ancestry.
It is interesting, however, that DNA test results for black people yield an average of 25 percent European ancestry. It is not surprising that there has been mixing of the races, despite laws against miscegenation going back centuries; what is surprising is that black people have a much higher percentage of European ancestry than white people have of African ancestry, and yet black people are still and always considered black. This is Pudd’nhead Wilson territory, in which even a tiny percentage of African blood is tantamount to an entirely African genetic heritage. In America, once a person has been accepted into white society, it requires a considerable amount of African genetic input, along with other factors involving economics and relationships, for a person to fall from grace, as it were, from whiteness to blackness.
In this 1896 illustration by Ernst Haeckel (1834-1919), “Man” is precariously at the top of a tree encompassing all life on earth.
The grade has always been uphill, on the other hand, for black people seeking acceptance into American society, always predominantly white, no matter how genetically European they were. An initial branding of blackness meant staying black in the eyes of white society until extraordinary circumstances or subterfuge intervened. In all this, what the majority of people, black and white, seem to have missed is that racial differences are minuscule, on the order of one half of one percent of the genetics every human being shares. Race itself is an artificial concept, a social construct, rather than a real biological divider.
It’s all in the mind, and those who would reinforce race as a divider of the human species have to perform mental and ethical gymnastics to justify their beliefs, since science won’t do it for them. The idea that DNA test results showing some percentage of African ancestry merely reflect what goes back millions of years for all of humanity also does not hold up. Yes, everyone on earth does have common ancestors in Africa, but that is not what the makers of DNA home test kits design them to illustrate, since they typically research genetic relations going back only several centuries, the time frame for which their databases hold reliably detailed background information.
The immigration scene of young Vito Corleone, played by Oreste Baldini, in Francis Ford Coppola’s 1974 film The Godfather: Part II. Unlike Vito Corleone when he matured, the majority of immigrants, then as now, do not end up engaged in dangerous and unlawful activities, despite what rabble-rousing politicians want everyone to believe.
There also appears to be an assumption, even among open-minded white people, that since Africa is the cradle of all humanity, Africans themselves must be genetically closer to that cradle than the rest who emigrated to far continents. African ancestry noted in the DNA test results must, they reason, hearken back to the long-ago ancestors white people share with everyone on earth. No. The test results show genetic input from recent African ancestors. And those recent African ancestors have evolved along with everyone else on earth, including Europeans, driven by similar environmental and social changes pushing them to adapt. The European discoverers should not be so quick to flatter themselves with ideas of inherent superiority that they lose sight of how other societies have adapted quite well under unique circumstances without prizing discovery and conquest above all else as the sine qua non of human existence.
“Render therefore unto Caesar the things which are Caesar’s; and unto God the things that are God’s.”
— Words of Jesus Christ quoted in Matthew 22:21, King James Version of the Bible.
Leonardo da Vinci’s (1452-1519) Mona Lisa, with digitally added mustache. Derivative work by Perhelion.
This past Friday evening at a Sotheby’s art auction in London, the English graffiti artist Banksy remotely activated a shredder hidden within the frame of his painting Girl With Balloon moments after it had sold for one million British pounds. The lower half of the painting shredded, and there is some question now about the status of the sale and whether Banksy’s vandalizing of his own painting will render an even greater value for it.
Discussion of an artwork’s value outside of its aesthetic appeal is a reminder that, for the rich who can afford to pay tremendous prices for art, the value lies more in other, equally idiosyncratic considerations than in aesthetics. For the rich, art is an investment and a step on the ladder of social climbing. They may not find a particular piece they buy aesthetically appealing whatsoever. The essential thing is that enough other important people find an artwork appealing that its value is driven up, checking off the boxes for high return on investment and an increase in high society credentials for its new owner. The artwork itself may languish in a warehouse after sale rather than go on private or public display.
The investment value of an artwork is, like money itself, largely artificial and sustained by the beliefs of the people who hold it or wish to hold it. No one can eat art, any more than they can eat money, nor can they grow food on it like they could on land, nor withdraw food from it as they might withdraw fish from the sea. It has no monetary value unless enough people believe it does. Aesthetic value, on the other hand, is almost entirely in the eye of the beholder, though some people may in their appreciation of art be too dependent on the opinions of “experts”. For an extreme case of wishful thinking brought on by peer pressure, look to the Hans Christian Andersen tale “The Emperor’s New Clothes”.
Before the Renaissance, art was for decoration of public spaces and the homes of the rich, and for religious instruction in places of worship since most people were illiterate and did not receive their education from books. The names of very few medieval and ancient artists have come down to us along with their works. That changed with the Renaissance, when artists such as Leonardo, Michelangelo, and Raphael acquired reputations beyond their immediate patrons among the rich and powerful. Note how we have come to know all three by single names, as if they were modern day celebrities. And it was the widening of cultural influence beyond the insularity of any one city-state’s walls during the Renaissance that allowed artists to break out of anonymity.
The international renown of a few popular artists such as Rembrandt was slow to build at first, and their artworks commanded modest prices by today’s standards. It is the international culture of today and the concentration of great wealth among an ever smaller percentage of the population that has enabled the explosion in high prices for the artworks of a relatively small number of well known artists. The last great jump in prices was roughly during the Gilded Age around the turn of the twentieth century, when a great concentration of wealth created a new aristocracy of capitalists.
In the 1941 film Citizen Kane, wealthy newspaper publisher and art collector Charles Foster Kane, modeled on tycoon William Randolph Hearst and played by Orson Welles, discusses his changing economic circumstances with his banker Mr. Thatcher, played by George Coulouris, and his longtime assistant Mr. Bernstein, played by Everett Sloane.
Now there is another concentration of wealth occurring, this time on a worldwide scale rather than limited to Europe and North America. Nothing has changed, of course: as always, the rich get richer. It’s the scale of wealth accumulation that has changed, and when artworks are selling for hundreds of millions of British pounds or American dollars, a mere million for a painting by anti-establishment artist Banksy is entry level stuff. The rich people sitting on mountains of the wealth of the world would not flinch at shredding a million pounds, and the irony of one artist’s rendering matters not at all to them as long as the artist’s growing fame increases their return on investment.
10/8/2018 Update: Since last Friday, when Banksy’s Girl With Balloon partially shredded after being sold at auction for about £1,000,000, its value has increased by at least 50%, and may have doubled.
A jury at San Francisco’s Superior Court of California has awarded school groundskeeper Dewayne Johnson $289 million in damages in his lawsuit against Monsanto, maker of the glyphosate herbicide Roundup. Mr. Johnson has a form of cancer known as non-Hodgkin’s lymphoma, and it was his contention that the herbicides he used in the course of his groundskeeping work caused his illness, which his doctors have claimed will likely kill him by 2020. Hundreds of potential litigants around the country have been awaiting the verdict in this case against Monsanto, and now it promises to be the first of many cases.
Migrant laborers weeding sugar beets near Fort Collins, Colorado, in 1972. Photo by Bill Gillette for the EPA is currently in the National Archives at College Park, Maryland. Chemical herbicides other than Roundup were in use at that time, though all presented health problems to farm workers and to consumers. Roundup quickly overtook the chemical alternatives because Monsanto represented it, whether honestly or dishonestly, as the least toxic of all the herbicides, and it overtook manual and mechanical means of weeding because of its relative cheapness and because it reduced the need for backbreaking drudgery.
Monsanto has long played fast and loose with scientific findings about the possible carcinogenic effects of glyphosate, and the Environmental Protection Agency (EPA) currently sides with Monsanto in its claim that there is no conclusive evidence of the herbicide’s potential to cause cancer. In Europe, where Monsanto has exerted slightly less influence than in the United States, scientific papers have come out in the last ten years establishing a link between glyphosate and cancer. Now that Bayer, a German company, has completed its acquisition of Monsanto, a deal announced in 2016, it remains to be seen whether European scientists will be muzzled and co-opted like some of their American colleagues.
The intensive use of glyphosate herbicide to remove all ground vegetation in olive groves on Corfu, a Greek island in the Ionian Sea, is evidenced by the large number of discarded chemical containers in its countryside. Photo by Parkywiki.
The scope of global agribusiness sales and practices put at risk by the verdict in Johnson v. Monsanto is enormous. From the discovery of glyphosate in 1970 by Monsanto chemist John E. Franz to today, the herbicide has grown to occupy the preeminent place in the chemical arsenal of farmers around the world and has spawned research into genetically modified, or Roundup Ready, crops such as corn, cotton, and soybeans. There are trillions of dollars at stake, and Monsanto and its parent company, Bayer, will certainly use all their vast resources of money and lawyers to fight the lawsuits to come.
Because scientists have found traces of glyphosate in the bodies of most people they have examined in America for the chemical over the past 20 years, as foods from Roundup Ready corn and soybeans have spread throughout the marketplace, they have inferred its presence is probably widespread in the general population. That means there are potentially thousands of lawsuits in the works. Like the tobacco companies before them and the fossil fuel industry currently, agribusiness giants will no doubt fight adverse scientific findings about their products no matter how overwhelming the evidence against them, sowing doubt among the populace and working the referees in the government.