Slavery Made the South Poor, Not Rich

This is the article Larry Elder was referencing: “INDUSTRY AND ECONOMY DURING THE CIVIL WAR” (also see “The Truth Behind ‘40 Acres and a Mule’”). Here is the excerpt from chapter 22 of MY BONDAGE AND MY FREEDOM:

…The reader will be amused at my ignorance, when I tell the notions I had of the state of northern wealth, enterprise, and civilization. Of wealth and refinement, I supposed the north had none. My Columbian Orator, which was almost my only book, had not done much to enlighten me concerning northern society. The impressions I had received were all wide of the truth. New Bedford, especially, took me by surprise, in the solid wealth and grandeur there exhibited. I had formed my notions respecting the social condition of the free states, by what I had seen and known of free, white, non-slaveholding people in the slave states. Regarding slavery as the basis of wealth, I fancied that no people could become very wealthy without slavery. A free white man, holding no slaves, in the country, I had known to be the most ignorant and poverty-stricken of men, and the laughing stock even of slaves themselves—called generally by them, in derision, “poor white trash.” Like the non-slaveholders at the south, in holding no slaves, I suppose the northern people like them, also, in poverty and degradation. Judge, then, of my amazement and joy, when I found—as I did find—the very laboring population of New Bedford living in better houses, more elegantly furnished—surrounded by more comfort and refinement—than a majority of the slaveholders on the Eastern Shore of Maryland. There was my friend, Mr. Johnson, himself a colored man (who at the south would have been regarded as a proper marketable commodity), who lived in a better house—dined at a richer board—was the owner of more books—the reader of more newspapers—was more conversant with the political and social condition of this nation and the world—than nine-tenths of all the slaveholders of Talbot county, Maryland. Yet Mr. Johnson was a working man, and his hands were hardened by honest toil. Here, then, was something for observation and study. Whence the difference? 
The explanation was soon furnished, in the superiority of mind over simple brute force. Many pages might be given to the contrast, and in explanation of its causes. But an incident or two will suffice to show the reader as to how the mystery gradually vanished before me.

My first afternoon, on reaching New Bedford, was spent in visiting the wharves and viewing the shipping. The sight of the broad brim and the plain, Quaker dress, which met me at every turn, greatly increased my sense of freedom and security. “I am among the Quakers,” thought I, “and am safe.” Lying at the wharves and riding in the stream, were full-rigged ships of finest model, ready to start on whaling voyages. Upon the right and the left, I was walled in by large granite-fronted warehouses, crowded with the good things of this world. On the wharves, I saw industry without bustle, labor without noise, and heavy toil without the whip. There was no loud singing, as in southern ports, where ships are loading or unloading—no loud cursing or swearing—but everything went on as smoothly as the works of a well adjusted machine. How different was all this from the noisily fierce and clumsily absurd manner of labor-life in Baltimore and St. Michael’s! One of the first incidents which illustrated the superior mental character of northern labor over that of the south, was the manner of unloading a ship’s cargo of oil. In a southern port, twenty or thirty hands would have been employed to do what five or six did here, with the aid of a single ox attached to the end of a fall. Main strength, unassisted by skill, is slavery’s method of labor. An old ox, worth eighty dollars, was doing, in New Bedford, what would have required fifteen thousand dollars worth of human bones and muscles to have performed in a southern port. I found that everything was done here with a scrupulous regard to economy, both in regard to men and things, time and strength. The maid servant, instead of spending at least a tenth part of her time in bringing and carrying water, as in Baltimore, had the pump at her elbow. The wood was dry, and snugly piled away for winter.
Woodhouses, in-door pumps, sinks, drains, self-shutting gates, washing machines, pounding barrels, were all new things, and told me that I was among a thoughtful and sensible people. To the ship-repairing dock I went, and saw the same wise prudence. The carpenters struck where they aimed, and the calkers wasted no blows in idle flourishes of the mallet. I learned that men went from New Bedford to Baltimore, and bought old ships, and brought them here to repair, and made them better and more valuable than they ever were before. Men talked here of going whaling on a four years’ voyage with more coolness than sailors where I came from talked of going a four months’ voyage…

Did The Parties Switch?


THE SWITCH


Just a quick intro to this video: it was at a Young America’s Foundation sponsored event at the University of Wisconsin, and a professor gets up to correct D’Souza on the Dixiecrats all becoming Republicans. It didn’t go well for the professor:

From a wonderful article in Freedom’s Journal Institute’s series, URBAN LEGENDS: The Dixiecrats and the GOP

THE DIXIECRATS

…During the Philadelphia nominating convention of the Democrat Party in 1948, a number of disgruntled southern segregationist Democrats stormed out in protest. They were upset about planks in the new platform that supported Civil Rights.[1]

They left to form a new party called the States’ Rights Democratic Party, also known as the Dixiecrats. Segregationists like George Wallace and other loyalists, although upset, did not bolt from the party, but instead supported another candidate against Harry Truman. According to Kari Frederickson, the goal for the Dixiecrats “was to win the 127 electoral-college votes of the southern states, which would prevent either Republican Party nominee Thomas Dewey or Democrat Harry Truman from winning the 266 electoral votes necessary for election. Under this scenario, the contest would be decided by the House of Representatives, where southern states held 11 of the 48 votes, as each state would get only one vote if no candidate received a majority of electors’ ballots. In a House election, Dixiecrats believed that southern Democrats would be able to deadlock the election until one of the parties had agreed to drop its civil rights plank.”[2]
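The arithmetic behind that strategy can be checked with a quick sketch. The 531 total electors and the 266-vote majority are the actual 1948 figures; the 127 southern votes come from the Frederickson quote above:

```python
# Back-of-the-envelope check of the Dixiecrat electoral strategy.
# In 1948 there were 531 electoral votes, with 266 needed to win.
TOTAL_ELECTORS = 531
MAJORITY = 266
SOUTHERN_BLOC = 127  # the electoral votes the Dixiecrats hoped to capture

remaining = TOTAL_ELECTORS - SOUTHERN_BLOC
share_needed = MAJORITY / remaining  # fraction of the rest Dewey or Truman must sweep

print(remaining)                # 404 votes left for the two major candidates
print(round(share_needed, 3))   # 0.658
```

In other words, with 127 votes locked up by the Dixiecrats, either major-party nominee would have needed nearly two-thirds of the remaining electors to avoid throwing the election into the House.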

Notably, this stated aim is apparent in the third plank of the Dixiecrats’ platform, which states, “We stand for social and economic justice, which, we believe can be guaranteed to all citizens only by a strict adherence to our Constitution and the avoidance of any invasion or destruction of the constitutional rights of the states and individuals. We oppose the totalitarian, centralized bureaucratic government and the police nation called for by the platforms adopted by the Democratic and Republican Conventions.”[3]

What is even more telling, and speaks directly to the dubious nature of this urban legend, is the fact that the Dixiecrats rejected the civil rights platforms of not one, but both parties. Republicans had supported civil rights since the party’s inception (see GOP party platform here). What was new is that the Democrats, led by Harry Truman, were publicly taking a stand for civil rights (see Democrat Party Platform here). The “totalitarian, centralized bureaucratic government,” according to the Dixiecrats, was the federal government’s enforcement of the 14th and 15th amendments to the U.S. Constitution. With both parties now standing for civil rights, the segregationists had no party to go to. Thus, they started their own with the idea of causing a stalemate, which they hoped to break once both parties relinquished their pro-civil rights planks.

Which way did they go?

The strategy of the States’ Rights Democratic Party failed. Truman was elected and civil rights moved forward with support from both Republicans and Democrats. This raises the question: So where did the Dixiecrats go? Contrary to legend, it makes no sense for them to have joined the Republican Party, whose history is replete with civil rights achievements. The answer is, they returned to the Democrat Party and rejoined others such as George Wallace, Orval Faubus, Lester Maddox, and Ross Barnett. Interestingly, of the 26 known Dixiecrats (5 governors and 21 senators) only three ever became Republicans: Strom Thurmond, Jesse Helms and Mills E. Godwin, Jr. The segregationists in the Senate, on the other hand, would return to their party and fight against the Civil Rights Acts of 1957, 1960 and 1964. Republican President Dwight Eisenhower proposed the first two Acts.

Eventually, politics in the South began to change. The stranglehold that white segregationist Democrats once held over the South began to crumble. The “old guard” gave way to a new generation of politicians. The Republican Party saw an opportunity to make inroads into the southern states by appealing to southern voters. However, this southern strategy was not an appeal to segregationists, but to the new political realities emerging in the South.[4]


[1] See the 1948 Democrat Party Platform.

[2] Encyclopedia of Alabama – Dixiecrat.

[3] Read more at the American Presidency Project.

[4] I will talk more about the Southern Strategy in another article.





Here is another great excerpt from Ann Coulter’s excellent book, Mugged, regarding this supposed “switch,” focusing on the Senators:

In 1948, Thurmond did not run as a “Dixiecan,” he ran as a “Dixiecrat.” As the name indicates, the Dixiecrats were an offshoot of the Democratic Party. When he lost, Thurmond went right back to being a Democrat.

All segregationists were Democrats and—contrary to liberal fables—the vast majority of them remained Democrats for the rest of their lives. Many have famous names—commemorated in buildings and statues and tribute speeches by Bill Clinton. But one never hears about their segregationist pasts, or even Klan memberships. Among them are: Supreme Court justice Hugo Black; Governor George Wallace of Alabama; gubernatorial candidate George Mahoney of Maryland; Bull Connor, Commissioner of Public Safety for Birmingham, Alabama; Governor Orval Faubus of Arkansas; and Governor Lester Maddox of Georgia.

But for practical purposes, the most important segregationists were the ones in the U.S. Senate, where civil rights bills went to die. All the segregationists in the Senate were, of course, Democrats. All but one remained Democrats for the rest of their lives—and not conservative Democrats. Support for segregation went hand in hand with liberal positions on other issues, too.

The myth of the southern strategy is that southern segregationists were conservatives just waiting for a wink from Nixon to switch parties and join the Reagan revolution. That could not be further from the truth. With the exception of Strom Thurmond—the only one who ever became a Republican—they were almost all liberals and remained liberals for the rest of their lives. Of the twelve southern segregationists in the Senate other than Thurmond, only two could conceivably be described as “conservative Democrats.”

The twelve were:

  • Senator Harry Byrd (staunch opponent of anti-communist Senator Joseph McCarthy);
  • Senator Robert Byrd (proabortion, opponent of 1990 Gulf War and 2002 Iraq War, huge pork barrel spender, sending more than $1 billion to his home state during his tenure, supported the Equal Rights Amendment, won a 100 percent rating from NARAL Pro-Choice America and a 71 percent grade from the American Civil Liberties Union in 2007);
  • Senator Allen Ellender of Louisiana (McCarthy opponent, pacifist and opponent of the Vietnam War);
  • Senator Sam Ervin of North Carolina (McCarthy opponent, anti-Vietnam War, major Nixon antagonist as head of the Watergate Committee that led to the president’s resignation);
  • Senator Albert Gore Sr. of Tennessee (ferocious McCarthy opponent despite McCarthy’s popularity in Tennessee, anti-Vietnam War);
  • Senator James Eastland of Mississippi (conservative Democrat; he supported some of FDR’s New Deal but was a strong anti-communist);
  • Senator J. William Fulbright of Arkansas (staunch McCarthy opponent, anti-Vietnam War, big supporter of the United Nations and taxpayer-funded grants given in his name);
  • Senator Walter F. George of Georgia (supported Social Security Act, Tennessee Valley Authority and many portions of the Great Society);
  • Senator Ernest Hollings (initiated federal food stamp program, supported controls on oil, but later became a conservative Democrat, as evidenced by his support for Clarence Thomas’s nomination to the Supreme Court);
  • Senator Russell Long (Senate floor leader on LBJ’s Great Society programs);
  • Senator Richard Russell (strident McCarthy opponent, calling him a “huckster of hysteria,” supported FDR’s New Deal, defended Truman’s firing of General Douglas MacArthur, mildly opposed to the Vietnam War);
  • Senator John Stennis (won murder convictions against three blacks based solely on their confessions, which were extracted by vicious police floggings, leading to reversal by the Supreme Court; first senator to publicly attack Joe McCarthy on the Senate floor; and, in his later years, opposed Judge Robert Bork’s nomination to the Supreme Court).

The only Democratic segregationist in the Senate to become a Republican—Strom Thurmond—did so eighteen years after he ran for president as a Dixiecrat. He was never a member of the terroristic Ku Klux Klan, as Hugo Black and Robert Byrd had been. You could make a lot of money betting people to name one segregationist U.S. senator other than Thurmond. Only the one who became a Republican is remembered for his dark days as a segregationist Democrat.

As for the remaining dozen segregationists, only two—Hollings and Eastland—were what you’d call conservative Democrats. The rest were dyed-in-the-wool liberals taking the left-wing positions on issues of the day. Segregationist beliefs went hand in hand with opposition to Senator Joe McCarthy, opposition to the Vietnam War, support for New Deal and Great Society programs, support for the United Nations, opposition to Nixon and a 100 percent rating from NARAL. Being against civil rights is now and has always been the liberal position.


OPPOSING CIVIL RIGHTS


Related as well are the recorded votes showing which party supported civil rights legislation regarding persons of color:

WHICH PARTY OPPOSED CIVIL RIGHTS?

The voting rolls of the Civil Rights laws speak for themselves. The Civil Rights Act of 1964 passed the House with 153 out of 244 Democrats voting for it, and 136 out of 171 Republicans. This means that 63 percent of Democrats and 80 percent of Republicans voted “yes.” In the Senate, 46 out of 67 Democrats (69 percent) and 27 out of 33 Republicans (82 percent) supported the measure.

The pattern was similar for the Voting Rights Act of 1965. It passed the House 333-85, with 24 Republicans and 61 Democrats voting “no.” In the Senate, 94 percent of Republicans compared with 73 percent of Democrats supported the legislation.
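The percentages quoted above follow directly from the raw vote counts; a minimal sketch to verify the 1964 figures:

```python
# Recompute the Civil Rights Act of 1964 percentages from the raw counts
# quoted in the text: (yes votes, total votes) per chamber and party.
votes = {
    "House Democrats":    (153, 244),
    "House Republicans":  (136, 171),
    "Senate Democrats":   (46, 67),
    "Senate Republicans": (27, 33),
}

for group, (yes, total) in votes.items():
    print(f"{group}: {yes}/{total} = {round(100 * yes / total)}%")
# House Democrats: 153/244 = 63%
# House Republicans: 136/171 = 80%
# Senate Democrats: 46/67 = 69%
# Senate Republicans: 27/33 = 82%
```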

Here’s a revealing tidbit: had Republicans voted for the Civil Rights laws in the same proportion as Democrats, these laws would not have passed. Republicans, more than Democrats, are responsible for the second civil rights revolution, just as they were solely responsible for the first one. For the second time around, Republicans were mainly the good guys and Democrats were mainly the bad guys.

Here’s further proof: the main opposition to the Civil Rights Movement came from the Dixiecrats. Note that the Dixiecrats were Democrats; as one pundit [Coulter] wryly notes, they were Dixiecrats and not Dixiecans.

The Dixiecrats originated as a breakaway group from the Democratic Party in 1948. For a time, the Dixiecrats attempted to form a separate party and run their own presidential ticket, but this attempt failed and the Dixiecrats reconstituted themselves as a rebel faction within the Democratic Party.

Joined by other Democrats who did not formally ally themselves with this faction, the Dixiecrats organized protests against desegregation rulings by the Supreme Court. Dixiecrat governors refused to enforce those rulings. Dixiecrats in the Senate also mounted filibusters against the Civil Rights Act of 1957 and the Civil Rights Act of 1964. Johnson’s Democratic allies in Congress required Republican votes in order to defeat a Dixiecrat-led filibuster and pass the Civil Rights Act of 1964.

Leading members of the Dixiecrat faction were James Eastland, Democrat from Mississippi; John Stennis, Democrat from Mississippi; Russell Long, Democrat from Louisiana; Strom Thurmond, Democrat from South Carolina; Herman Talmadge, Democrat from Georgia; J. William Fulbright, Democrat from Arkansas; Lester Maddox, Democrat from Georgia; Al Gore Sr., Democrat from Tennessee; and Robert Byrd, Democrat from West Virginia. Of these only Thurmond later joined the Republican Party. The rest of them remained Democrats.

The Dixiecrats weren’t the only racists who opposed civil rights legislation. So did many other Democrats who never joined the Dixiecrat faction. These were racists who preferred to exercise their influence within the Democratic Party, which after all had long been the party of racism, rather than create a new party. Richard Russell of Georgia—who now has a Senate Building named after him—and James Eastland of Mississippi are among the segregationist Democrats who refused to join the Dixiecrat faction.

Now the GOP presidential candidate in 1964, Barry Goldwater, did vote against the Civil Rights Act. But Goldwater was no racist. In fact, he had been a founding member of the Arizona NAACP. He was active in integrating the Phoenix public schools. He had voted for the 1957 Civil Rights Act.

Goldwater opposed the 1964 act because it outlawed private as well as public discrimination, and Goldwater believed the federal government did not have legitimate authority to restrict the private sector in that way. I happen to agree with him on this—a position I argued in The End of Racism. Even so, Goldwater’s position was not shared by a majority of his fellow Republicans.

It was Governor Orval Faubus, Democrat of Arkansas, who ordered the Arkansas National Guard to stop black students from enrolling in Little Rock Central High School—until Republican President Dwight Eisenhower sent troops from the 101st Airborne to enforce desegregation. In retaliation, Faubus shut down all the public high schools in Little Rock for the 1958-59 school year.

It was Governor George Wallace, Democrat of Alabama, who attempted to prevent four black students from enrolling in elementary schools in Huntsville, Alabama, until a federal court in Birmingham intervened. Bull Connor, the infamous southern sheriff who unleashed dogs and hoses on civil rights protesters, was a Democrat.

Progressives who cannot refute this history—facts are stubborn things—nevertheless create the fantasy of a Nixon “Southern strategy” that supposedly explains how Republicans cynically appealed to racism in order to convert southern Democrats into Republicans. In reality Nixon had no such strategy—as we have seen, it was Lyndon Johnson who had a southern strategy to keep blacks from defecting to the Republican Party. Johnson, not Nixon, was the true racist, a fact that progressive historiography has gone to great lengths to disguise.

Nixon’s political strategy in the 1968 campaign is laid out in Kevin Phillips’s classic work The Emerging Republican Majority. Phillips writes that the Nixon campaign knew it could never win the presidency through any kind of racist appeal. Such an appeal, even if it won some converts in some parts of the Lower South, would completely ruin Nixon’s prospects in the rest of the country. Nixon’s best bet was to appeal to the rising middle classes of the Upper South on the basis of prosperity and economic opportunity. This is exactly what Nixon did.

There are no statements by Nixon that even remotely suggest he appealed to racism in the 1968 or 1972 campaigns. Nixon never displayed the hateful, condescending view of blacks that Johnson did. The racist vote in 1968 didn’t go to Nixon; it went to George Wallace. A longtime Democratic segregationist, Wallace campaigned that year on an independent ticket. Nixon won the election but Wallace carried the Deep South states of Arkansas, Louisiana, Mississippi, Alabama, and Georgia.

Nixon supported expanded civil rights for blacks throughout his career while Johnson was—for the cynical reasons given above—a late convert to the cause. Nixon went far beyond Johnson in this area; in fact, Nixon implemented America’s first affirmative action program which involved the government forcing racist unions in Philadelphia to hire blacks.

To sum up, starting in the 1930s and continuing to the present, progressive Democrats developed a new solution to the problem of what they saw as useless people. In the antebellum era, useless people from the Democratic point of view were mainly employed as slaves. In the postbellum period, southern Democrats repressed, segregated, and subjugated useless people, seeking to prevent them from challenging white supremacy or voting Republican. Meanwhile, northern progressives like Margaret Sanger sought to prevent useless people from being born. Today’s progressives, building on the legacy of Wilson, FDR, and Johnson, have figured out what to do with useless people: turn them into Democratic voters.

For MANY MORE resources on this topic,

see my page titled, “U.S. RACIAL HISTORY.”

Believing In God Is Natural ~ Atheism is Not (Updated)


We ARE programmed to believe one way and, through the creative power (and infinite genius) of God, get to choose this natural tendency or to cover it up with the sinful, selfish nature that Romans 1 alludes to, numbing our faculties with a whole array of options.

What else does this craving, and this helplessness, proclaim but that there was once in man a true happiness, of which all that now remains is the empty print and trace? This he tries in vain to fill with everything around him, seeking in things that are not there the help he cannot find in those that are, though none can help, since this infinite abyss can be filled only with an infinite and immutable object; in other words, by God himself.

Blaise Pascal (Pensees 10.148)


Deborah Kelemen studies cognitive development in children, and Josh Rottman is a PhD student working with her. In a chapter in “Science and the World’s Religions,” they write (p. 206-207):

  • …religion primarily stems from within the person rather than from external, socially organised sources …. evolved components of the human mind tend to lead people towards religiosity early in life.

Before continuing I just want to make a point: none of what follows is original to me; I have merely brought it here for review. It has to do with how merely assuming the evolutionist position, if true, makes theism true and atheism anathema to the survival of the species. For instance, Patricia Churchland notes what the brain’s primary chore is:

And this is the main point… okay… if I assume evolution is true, then, out of the choices of “religion” and “non-religion” — which of the two provide a better survival rate of the species? To wit:

Even Darwin had some misgivings about the reliability of human beliefs. He wrote, “With me the horrid doubt always arises whether the convictions of man’s mind, which has been developed from the mind of lower animals, are of any value or at all trustworthy. Would any one trust in the convictions of a monkey’s mind, if there are any convictions in such a mind?”

Given unguided evolution, “Darwin’s Doubt” is a reasonable one. Even given unguided or blind evolution, it’s difficult to say how probable it is that creatures—even creatures like us—would ever develop true beliefs. In other words, given the blindness of evolution, and that its ultimate “goal” is merely the survival of the organism (or simply the propagation of its genetic code), a good case can be made that atheists find themselves in a situation very similar to Hume’s.

The Nobel Laureate and physicist Eugene Wigner echoed this sentiment: “Certainly it is hard to believe that our reasoning power was brought, by Darwin’s process of natural selection, to the perfection which it seems to possess.” That is, atheists have a reason to doubt whether evolution would result in cognitive faculties that produce mostly true beliefs. And if so, then they have reason to withhold judgment on the reliability of their cognitive faculties. Like before, as in the case of Humean agnostics, this ignorance would, if atheists are consistent, spread to all of their other beliefs, including atheism and evolution. That is, because there’s no telling whether unguided evolution would fashion our cognitive faculties to produce mostly true beliefs, atheists who believe the standard evolutionary story must reserve judgment about whether any of their beliefs produced by these faculties are true. This includes the belief in the evolutionary story. Believing in unguided evolution comes built in with its very own reason not to believe it.

This will be an unwelcome surprise for atheists. To make things worse, this news comes after the heady intellectual satisfaction that Dawkins claims evolution provided for thoughtful unbelievers. The very story that promised to save atheists from Hume’s agnostic predicament has the same depressing ending.

It’s obviously difficult for us to imagine what the world would be like in such a case where we have the beliefs that we do and yet very few of them are true. This is, in part, because we strongly believe that our beliefs are true (presumably not all of them are, since to err is human—if we knew which of our beliefs were false, they would no longer be our beliefs).

Suppose you’re not convinced that we could survive without reliable belief-forming capabilities, without mostly true beliefs. Then, according to Plantinga, you have all the fixins for a nice argument in favor of God’s existence. For perhaps you also think that—given evolution plus atheism—the probability is pretty low that we’d have faculties that produced mostly true beliefs. In other words, your view isn’t “who knows?” On the contrary, you think it’s unlikely that blind evolution has the skill set for manufacturing reliable cognitive mechanisms. And perhaps, like most of us, you think that we actually have reliable cognitive faculties and so actually have mostly true beliefs. If so, then you would be reasonable to conclude that atheism is pretty unlikely. Your argument, then, would go something like this: if atheism is true, then it’s unlikely that most of our beliefs are true; but most of our beliefs are true, therefore atheism is probably false.

Notice something else. The atheist naturally thinks that our belief in God is false. That’s just what atheists do. Nevertheless, most human beings have believed in a god of some sort, or at least in a supernatural realm. But suppose, for argument’s sake, that this widespread belief really is false, and that it merely provides survival benefits for humans, a coping mechanism of sorts. If so, then we would have additional evidence—on the atheist’s own terms—that evolution is more interested in useful beliefs than in true ones. Or, alternatively, if evolution really is concerned with true beliefs, then maybe the widespread belief in God would be a kind of “evolutionary” evidence for his existence.

You’ve got to wonder.

Mitch Stokes, A Shot of Faith: To the Head (Nashville, TN: Thomas Nelson, 2012), 44-45.

While I am not a fan of Charisma, as of late they have posted a few good articles, this being one of them:

Science Proves Your Brain Recognizes the Reality of God, Researchers Say

Remember, there was much discussion about how destroying or harming parts of the brain can decrease belief in God:

This has to be embarrassing… if you’re an atheist. A new study performed at the University of York used targeted magnetism to shut down part of the brain. The result: belief in God disappeared among more than 30 percent of participants.

That in itself may not seem so embarrassing, but consider that the specific part of the brain they frazzled was the posterior medial frontal cortex—the part associated with detecting and solving problems, i.e., reasoning and logic.

In other words, when you shut down the part of the brain most associated with logic and reasoning, greater levels of atheism result.

You’ve heard the phrase, “I don’t have enough faith to be an atheist”? Apparently we can now also say, “I have too many brains to be an atheist.”…

(Via my previous post on targeted magnetism)

I also posit that persons who use illicit drugs, such as marijuana, are less likely to believe in the Judeo-Christian God due to deterioration/destruction of sections of the brain. The parts most affected are memory and cognition (that is, the parts of the brain that use logic and reason). Whereas, it seems, we see that a healthy brain is ready to receive faith:

…In a piece for the Washington Post, atheist Elizabeth King writes that she cannot shake the idea of God’s existence.

★ “The idea of God pesters me and makes me think that maybe I’m not as devoted to my beliefs as I’d like to think I am and would like to be. Maybe I’m still subconsciously afraid of hell and want to go to heaven when I die. It’s confusing and frustrating to feel the presence of something you don’t believe in. This is compounded by the fact that the God character most often shows up when I’m already frustrated,” King writes.

Neurotheologian Newberg says this is because science does back the reality of religious experiences.

(Charisma)

This supports another study, of Japanese kids raised with no thoughts of a monotheistic God:

For example, researchers at Oxford University (at which Dawkins himself was until recently the holder of the Charles Simonyi Chair in the Public Understanding of Science) have earlier reported finding children who, when questioned, express their understanding that there is a Creator, without having had any such instruction from parents or teachers. As Dr Olivera Petrovich, who lectures in Experimental Psychology at Oxford, explained in an interview with Science and Spirit:

My Japanese research assistants kept telling me, ‘We Japanese don’t think about God as creator—it’s just not part of Japanese philosophy.’ So it was wonderful when these children said, ‘Kamisama! God! God made it!’—Dr Olivera Petrovich, Oxford University.

“I tested both the Japanese and British children on the same tasks, showing them very accurate, detailed photographs of selected natural and man-made objects and then asking them questions about the causal origins of the various natural objects at both the scientific level (e.g. how did this particular dog become a dog?) and at the metaphysical level (e.g. how did the first ever dog come into being?). With the Japanese children, it was important to establish whether they even distinguished the two levels of explanation because, as a culture, Japan discourages speculation into the metaphysical, simply because it’s something we can never know, so we shouldn’t attempt it. But the Japanese children did speculate, quite willingly, and in the same way as British children. On forced choice questions, consisting of three possible explanations of primary origin, they would predominantly go for the word ‘God’, instead of either an agnostic response (e.g., ‘nobody knows’) or an incorrect response (e.g., ‘by people’). This is absolutely extraordinary when you think that Japanese religion — Shinto — doesn’t include creation as an aspect of God’s activity at all. So where do these children get the idea that creation is in God’s hands? It’s an example of a natural inference that they form on the basis of their own experience. My Japanese research assistants kept telling me, ‘We Japanese don’t think about God as creator — it’s just not part of Japanese philosophy.’ So it was wonderful when these children said, ‘Kamisama! God! God made it!’ That was probably the most significant finding.”

Nearly a decade after Petrovich’s study, there is now a “preponderance of scientific evidence” affirming that “children believe in God even when religious teachings are withheld from them”.

(Creation.com)

I often hear atheists exude confidence in natural selection and evolution and all they entail. Yet when a natural belief in God emerges, they dismiss it as fantasy rather than treating it as a superior survival mechanism. It is important to understand that I am not arguing for evolution but showing that the atheist’s position is self-referentially false:

  • NOTE: if you believe in evolution and are an atheist, consistency demands that you root for and support neo-Darwinian evolutionary “natural selection” in choosing religious belief as superior to non-belief!

During the Q&A session of a debate between a theist and an atheist/evolutionist, a student asked this great question… and while he did not have the answer to Dr. Pigliucci’s challenge, I do:

Assuming the validity of the “underlying instinct to survive and reproduce,” then, of the two positions (belief and non-belief) available for us to choose from, which would better qualify as the most fit, if the fittest is “an individual… [that] reproduces more successfully…”?[1]  The woman who believes in God is less likely to have abortions and more likely to have a larger family than her secular counterparts.[2]  Does that mean that natural selection will result in a greater number of believers than non-believers?[3]


Footnotes


[1]  From my son’s 9th grade biology textbook: Susan Feldkamp, ex. ed., Modern Biology (Austin, TX: Holt, Rinehart, and Winston, 2002), 288; “…organisms that are better suited to their environment than others produce more offspring” American Heritage Science Dictionary, 1st ed. (Boston, MA: Houghton Mifflin, 2005), cf. natural selection, 422; “fitness (in evolution) The condition of an organism that is well adapted to its environment, as measured by its ability to reproduce itself” Oxford Dictionary of Biology, New Edition (New York, NY: Oxford University Press, 1996), cf. fitness, 202; “fitness In an evolutionary context, the ability of an organism to produce a large number of offspring that survive to reproduce themselves” Norah Rudin, Dictionary of Modern Biology (Hauppauge, NY: Barron’s Educational Series, 1997), cf. fitness, 146.

[2]  Dinesh D’Souza points to this in his recent book, What’s So Great About Christianity:

  • Russia is one of the most atheist countries in the world, and abortions there outnumber live births by a ratio of two to one. Russia’s birth rate has fallen so low that the nation is now losing 700,000 people a year. Japan, perhaps the most secular country in Asia, is also on a kind of population diet: its 130 million people are expected to drop to around 100 million in the next few decades. Canada, Australia, and New Zealand find themselves in a similar predicament. Then there is Europe. The most secular continent on the globe is decadent in the quite literal sense that its population is rapidly shrinking. Birth rates are abysmally low in France, Italy, Spain, the Czech Republic, and Sweden. The nations of Western Europe today show some of the lowest birth rates ever recorded, and Eastern European birth rates are comparably low. Historians have noted that Europe is suffering the most sustained reduction in its population since the Black Death in the fourteenth century, when one in three Europeans succumbed to the plague. Lacking the strong religious identity that once characterized Christendom, atheist Europe seems to be a civilization on its way out. Nietzsche predicted that European decadence would produce a miserable “last man” devoid of any purpose beyond making life comfortable and making provision for regular fornication. Well, Nietzsche’s “last man” is finally here, and his name is Sven. Eric Kaufmann has noted that in America, where high levels of immigration have helped to compensate for falling native birth rates, birth rates among religious people are almost twice as high as those among secular people. This trend has also been noticed in Europe. What this means is that, by a kind of natural selection, the West is likely to evolve in a more religious direction. This tendency will likely accelerate if Western societies continue to import immigrants from more religious societies, whether they are Christian or Muslim. 
Thus we can expect even the most secular regions of the world, through the sheer logic of demography, to become less secular over time…. My conclusion is that it is not religion but atheism that requires a Darwinian explanation. Atheism is a bit like homosexuality: one is not sure where it fits into a doctrine of natural selection. Why would nature select people who mate with others of the same sex, a process with no reproductive advantage at all? (17, 19)

Some other studies and articles of note: Mohit Joshi, “Religious women less likely to get abortions than secular women” (last accessed 9-6-2016), Top Health News, Health News United States (1-31-08); Anthony Gottlieb, “Faith Equals Fertility,” Intelligent Life, a publication of the Economist magazine (winter 2008) [THIS LINK IS DEAD] most of the original Economist article can be found at the Washington Times as well as The Immanent Frame (both accessed 9-6-2016); W. Bradford Wilcox, “Fertility, Faith, & the Future of the West: A conversation with Phillip Longman” (last accessed 9-6-2016), Christianity Today, Books & Culture: A Christian Review (5-01-2007); Pippa Norris and Ronald Inglehart, Sacred and Secular: Religion and Politics Worldwide (New York, NY: Cambridge University Press, 2004), 3-32, esp. 24-29 — I recommend this book for deep thinking on the issue.

  • And churchgoing women have more children than their nonreligious peers, according to the Center for Disease Control’s National Survey of Family Growth, an ongoing survey spanning 2011-2015. The survey involves about 5,000 interviews per year, conducted by the University of Michigan Institute for Social Research. Women between the ages of 15 and 44 who attend religious services at least weekly have 1.42 children on average, compared with the 1.11 children of similar-age women who rarely or never attend services. More religious women said they also intend to have more kids (2.62 per woman) than nonreligious women (2.10 per woman), the survey found. (Baby Boom: Religious Women Having More Kids ~ LiveScience)
  • In fact, Blume’s research also shows quite vividly that secular, nonreligious people are being dramatically out-reproduced by religious people of any faith. Across a broad swath of demographic data relating to religiosity, the godly are gaining traction in offspring produced. For example, there’s a global-level positive correlation between frequency of parental worship attendance and number of offspring. Those who “never” attend religious services bear, on a worldwide average, 1.67 children per lifetime; “once per month,” and the average goes up to 2.01 children; “more than once a week,” 2.5 children. Those numbers add up—and quickly. Some of the strongest data from Blume’s analyses, however, come from a Swiss Statistic Office poll conducted in the year 2000. These data are especially valuable because nearly the entire Swiss population answered this questionnaire—6,972,244 individuals, amounting to 95.67% of the population—which included a question about religious denomination. “The results are highly significant,” writes Blume: “…women among all denominational categories give birth to far more children than the non-affiliated. And this remains true even among those (Jewish and Christian) communities who combine nearly double as much births with higher percentages of academics and higher income classes as their non-affiliated Swiss contemporaries.” (God’s little rabbits: Religious people out-reproduce secular ones by a landslide ~ Scientific American)
  • Another value that is both measurable and germane to fertility is the importance of religion. People who are actively religious tend to marry more and stay together longer. To the extent that time spent married during reproductive years increases fertility, then religion would be a positive factor in fertility rates. For example, in Canada women who had weekly religious attendance were 46 percent more likely to have a third child than women who did not. (The Northern America Fertility Divide ~ Hoover Institution)
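The observation above that these fertility differentials "add up, and quickly" is simple compound arithmetic, and can be sketched in a few lines. The sketch below is my own illustration, not Blume's model: it uses only the two worldwide lifetime-fertility averages quoted above (2.5 children for those attending services more than once a week, 1.67 for those who never attend), and my own simplifying assumptions of an even starting split, two parents per child, and no mortality, conversion, or defection between groups.

```python
# Illustrative sketch (my assumptions, not Blume's model): project the
# religious share of a population across generations, given the lifetime
# fertility averages quoted above. Ignores mortality, intermarriage, and
# switching between groups, so it isolates the raw compounding effect.

RELIGIOUS_FERTILITY = 2.5   # children per lifetime, "more than once a week"
SECULAR_FERTILITY = 1.67    # children per lifetime, "never" attend

def project_share(generations, religious=50.0, secular=50.0):
    """Return the religious share (%) after the given number of generations."""
    for _ in range(generations):
        # Each parent accounts for fertility/2 offspring (two parents per child).
        religious *= RELIGIOUS_FERTILITY / 2
        secular *= SECULAR_FERTILITY / 2
    return 100 * religious / (religious + secular)

for g in range(5):
    print(f"generation {g}: religious share = {project_share(g):.1f}%")
```

Even under these crude assumptions, the fertility gap alone moves a 50/50 split to roughly a five-to-one religious majority within four generations, which is the compounding effect the article is pointing at.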

[3] Adapted from a question by a student at a formal debate between Dr. Massimo Pigliucci and Dr. William Lane Craig. The debate is entitled “Craig vs. Pigliucci: Does the Christian God Exist?”  (DVD, Christian Apologetics, Biola University, apologetics@biola.edu ~ Category Number: 103000-400310-56107-Code: WLC-RFM014V).

Another aspect that shows the naturally selective advantage of belief and longevity (the opportunity to leave more offspring) is the positive influence of religion:


Social Sciences Agree

~ Religious More “Fit” ~


Via my post on family values: A Family Values [Atheist] Mantra Dissected: Nominal vs. Committed

Social Scientists Agree

  • Religious Belief Reduces Crime: Summary of the First Panel Discussion. Panelists for this important discussion included social scientists Dr. John DiIulio, professor of politics and urban affairs at Princeton University; David Larson, M.D., President of the National Institute for Healthcare Research; Dr. Byron Johnson, Director of the Center for Crime and Justice Policy at Vanderbilt University; and Gary Walker, President of Public/Private Ventures. The panel focused on new research, confirming the positive effects that religiosity has on turning around the lives of youth at risk.
  • Dr. Larson laid the foundation for the discussion by summarizing the findings of 400 studies on juvenile delinquency, conducted during the past two decades. He believes that although more research is needed, we can say without a doubt that religion makes a positive contribution.
  • His conclusion: “The better we study religion, the more we find it makes a difference.” Previewing his own impressive research, Dr. Johnson agreed. He has concluded that church attendance reduces delinquency among boys even when controlling for a number of other factors including age, family structure, family size, and welfare status. His findings held equally valid for young men of all races and ethnicities.
  • Gary Walker has spent 25 years designing, developing and evaluating many of the nation’s largest public and philanthropic initiatives for at-risk youth. His experience tells him that faith-based programs are vitally important for two reasons. First, government programs seldom have any lasting positive effect. While the government might be able to design [secular/non-God] programs that occupy time, these programs, in the long-term, rarely succeed in bringing about the behavioral changes needed to turn kids away from crime. Second, faith-based programs are rooted in building strong adult-youth relationships; and less concerned with training, schooling, and providing services, which don’t have the same direct impact on individual behavior. Successful mentoring, Walker added, requires a real commitment from the adults involved – and a willingness to be blunt. The message of effective mentors is simple. “You need to change your life, I’m here to help you do it, or you need to be put away, away from the community.” Government, and even secular philanthropic programs, can’t impart this kind of straight talk.
  • Sixth through twelfth graders who attend religious services once a month or more are half as likely to engage in at-risk behaviors such as substance abuse, sexual excess, truancy, vandalism, drunk driving and other trouble with police. Search Institute, “The Faith Factor,” Source, Vol. 3, Feb. 1992, p.1.
  • Churchgoers are more likely to aid their neighbors in need than are non-attendees. George Barna, What Americans Believe, Regal Books, 1991, p. 226.
  • Three out of four Americans say that religious practice has strengthened family relationships. George Gallup, Jr. “Religion in America: Will the Vitality of Churches Be the Surprise of the Next Century,” The Public Perspective, The Roper Center, Oct./Nov. 1995.
  • Church attendance lessens the probabilities of homicide and incarceration. Nadia M. Parson and James K. Mikawa: “Incarceration of African-American Men Raised in Black Christian Churches.” The Journal of Psychology, Vol. 125, 1990, pp.163-173.
  • Religious practice lowers the rate of suicide. Joubert, Charles E., “Religious Nonaffiliation in Relation to Suicide, Murder, Rape and Illegitimacy,” Psychological Reports 75:1 part 1 (1994): 10; Jon W. Hoelter: “Religiosity, Fear of Death and Suicide Acceptability.” Suicide and Life-Threatening Behavior, Vol. 9, 1979, pp. 163-172.
  • The presence of active churches, synagogues… reduces violent crime in neighborhoods. John J. Dilulio, Jr., “Building Spiritual Capital: How Religious Congregations Cut Crime and Enhance Community Well-Being,” RIAL Update, Spring 1996.
  • People with religious faith are less likely to be school drop-outs, single parents, divorced, drug or alcohol abusers. Ronald J. Sider and Heidi Roland, “Correcting the Welfare Tragedy,” The Center for Public Justice, 1994.
  • Church involvement is the single most important factor in enabling inner-city black males to escape the destructive cycle of the ghetto. Richard B. Freeman and Harry J. Holzer, eds., The Black Youth Employment Crisis, University of Chicago Press, 1986, p.354.
  • Attending services at a church or other house of worship once a month or more makes a person more than twice as likely to stay married than a person who attends once a year or less. David B. Larson and Susan S. Larson, “Is Divorce Hazardous to Your Health?” Physician, June 1990. Improving Personal Well-Being
  • Regular church attendance lessens the possibility of cardiovascular diseases, cirrhosis of the liver, emphysema and arteriosclerosis. George W. Comstock and Kay B. Partridge: “Church attendance and health.” Journal of Chronic Disease, Vol. 25, 1972, pp. 665-672.
  • Regular church attendance significantly reduces the probability of high blood pressure. David B. Larson, H. G. Koenig, B. H. Kaplan, R. S. Greenberg, E. Logue and H. A. Tyroler: “The Impact of religion on men’s blood pressure.” Journal of Religion and Health, Vol. 28, 1989, pp. 265-278. W. T. Maramot: “Diet, Hypertension and Stroke.” in M. R. Turner (ed.) Nutrition and Health, Alan R. Liss, New York, 1982, p. 243.
  • People who attend services at least once a week are much less likely to have high blood levels of interleukin-6, an immune system protein associated with many age-related diseases. Harold Koenig and Harvey Cohen, The International Journal of Psychiatry and Medicine, October 1997.
  • Regular practice of religion lessens depression and enhances self-esteem. Peter L. Bensen and Barnard P. Spilka: “God-Image as a function of self-esteem and locus of control” in H. N. Maloney (ed.) Current Perspectives in the Psychology of Religion, Eerdmans, Grand Rapids, 1977, pp. 209-224. Carl Jung: “Psychotherapies on the Clergy” in Collected Works Vol. 2, 1969, pp. 327-347.
  • Church attendance is a primary factor in preventing substance abuse and repairing damage caused by substance abuse. Edward M. Adalf and Reginald G. Smart: “Drug Use and Religious Affiliation, Feelings and Behavior.” British Journal of Addiction, Vol. 80, 1985, pp. 163-171. Jerald G. Bachman, Lloyd D. Johnson, and Patrick M. O’Malley: “Explaining the Recent Decline in Cocaine Use Among Young Adults: Further Evidence That Perceived Risks and Disapproval Lead to Reduced Drug Use.” Journal of Health and Social Behavior, Vol. 31, 1990, pp. 173-184. Deborah Hasin, Jean Endicott, and Collins Lewis: “Alcohol and Drug Abuse in Patients With Affective Syndromes.” Comprehensive Psychiatry, Vol. 26, 1985, pp. 283-295. The findings of this NIMH-supported study were replicated in the Bachman et al. study above.

(From a post entitled “Love“)

(Also see 52 Reasons To Go To Church.) These indicators are also mentioned in a Heritage Foundation article, “Why Religion Matters: The Impact of Religious Practice on Social Stability.”

More Stats

…A survey of 1,600 Canadians asked them what were their beliefs about God and what moral values they considered to be “very important.” The results of the survey are shown below:


Although the differences between theists and atheists in the importance of values such as honesty, politeness, and friendliness are generally small, moral values emphasized by religious traditions such as Christianity, including patience, forgiveness, and generosity, exhibit major differences in attitudes (30%+ differences between theists and atheists). (Source)

  • The strength of the family unit is intertwined with the practice of religion. Churchgoers are more likely to be married, less likely to be divorced or single, and more likely to manifest high levels of satisfaction in marriage.
  • Church attendance is the most important predictor of marital stability and happiness.
  • The regular practice of religion helps poor persons move out of poverty. Regular church attendance, for example, is particularly instrumental in helping young people to escape the poverty of inner-city life.
  • Religious belief and practice contribute substantially to the formation of personal moral criteria and sound moral judgment.
  • Regular religious practice generally inoculates individuals against a host of social problems, including suicide, drug abuse, out-of-wedlock births, crime, and divorce.
  • The regular practice of religion also encourages such beneficial effects on mental health as less depression (a modern epidemic), more self-esteem, and greater family and marital happiness.
  • In repairing damage caused by alcoholism, drug addiction, and marital breakdown, religious belief and practice are a major source of strength and recovery.
  • Regular practice of religion is good for personal physical health: It increases longevity, improves one’s chances of recovery from illness, and lessens the incidence of many killer diseases.

So we can see that the above are important factors in a healthy, stable family, one with the best chance of instilling “family values.” What about divorce rates and the 2009 data? This is dealt with well at Christian Action League, which shows how Barna and the government can miscategorize whole swaths of people and their affiliations:

...Party of the Rich?

Only one of the top 25 donors to political 527 groups has given to a conservative organization, shedding further light on the huge disparity between Democrats and Republicans in this new fund-raising area. The top three 527 donors so far in the 2004 election cycle – Hollywood producer Steven Bing, Progressive Corp. chairman Peter Lewis and financier George Soros – have combined to give nearly $24 million to prominent liberal groups. They include Joint Victory Campaign 2004, America Coming Together, and MoveOn.org.

Dems the richest five senators?

Financial statements revealed the five richest members of the United States Senate are Democrats. The annual disclosure allows senators to represent their net worth inside a broad range.

Presidential candidate Sen. John Kerry (D-MA) is far ahead of his colleagues with $163 million, most of it coming from his wife’s inheritance of the Heinz fortune. The actual estimate is over $400 million.

Lagging behind is Sen. Herb Kohl (D-WI) at $111 million. The Wisconsin senator’s family owns a department store chain. Sen. John “Jay” Rockefeller (D-WV) comes in third with a personal fortune reported to be $81 million.

Former Goldman Sachs chairman Sen. John Corzine (D-NJ) weighs in at $71 million, with Sen. Diane Feinstein (D-CA) rounding out the top five at $26.3 million. Sen. Peter Fitzgerald (R-IL) breaks the string of Democrat multimillionaires in sixth place at $26.1 million. Sens. Frank Lautenberg (D-NJ), Bill Frist (R-TN), John Edwards (D-NC), and Edward Kennedy (D-MA) complete the top ten.

Democrats are 10 of the top 15 richest senators.

(Rich Snobs)

…Wright did his own research using the General Social Survey, a huge study conducted by the National Opinion Research Center at the University of Chicago, and found that folks who identify as Christians but rarely attend church have a divorce rate of 60 percent compared to 38 percent among people who attend church regularly. More generally, he found that Christians, similar to adherents of other traditional faiths, have a divorce rate of 42 percent compared with 50 percent among those without a religious affiliation.

And his is not the only research that is showing a link between strong faith and increased marriage stability.

University of Virginia sociologist W. Bradford Wilcox, director of the National Marriage Project, concluded that “active conservative Protestants” who regularly attend church are 35 percent less likely to divorce than are those with no faith affiliation. He used the National Survey of Families and Households to make his analysis.

[….]

Glenn Stanton, the director for family formation studies at Focus on the Family in Colorado Springs, Colo., has been writing articles to spread the truth about the lower divorce rate among practicing Christians.

“Couples who regularly practice any combination of serious religious behaviors and attitudes — attend church nearly every week, read their Bibles and spiritual materials regularly; pray privately and together; generally take their faith seriously, living not as perfect disciples, but serious disciples — enjoy significantly lower divorce rates than mere church members, the general public and unbelievers,” Stanton wrote in the Baptist Press early this year.

At issue in Barna’s studies is how he defined “Christian” and to what other groups he compared the “Christian” divorce rate. Apparently, his study compared what he termed “born-again” Christians — those who described their faith in terms of “personal commitment,” “accept as savior” and other evangelical, born-again language to three other groups, which included self-identified Christians who do not describe their faith with those terms, members of other, non-Christian religions and people of no religious beliefs.

Because his second group would have included many Catholics and mainline Protestants, Wright points out that Barna was, in many ways, “comparing Christians against Christians.” No wonder the rates were similar….

In USA Today, David Kinnaman, Barna’s president, said that “the statistical differences reflect varied approaches, with Wright looking more at attendance and his research firm dwelling on theological commitments.” Duh! The bottom line seems to be that the more seriously couples take their faith, the less likely they are to get a divorce. That seems like a self-evident truth, but it appears there is also evidence for it. In other words, this is a nominal vs. committed Christian vs. secular person battle.

I could go on and on, but let’s summarize what we have learned, and it all revolves around this:

  • “There’s something about being a nominal ‘Christian’ that is linked to a lot of negative outcomes when it comes to family life.”

I realize that much of this can be classified broadly as “The Ecological Fallacy,” but it is an amassing of stats to show that the committed Christian in fact understands the totality of “family values” and commits to them more than the secular person.


1a) Those who attend church more are to be found in the Republican Party;
1b) Those who do not, the Democratic Party;
2a) Those in the Republican Party donate much more to charitable causes;
2b) Those in the Democratic Party are much stingier;
3a) Republicans earn less and give more;
3b) Democrats earn more and give less;
4a) Conservative Christians and Jews (people who believe in Heaven and Hell) commit less crimes;
4b) Liberal religious persons (universalists) have a higher rate of crime;
5a) Regular church attendees have a lower drug use rate;
5b) Irreligious persons have a higher rate;
6a) Moral “oughts” are answered in Christian theism (one “ought” not rape because it is absolutely, morally wrong);
6b) Moral “oughts” are merely current consensus of the most individuals, there is no absolute moral statement that can be made about rape;
7a) Republicans are happier than Democrats;
7b) Democrats are more depressed;
8a) The sex lives of married, religious persons are better/more fulfilling — sex is being shown to be a “religious” experience after all;
8b) The sex lives of irreligious persons are less fulfilling;
9a) The conservative is more likely to reach orgasm [the conservative woman, I assume];
9b) The liberal woman is not;
10a) They are less likely to sleep around, which would also indicate lower STDs;
10b) Democrats are more likely to have STDs through having more sex partners;
11a) Republicans are less likely (slightly, but this is so because of the committed Christians in the larger demographic) to have extra-marital affairs;
11b) Democrats more likely;
12a) Republicans over the last three decades have been reproducing more…
12b) Democrats abort more often and have fewer children through educational/career decisions;
13a) Christians are more likely to have children and impact the world;
13b) Skeptics replace family with pleasure and travel.


...Happiness Is A Moral Obligation

Forty-three percent of people who attend religious services weekly or more say they’re very happy, compared to 26 percent of those who go seldom or never. The Pew analysis does not answer the question of how religion, Republicanism and happiness might be related, however.

[….]

Most young people start out as naive, idealistic liberals. But as they get older, that changes. They get more conservative, usually because they grow up. But just imagine that you never get out of that liberal mindset. You go through your whole life trying to check people into a victim box, always feeling offended, always trying to right all of the wrongs in the world, and always blaming government for it. It’s no wonder you’d end up miserable when you get older! Going through your entire life feeling like that would make you a very angry, bitter, jealous, selfish person — and often, that describes aging liberals to a T.

All in all, being a Republican gives you a 7% edge in the happiness department, which doesn’t sound like much, but it’s a greater factor than race, ethnicity, or gender. And just a reminder — Republicans have the advantage across all class lines as well, from upper class to middle class to lower class. Lower class Republicans are happier than lower class Democrats. Middle class Republicans are happier than middle class Democrats. And upper class Republicans are happier than upper class Democrats.

And I’ll say it again. It’s because of the difference in world view.

(RightWing News)

[…..]

Survival of the Fittest!

“Since women that believe in God are less likely to have abortions, does that mean that natural selection will result in a greater number of believers than non-believers?” Assuming the validity of the “underlying instinct to survive and reproduce,” then, of the two positions (belief and non-belief) available for us to choose from, which would better qualify as the most fit, if the fittest is “an individual… [that] reproduces more successfully…”?  The woman who believes in God is less likely to have abortions and more likely to have a larger family than her secular counterparts.  Does that mean that natural selection will result in a greater number of believers than non-believers?

Also,

  • Divorce. Marriages in which both spouses frequently attend religious services are less likely to end in divorce. Marriages in which both husband and wife attend church frequently are 2.4 times less likely to end in divorce than marriages in which neither spouse attends religious services.1
  • Mother-Child Relationship. Mothers who consider religion to be important in their lives report better quality relationships with their children. According to mothers’ reports, regardless of the frequency of their church attendance, those who considered religion to be very important in their lives tended to report, on average, a higher quality of relationship with their children than those who did not consider religion to be important.2
  • Father-Child Relationship. Fathers’ religiosity is associated with the quality of their relationships with their children. A greater degree of religiousness among fathers was associated with better relationships with their children, greater expectations for positive relationships in the future, investment of thought and effort into their relationships with their children, greater sense of obligation to stay in regular contact with their children, and greater likelihood of providing emotional support and unpaid assistance to their children and grandchildren. Fathers’ religiousness was measured on six dimensions, including the importance of faith, guidance provided by faith, religious attendance, religious identity, denominational affiliation, and belief in the importance of religion for their children.3
  • Well-Being of High School Seniors. Among high school seniors, religious attendance and a positive attitude toward religion are correlated with predictors of success and well-being. Positive attitudes towards religion and frequent attendance at religious activities were related to numerous predictors of success and well-being for high-school seniors, including: positive parental involvement, positive perceptions of the future, positive attitudes toward academics, less frequent drug use, less delinquent behavior, fewer school attendance problems, more time spent on homework, more frequent volunteer work, recognition for good grades, and more time spent on extracurricular activities.4
  • Life Expectancy. Religious attendance is associated with higher life expectancy at age 20. Life expectancy at age 20 was significantly related to church attendance. Life expectancy was 61.9 years for those attending church once a week and 59.7 for those attending less than once a week.5
  • Drinking, Smoking and Mortality. Frequent religious attendance is correlated with lower rates of heavy drinking, smoking, and mortality. Compared with peers who did not attend religious services frequently, those who did had lower mortality rates and this relationship was stronger among women than among men. In addition, frequent attendees were less likely to smoke or drink heavily at the time of the first interview. Frequent attendees who did smoke or drink heavily at the time of the first interview were more likely than nonattendees to cease these behaviors by the time of the second interview.6
  • Volunteering. Individuals who engage in private prayer are more likely to join voluntary associations aimed at helping the disadvantaged. Individuals who engaged in private prayer were more likely to report being members of voluntary associations aimed at helping the elderly, poor and disabled when compared to those who did not engage in private prayer. Prayer increased the likelihood of volunteering for an organization that assisted the elderly, poor and disabled, on average, by 20 percent.7
  • Charity and Volunteering. Individuals who attend religious services weekly are more likely to give to charities and to volunteer. In 2000, compared with those who rarely or never attended a house of worship, individuals who attended a house of worship nearly once a week or more were 25 percentage points more likely to donate to charity (91 percent vs. 66 percent) and 23 points more likely to volunteer (67 percent vs. 44 percent).8
  • Voting. Individuals who participated in religious activities during adolescence tend to have higher rates of electoral participation as young adults. On average, individuals who reported participating in religious groups and organizations as adolescents were more likely to register to vote and to vote in a presidential election as young adults when compared to those who reported not participating in religious groups and organizations.9
  • Ethics in Business. Business professionals who assign greater importance to religious interests are more likely to reject ethically questionable business decisions. Business leaders who assigned greater importance to religious interests were more likely to reject ethically questionable business decisions than their peers who attached less importance to religious interests. Respondents were asked to rate the ethical quality of 16 business decisions. For eight of the 16 decisions, respondents who attached greater importance to religious interests had lower average ratings, which indicated a stronger disapproval of ethically questionable decisions, compared to respondents who attached less importance to religious interests.10

Footnotes

  1. Vaughn R. A. Call and Tim B. Heaton, “Religious Influence on Marital Stability,” Journal for the Scientific Study of Religion 36, No. 3 (September 1997): 382-392.
  2. Lisa D. Pearce and William G. Axinn, “The Impact of Family Religious Life on the Quality of Mother-Child Relations,” American Sociological Review 63, No. 6 (December 1998): 810-828.
  3. Valerie King, “The Influence of Religion on Fathers’ Relationships with Their Children,” Journal of Marriage and Family 65, No. 2 (May 2003): 382-395.
  4. Jerry Trusty and Richard E. Watts, “Relationship of High School Seniors’ Religious Perceptions and Behavior to Educational, Career, and Leisure Variables,” Counseling and Values 44, No. 1 (October 1999): 30-39.
  5. Robert A. Hummer, Richard G. Rogers, Charles B. Nam, and Christopher G. Ellison, “Religious Involvement and U.S. Adult Mortality,” Demography 36, No. 2 (May 1999): 273-285.
  6. William J. Strawbridge, Richard D. Cohen, Sarah J. Shema, and George A. Kaplan, “Frequent Attendance at Religious Services and Mortality over 28 Years,” American Journal of Public Health 87, No. 6 (June 1997): 957-961.
  7. Matthew T. Loveland, David Sikkink, Daniel J. Myers, and Benjamin Radcliff, “Private Prayer and Civic Involvement,” Journal for the Scientific Study of Religion 44, No. 1 (March 2005): 1-14.
  8. Arthur C. Brooks, Who Really Cares: America’s Charity Divide, (New York: Basic Books 2006), 31-52.
  9. Michelle Frisco, Chandra Muller and Kyle Dodson, “Participation in Voluntary Youth-Serving Associations and Early Adult Voting Behavior,” Social Science Quarterly 85, No. 3 (September 2004): 660-676.
  10. Justin Longenecker, Joseph McKinney, and Carlos Moore, “Religious Intensity, Evangelical Christianity, and Business Ethics: An Empirical Study,” Journal of Business Ethics 55, No. 4 (December 2004): 371-384.

Excerpts on Slavery (D’Souza, Williams, and Sowell) UPDATED

First, just to note… this part of American history is rotten to the core. But I still feel the Democratic Party that committed these crimes is just as rotten to the core. Also worth keeping in mind: I am focusing [in these quotes] primarily on the economics of slavery.

For my oldest

(Click the title to jump)

  1. Thomas Sowell, Black Rednecks and White Liberals (San Francisco, CA: Encounter Books, 2005), 157-159.
  2. Thomas Sowell, The Thomas Sowell Reader (New York, NY: Basic Books, 2011), 245-247.
  3. Dinesh D’Souza, America: Imagine a World Without Her (Washington, DC: Regnery, 2014), 24-26.
  4. Thomas Sowell, Economic Facts and Fallacies (New York, NY: Basic Books, 2008), 160-166.
  5. Paul Johnson, A History of the American People (New York, NY: Harper Perennial, 1997), 3-9.
  6. Walter E. Williams, Race & Economics: How Much Can Be Blamed On Discrimination? (Stanford, CA: Hoover Institution Press, 2011), 15-27, 29.

[p. 157>] Economics

Those who think of slavery in economic terms often assume that it is a means by which a society, or at least its non-slave population, becomes richer. Some have even claimed that the industrial revolution in Western civilization was based on the profits extracted from the exploitation of slaves. Rather than rehash a large and controversial literature on this issue, we may instead look at the economic condition of countries or regions that used vast numbers of slaves in the past. Both in Brazil and in the United States—the countries with the two largest slave populations in the Western Hemisphere—the end of slavery found the regions in which slaves had been concentrated poorer than other regions of these same countries. For the United States, a case could be made that this was due to the Civil War, which did so much damage to the South, but no such explanation would apply to Brazil, which fought no civil war over this issue. Moreover, even in the United States, the South lagged behind the North in many ways even before the Civil War.

Although slavery in Europe died out before it was abolished in the Western Hemisphere, as late as 1776 slavery had not yet died out all across the continent when Adam Smith wrote in The Wealth of Nations that it still existed in some eastern regions. But, even then, Eastern Europe was much poorer than Western Europe. The slavery of North Africa and the Middle East, over the centuries, took more slaves from sub-Saharan Africa than the Western Hemisphere did (in addition to large imports of slaves from Eastern Europe and Southern Europe to the Moslem countries of North [p. 158>] Africa and the Middle East). But these remained largely poor countries until the discovery and extraction of their vast oil deposits.

In many parts of the non-Western world, slaves were sources of domestic amenities and means of displaying wealth with an impressive retinue, rather than sources of wealth. Often they were a drain on the wealth already possessed. According to a scholarly study of slavery in China, the slaves there “did not generate any surplus; they consumed it.” Another study concluded: “The Middle East and the Arab world rarely used slaves for productive activities.” Even though some slaveowners—those whose slaves produced commercial crops or other saleable products—received wealth from the fruits of the unpaid labor of these slaves, that is very different from saying that the society as a whole, or even its non-slave population as a whole, ended up wealthier than it would have been in the absence of slavery.

Not only in societies where slaves were more often consumers than producers of wealth, but even in societies where commercial slavery was predominant, this did not automatically translate into enduring wealth. Unlike a frugal capitalist class, such as created the industrial revolution, even commercial slaveowners in the American antebellum South tended to spend lavishly, often ending up in debt or even losing their plantations to foreclosures by creditors. However, even if British slaveowners had saved and invested all of their profits from slavery, it would have amounted to less than two percent of British domestic investment.

In the United States, it is doubtful whether the profits of slavery would have covered the enormous costs of the Civil War—a war that was fought over the immediate issue of secession, but the reason for the secession was to safeguard slavery from the growing anti-slavery sentiment outside the South, symbolized by the election of Abraham Lincoln. Brazil, which imported several times as many slaves as the United States, and perhaps consumed more slaves than any other nation in history, was nevertheless still a relatively undeveloped country when slavery ended there in 1888, and its subsequent economic development was largely the work of immigrants from Europe and Japan.

In short, even though some individual slaveowners grew rich and some family fortunes were founded on the exploitation of [p. 159>] slaves, that is very different from saying that the whole society, or even its non-slave population as a whole, was more economically advanced than it would have been in the absence of slavery. What this means is that, whether employed as domestic servants or producing crops or other goods, millions suffered exploitation and dehumanization for no higher purpose than the transient aggrandizement of slaveowners.

Thomas Sowell, Black Rednecks and White Liberals (San Francisco, CA: Encounter Books, 2005), 157-159.


[APA] Sowell, T. (2005). Black Rednecks and White Liberals. San Francisco, CA: Encounter Books.

[MLA] Sowell, Thomas. Black Rednecks and White Liberals. San Francisco: Encounter Books, 2005. Print.

[Chicago] Sowell, Thomas. Black Rednecks and White Liberals. San Francisco: Encounter Books, 2005.

Slavery

[p.245>] One of the many sad signs of our times is that people are not only playing the race card, they are playing the slavery card, which is supposedly the biggest trump of all. At the so-called “million man march” in Washington, poet Maya Angelou rang all the changes on slavery, at a rally billed as forward-looking and as being about black independence rather than white guilt. Meanwhile, best-selling author Dinesh D’Souza was being denounced in the media for having said that slavery was not a racist institution.

First of all, anyone familiar with the history of slavery around the world knows that its origins go back thousands of years and that slaves and slaveowners were very often of the same race. Those who are ignorant of all this, or who think of slavery in the United States as if it were the only slavery, go ballistic when anyone tells them that this institution was not based on race.

Blacks were not enslaved because they were black, but because they were available at the time. Whites enslaved other whites in Europe for centuries before the first black slave was brought to the Western Hemisphere.

Only late in history were human beings even capable of crossing an ocean to get millions of other human beings of a different race. In the thousands of years before that, not only did Europeans enslave other Europeans, Asians enslaved other Asians, Africans enslaved other Africans, and the native peoples of the Western Hemisphere enslaved other native peoples of the Western Hemisphere.

D’Souza was right. Slavery was not about race. The fact that his critics are ignorant of history is their problem.

What was peculiar about the American situation was not just that slaves and slaveowners were of different races, but that slavery contradicted the whole philosophy of freedom on which the society was founded. If all men were created equal, as the Declaration of Independence said, then blacks had to be depicted as less than men.

While the antebellum South produced a huge volume of apologetic literature trying to justify slavery on racist grounds, no such justification was [p. 246>] considered necessary in vast reaches of the world and over vast expanses of time. In most parts of the world, people saw nothing wrong with slavery.

Strange as that seems to us today, a hundred years ago only Western civilization saw anything wrong with slavery. And two hundred years ago, only a minority in the West thought it was wrong.

Africans, Arabs, Asians and others not only maintained slavery long after it was abolished throughout the Western Hemisphere, they resisted all attempts of the West to stamp out slavery in their lands during the age of imperialism. Only the fact that the West had greater firepower and more economic and political clout enabled them to impose the abolition of slavery, as they imposed other Western ideas, on the non-Western world.

Those who talk about slavery as if it were just the enslavement of blacks by whites ignore not only how widespread this institution was and how far back in history it went, they also ignore how recently slavery continued to exist outside of Western civilization.

While slavery was destroyed in the West during the nineteenth century, the struggle to end slavery elsewhere continued well into the twentieth century— and pockets of slavery still exist to this moment in Africa. But there is scarcely a peep about it from black “leaders” in America who thunder about slavery in the past.

If slavery were the real issue, then slavery among flesh-and-blood human beings alive today would arouse far more outcry than past slavery among people who are long dead. The difference is that past slavery can be cashed in for political benefits today, while slavery in North Africa only distracts from these political goals. Worse yet, talking about slavery in Africa would undermine the whole picture of unique white guilt requiring unending reparations.

While the Western world was just as guilty as other civilizations when it came to enslaving people for thousands of years, it was unique only in finally deciding that the whole institution was immoral and should be ended. But this conclusion was by no means universal even in the Western world, however obvious it may seem to us today.

Thousands of free blacks owned slaves in the antebellum South. And, years after the Emancipation Proclamation in the United States, whites as [p. 247>] well as blacks were still being bought and sold as slaves in North Africa and the Middle East.

Anyone who wants reparations based on history will have to gerrymander history very carefully. Otherwise, practically everybody would owe reparations to practically everybody else.

Thomas Sowell, The Thomas Sowell Reader (New York, NY: Basic Books, 2011), 245-247.


[APA] Sowell, T. (2011). The Thomas Sowell Reader. New York, NY: Basic Books.

[MLA] Sowell, Thomas. The Thomas Sowell Reader. New York: Basic Books, 2011. Print.

[Chicago] Sowell, Thomas. The Thomas Sowell Reader. New York: Basic Books, 2011.

[p. 24>] Let’s begin with Tocqueville, who observes at the outset that America is a nation unlike any other. It has produced what Tocqueville terms “a distinct species of mankind.” Tocqueville here identifies what will later be called American exceptionalism. For Tocqueville, Americans are unique because they are equal. This controversial assertion of the Declaration—that all men are created equal—Tocqueville finds to be a simple description of American reality. Americans, he writes, have internalized the democratic principle of equality. They refuse to regard one another as superior and inferior. They don’t bow and scrape in the way that people in other countries—notably in France—are known to do. In America, unlike in Europe, there are no “peasants,” only farmers. In America, there are employees but no “servants.” And today America may be the only country where we call a waiter “sir” as if he were a knight.

Equality for Tocqueville is social, not economic. Competition, he writes, produces unequal outcomes on the basis of merit. “Natural inequality will soon make way for itself and wealth will pass into the hands of the most capable.” But this is justified because wealth is [p. 25>] earned and not stolen. Tocqueville is especially struck by the fact that rich people in America were once poor. He notes, with some disapproval, that Americans have an “inordinate” love of money. Yet he cannot help being impressed in observing among Americans the restless energy of personal striving and economic competition. “Choose any American at random and he should be a man of burning desires, enterprising, adventurous, and above all an innovator.” What makes success possible, he writes, is the striving of the ordinary man. The ordinary man may be vulgar and have a limited education, but he has practical intelligence and a burning desire to succeed. “Before him lies a boundless continent and he urges onward as if time pressed and he was afraid of finding no room for his exertions.” Tocqueville observes what he terms a “double migration”: restless Europeans coming to the East Coast of America, while restless Americans move west from the Atlantic toward the Pacific Ocean. Tocqueville foresees that this ambitious, energetic people will expand the borders of the country and ultimately become a great nation. “It is the most extraordinary sight I have ever seen in my life. These lands which are as yet nothing but one immense wood will become one of the richest and most powerful countries in the world.”

There is one exception to the rule of the enterprising and hardworking American. At one point, Tocqueville stands on the Ohio-Kentucky border. He looks north and south and is startled by the contrast. He contrasts “industrious Ohio” with “idle Kentucky.” While Ohio displays all the signs of work and well-maintained houses and fields, Kentucky is inhabited “by a people without energy, without ardor, without a spirit of enterprise.” Since the climate and conditions on both sides of the border are virtually identical, what accounts for the difference? Tocqueville concludes that it is slavery. Slavery provides no incentive for slaves to work, since they don’t get to keep the product of their labor. But neither does slavery encourage [p. 26>] masters to work, because slaves do the work for them. Remarkably, slavery is bad for masters and slaves: it degrades work, so less work is done.

Dinesh D’Souza, America: Imagine a World Without Her (Washington, DC: Regnery, 2014), 24-26.


[APA] D’Souza, D. (2014). America: Imagine a World Without Her. Washington, DC: Regnery.

[MLA] D’Souza, Dinesh. America: Imagine a World Without Her. Washington, DC: Regnery, 2014. Print.

[Chicago] D’Souza, Dinesh. America: Imagine a World Without Her. Washington, DC: Regnery, 2014.

[p. 160>] Slavery

In addition to its own evils during its own time, slavery has generated fallacies that endure into our time, confusing many issues today. The distinguished historian Daniel J. Boorstin said something that was well known to many scholars, but utterly unknown to many among the general public, when he pointed out that, with the mass transportation of Africans in bondage to the Western Hemisphere, “Now for the first time in Western history, the status of slave coincided with a difference of race.”

For centuries before, Europeans had enslaved other Europeans, Asians had enslaved other Asians and Africans had enslaved other Africans. Only [p. 161>] in the modern era was there both the wealth and the technology to organize the mass transportation of people across an ocean, either as slaves or as free immigrants. Nor were Europeans the only ones to transport masses of enslaved human beings from one continent to another. North Africa’s Barbary Coast pirates alone captured and enslaved at least a million Europeans from 1500 to 1800, carrying more Europeans into bondage in North Africa than there were Africans brought in bondage to the United States and the American colonies from which it was formed. Moreover, Europeans were still being bought and sold in the slave markets of the Islamic world, decades after blacks were freed in the United States.

Slavery was a virtually universal institution in countries around the world and for thousands of years of recorded history. Indeed, archaeological evidence suggests that human beings learned to enslave other human beings before they learned to write. One of the many fallacies about slavery— that it was based on race— is sustained by the simple but pervasive practice of focusing exclusively on the enslavement of Africans by Europeans, as if this were something unique, rather than part of a much larger worldwide human tragedy. Racism grew out of African slavery, especially in the United States, but slavery preceded racism by thousands of years. Europeans enslaved other Europeans for centuries before the first African was brought in bondage to the Western Hemisphere.

The brutal reality is that vulnerable people were usually taken advantage of wherever it was feasible to take advantage of them, regardless of what race or color they were. The rise of nation states put armies and navies around some people but it was not equally possible to establish nation states in all parts of the world, partly because of geography. Where large populations had no army or navy to protect them, they fell prey to enslavers, whether in Africa, Asia or along unguarded stretches of European coastlines where Barbary pirates made raids, usually around the Mediterranean but sometimes as far away as England or Iceland.

The enormous concentration of writings and of the media in general on slavery in the Western Hemisphere, or in the United States in particular, creates a false picture which makes it difficult to understand even the history of slavery in the United States.

[p. 162>] While slavery was readily accepted as a fact of life all around the world for centuries on end, there was never a time when slavery could get that kind of universal acceptance in the United States, founded on a principle of freedom, with which slavery was in such obvious and irreconcilable contradiction. Slavery was under ideological attack from the first draft of the Declaration of Independence and a number of Northern states banned slavery in the years immediately following independence. Even in the South, the ideology of freedom was not wholly without effect, as tens of thousands of slaves were voluntarily set free after Americans gained their own freedom from England.

Most Southern slaveowners, however, were determined to hold on to their slaves and, for that, some defense was necessary against the ideology of freedom and the widespread criticisms of slavery that were its corollary. Racism became that defense. Such a defense was unnecessary in unfree societies, such as that of Brazil, which imported more slaves than the United States but developed no such virulent levels of racism as that of the American South. Outside Western civilization, no defense of slavery was necessary, as non-Western societies saw nothing wrong with it. Nor was there any serious challenge to slavery in Western civilization prior to the eighteenth century.

Racism became a justification of slavery in a society where it could not be justified otherwise— and centuries of racism did not suddenly vanish with the abolition of the slavery that gave rise to it. But the direction of causation was the direct opposite of what is assumed by those who depict the enslavement of Africans as being a result of racism. Nevertheless, racism became one of the enduring legacies of slavery. How much of it continues to endure and in what strength today is something that can be examined and debated. But many other things that are considered to be legacies of slavery can be tested empirically, rather than being accepted as foregone conclusions.

[p. 163>] The Black Family

Some of the most basic beliefs and assumptions about the black family are demonstrably fallacious. For example, it has been widely believed that black family names were the names of the slave masters who owned particular families. Such beliefs led a number of American blacks, during the 1960s especially, to repudiate those names as a legacy of slavery and give themselves new names— most famously boxing champion Cassius Clay renaming himself Muhammad Ali.

Family names were in fact forbidden to blacks enslaved in the United States, as family names were forbidden to other people in lowly positions in various other times and places— slaves in China and parts of the Middle East, for example, and it was 1870 before common people in Japan were authorized to use surnames. In Western civilization, ordinary people began to have surnames in the Middle Ages. In many places and times, family names were considered necessary and appropriate only for the elite, who moved in wider circles— both geographically and socially— and whose families’ prestige was important to take with them. Slaves in the United States secretly gave themselves surnames in order to maintain a sense of family but they did not use those surnames around whites. Years after emancipation, blacks born during the era of slavery remained reluctant to tell white people their full names.

The “slave names” fallacy is false not only because whites did not give slaves surnames but also because the names that blacks gave themselves were not simply the names of whoever owned them. During the era of slavery, it was common to choose other names. Otherwise, if all the families belonging to a given slave owner took his name, that would defeat the purpose of creating separate family identities. Ironically, when some blacks in the twentieth century began repudiating what they called “slave names,” they often took Arabic names, even though Arabs over the centuries had enslaved more Africans than Europeans had.

A fallacy with more substantial implications is that the current fatherless families so prevalent among contemporary blacks are a “legacy of slavery,” where families were not recognized. As with other social problems [p. 164>] attributed to a “legacy of slavery,” this ignores the fact that the problem has become much worse among generations of blacks far removed from slavery than among generations closer to the era of slavery. Most black children were raised in two-parent homes, even under slavery, and for generations thereafter. Freed blacks married, and marriage rates among blacks were slightly higher than among whites in the early twentieth century. Blacks also had slightly higher rates of labor force participation than whites in every census from 1890 to 1950.

While 31 percent of black children were born to unmarried women in the early 1930s, that proportion rose to 77 percent by the early 1990s. If unwed childbirth was “a legacy of slavery,” why was it so much less common among blacks who were two generations closer to the era of slavery? One sign of the breakdown of the nuclear family among blacks was that, by 1993, more than a million black children were being raised by their grandparents, about two-thirds as many as among whites, even though there are several times as many whites as blacks in the population of the United States.

When tragic retrogressions in all these respects became painfully apparent in the second half of the twentieth century, a “legacy of slavery” became a false explanation widely used, thereby avoiding confronting contemporary factors in contemporary problems.

These retrogressions were not only dramatic in themselves, they had major impacts on other important individual and social results. For example, while most black children were still being raised in two-parent families as late as 1970, only one third were by 1995. Moreover, much social pathology is highly correlated with the absence of a father, both among blacks and whites, but the magnitude of the problem is greater among blacks because fathers are missing more often in black families. While, in the late twentieth century, an absolute majority of those black families with no husband present lived in poverty, more than four-fifths of black husband-wife families did not. From 1994 on into the twenty-first century, the poverty rate among black husband-wife families was below 10 percent.

It is obviously not simply the act of getting married which drastically reduces the poverty rate among blacks, or among other groups, but the [p. 165>] values and behavior patterns which lead to marriage and which have a wider impact on many other things.

Culture

As already noted, races can differ for reasons that are not racial, because people inherit cultures as well as genes. So long as one generation raises the next, it could hardly be otherwise. Many of the social or cultural differences between American blacks and American whites nationwide today were in antebellum times pointed out as differences between white Southerners and white Northerners. These include ways of talking, rates of crime and violence, children born out of wedlock, educational attainment, and economic initiative or lack thereof.

While only about one-third of the antebellum white population of the United States lived in the South, at least 90 percent of American blacks lived in the South on into the twentieth century. In short, the great majority of blacks lived in a region with a culture that proved to be less productive and less peaceful for its inhabitants in general. Moreover, opportunities to move beyond that culture were more restricted for blacks.

While that culture was regional, both blacks and whites took the Southern culture with them when they moved out of the South. As one small but significant example, when the movement for creating public schools swept across the United States in the 1830s and 1840s, not only was that movement more successful in creating public schools in the North than in the South, those parts of Northern states like Ohio, Indiana and Illinois that were settled by white Southerners were the slowest to establish public schools.

The legacy of the Southern culture is more readily documented in the behavior of later generations than is the legacy of slavery, which some distinguished nineteenth century writers said explained the behavior of antebellum Southern whites, and which later writers said explained the behavior of blacks. In reality, the regional culture of the South existed in particular regions of Britain in centuries past, regions where people destined [p. 166>] to settle in the American South exhibited the same behavior patterns before they immigrated to the South. They were called “crackers” and “rednecks” before they crossed the Atlantic— and before they ever saw a slave. As a well-known Southern historian said, “We do not live in the past, but the past in us.”

Educational and intellectual performance is a readily documented area where the persistence of culture can be tested. As late as the First World War, white soldiers from various Southern states scored lower on mental tests than black soldiers from various Northern states. Not only did black soldiers have the advantage of better schools in the North, they also had an opportunity for the Southern culture to begin to erode in their new surroundings. Over the years, much has been made of the fact that blacks score lower than whites nationwide on mental tests. From this, some observers have concluded that this is due to a racial difference and others have concluded that this is due to some deficiency or bias in the tests. But neither explanation would account for white Southerners’ mental test scores in the First World War.

Whatever the sources of the lower educational or intellectual attainments among blacks, there are major economic and social consequences of such differences. For many years, blacks received a lesser quantity and lower quality of education in the Southern schools that most attended. But, even after the quantity gap was eliminated by the late twentieth century, the qualitative gap remained large. The test scores of black seventeen-year-olds in a variety of academic subjects were the same as the scores of whites several years younger. That is obviously not a basis for expecting equal results in an economy increasingly dependent on mental skills.

Thomas Sowell, Economic Facts and Fallacies (New York, NY: Basic Books, 2008), 160-166.


[APA] Sowell, T. (2008). Economic Facts and Fallacies. New York, NY: Basic Books.

[MLA] Sowell, Thomas. Economic Facts and Fallacies. New York: Basic Books, 2008. Print.

[Chicago] Sowell, Thomas. Economic Facts and Fallacies. New York: Basic Books, 2008.

[p. 3>] The creation of the United States of America is the greatest of all human adventures. No other national story holds such tremendous lessons, for the American people themselves and for the rest of mankind. It now spans four centuries and, as we enter the new millennium, we need to retell it, for if we can learn these lessons and build upon them, the whole of humanity will benefit in the new age which is now opening. American history raises three fundamental questions. First, can a nation rise above the injustices of its origins and, by its moral purpose and performance, atone for them? All nations are born in war, conquest, and crime, usually concealed by the obscurity of a distant past. The United States, from its earliest colonial times, won its title-deeds in the full blaze of recorded history, and the stains on them are there for all to see and censure: the dispossession of an indigenous people, and the securing of self-sufficiency through the sweat and pain of an enslaved race. In the judgmental scales of history, such grievous wrongs must be balanced by the erection of a society dedicated to justice and fairness. Has the United States done this? Has it expiated its organic sins? The second question provides the key to the first. In the process of nation-building, can ideals and altruism—the desire to build the perfect community—be mixed successfully with acquisitiveness and ambition, without which no dynamic society can be built at all? Have the Americans got the mixture right? Have they forged a nation where righteousness has the edge over the needful self-interest? Thirdly, the Americans originally aimed to build an other-worldly “City on a Hill,” but found themselves designing a republic of the people, to be a model for the entire planet. Have they made good their audacious claims? Have they indeed proved exemplars for humanity? And will they continue to be so in the new millennium?

We must never forget that the settlement of what is now the United States was only part of a larger enterprise. And this was the work of the best and the brightest of the entire European continent. They were greedy. As Christopher Columbus said, men crossed the Atlantic primarily in search of gold. But they were also idealists. These adventurous young men thought they could transform the world for the better. Europe was too small for them—for their energies, their ambitions, and [p. 4>] their visions. In the 11th, 12th, and 13th centuries, they had gone east, seeking to re-Christianize the Holy Land and its surroundings, and also to acquire land there. The mixture of religious zeal, personal ambition—not to say cupidity—and lust for adventure which inspired generations of Crusaders was the prototype for the enterprise of the Americas.

In the east, however, Christian expansion was blocked by the stiffening resistance of the Moslem world, and eventually by the expansive militarism of the Ottoman Turks. Frustrated there, Christian youth spent its ambitious energies at home: in France, in the extermination of heresy, and the acquisition of confiscated property; in the Iberian Peninsula, in the reconquest of territory held by Islam since the 8th century, a process finally completed in the 1490s with the destruction of the Moslem kingdom of Granada, and the expulsion, or forcible conversion, of the last Moors in Spain. It is no coincidence that this decade, which marked the homogenization of western Europe as a Christian entity and unity, also saw the first successful efforts to carry Europe, and Christianity, into the western hemisphere. As one task ended, another was undertaken in earnest.

The Portuguese, a predominantly seagoing people, were the first to begin the new enterprise, early in the 15th century. In 1415, the year the English King Henry V destroyed the French army at Agincourt, Portuguese adventurers took Ceuta, on the north African coast, and turned it into a trading depot. Then they pushed southwest into the Atlantic, occupying in turn Madeira, Cape Verde, and the Azores, turning all of them into colonies of the Portuguese crown. The Portuguese adventurers were excited by these discoveries: they felt, already, that they were bringing into existence a new world, though the phrase itself did not pass into common currency until 1494. These early settlers believed they were beginning civilization afresh: the first boy and girl born on Madeira were christened Adam and Eve. But almost immediately came the Fall, which in time was to envelop the entire Atlantic. In Europe itself, the slave-system of antiquity had been virtually extinguished by the rise of Christian society. In the 1440s, exploring the African coast from their newly acquired islands, the Portuguese rediscovered slavery as a working commercial institution. Slavery had always existed in Africa, where it was operated extensively by local rulers, often with the assistance of Arab traders. Slaves were captives, outsiders, people who had lost tribal status; once enslaved, they became exchangeable commodities, indeed an important form of currency.

[p. 5>] The Portuguese entered the slave-trade in the mid-15th century, took it over and, in the process, transformed it into something more impersonal, and horrible, than it had been either in antiquity or medieval Africa. The new Portuguese colony of Madeira became the center of a sugar industry, which soon made itself the largest supplier for western Europe. The first sugar-mill, worked by slaves, was erected in Madeira in 1452. This cash-industry was so successful that the Portuguese soon began laying out fields for sugar-cane on the Biafran Islands, off the African coast. An island off Cap Blanco in Mauretania became a slave-depot. From there, when the trade was in its infancy, several hundred slaves a year were shipped to Lisbon. As the sugar industry expanded, slaves began to be numbered in thousands: by 1550, some 50,000 African slaves had been imported into Sao Tome alone, which likewise became a slave entrepot. These profitable activities were conducted, under the aegis of the Portuguese crown, by a mixed collection of Christians from all over Europe—Spanish, Normans, and Flemish, as well as Portuguese, and Italians from the Aegean and the Levant. Being energetic, single young males, they mated with whatever women they could find, and sometimes married them. Their mixed progeny, mulattos, proved less susceptible than pure-bred Europeans to yellow fever and malaria, and so flourished. Neither Europeans nor mulattos could live on the African coast itself. But they multiplied in the Cape Verde Islands, 300 miles off the West African coast. The mulatto trading-class in Cape Verde were known as Lancados. Speaking both Creole and the native languages, and practicing Christianity spiced with paganism, they ran the European end of the slave-trade, just as Arabs ran the African end.

This new-style slave-trade was quickly characterized by the scale and intensity with which it was conducted, and by the cash nexus which linked African and Arab suppliers, Portuguese and Lancado traders, and the purchasers. The slave-markets were huge. The slaves were overwhelmingly male, employed in large-scale agriculture and mining. There was little attempt to acculturalize them and they were treated as body-units of varying quality, mere commodities. At Sao Tome in particular this modern pattern of slavery took shape. The Portuguese were soon selling African slaves to the Spanish, who, following the example in Madeira, occupied the Canaries and began to grow cane and mill sugar there too. By the time exploration and colonization spread from the islands across the Atlantic, the slave-system was already in place.

In moving out into the Atlantic islands, the Portuguese discovered [p. 6>] the basic meteorological fact about the North Atlantic, which forms an ocean weather-basin of its own. There were strong currents running clockwise, especially in the summer. These are assisted by northeast trade winds in the south, westerlies in the north. So seafarers went out in a southwest direction, and returned to Europe in a northeasterly one. Using this weather system, the Spanish landed on the Canaries and occupied them. The indigenous Guanches were either sold as slaves in mainland Spain, or converted and turned into farm-labourers by their mainly Castilian conquerors. Profiting from the experience of the Canaries in using the North Atlantic weather system, Christopher Columbus made landfall in the western hemisphere in 1492. His venture was characteristic of the internationalism of the American enterprise. He operated from the Spanish city of Seville but he came from Genoa and he was by nationality a citizen of the Republic of Genoa, which then ran an island empire in the Eastern Mediterranean. The finance for his transatlantic expedition was provided by himself and other Genoese merchants in Seville, and topped up by the Spanish Queen Isabella, who had seized quantities of cash when her troops occupied Granada earlier in the year.

The Spanish did not find American colonization easy. The first island-town Columbus founded, which he called Isabella, failed completely. He then ran out of money and the crown took over. The first successful settlement took place in 1502, when Nicolas de Ovando landed in Santo Domingo with thirty ships and no fewer than 2,500 men. This was a deliberate colonizing enterprise, using the experience Spain had acquired in its reconquista, and based on a network of towns copied from the model of New Castile in Spain itself. That in turn had been based on the bastides of medieval France, themselves derived from Roman colony-towns, an improved version of Greek models going back to the beginning of the first millennium BC. So the system was very ancient. The first move, once a beachhead or harbour had been secured, was for an official called the adelantado to pace out the street-grid. Apart from forts, the first substantial building was the church. Clerics, especially from the orders of friars, the Dominicans and Franciscans, played a major part in the colonizing process, and as early as 1512 the first bishopric in the New World was founded. Nine years before, the crown had established a Casa de la Contratación in Seville, as headquarters of the entire transatlantic effort, and considerable state funds were poured into the venture. By 1520 at least 10,000 Spanish-speaking Europeans were living on the island of Hispaniola in the [p. 7>] Caribbean, food was being grown regularly and a definite pattern of trade with Europeans had been established.

The year before, Hernando Cortes had broken into the American mainland by assaulting the ancient civilization of Mexico. The expansion was astonishingly rapid, the fastest in the history of mankind, comparable in speed with and far more exacting in thoroughness and permanency than the conquests of Alexander the Great. In a sense, the new empire of Spain superimposed itself on the old one of the Aztecs rather as Rome had absorbed the Greek colonies. Within a few years, the Spaniards were 1,000 miles north of Mexico City, the vast new grid-town which Cortes built on the ruins of the old Aztec capital, Tenochtitlan.

This incursion from Europe brought huge changes in the demography, the flora and fauna, and the economics of the Americas. Just as the Europeans were vulnerable to yellow fever, so the indigenous Indians were at the mercy of smallpox, which the Europeans brought with them. Europeans had learned to cope with it over many generations but it remained extraordinarily infectious and to the Indians it almost invariably proved fatal. We do not know with any certainty how many people lived in the Americas before the Europeans came. North of what is now the Mexican border, the Indians were sparse and tribal, still at the hunter-gatherer stage in many cases, and engaged in perpetual inter-tribal warfare, though some tribes grew corn in addition to hunting and lived part of the year in villages—perhaps one million of them, all told. Further south there were far more advanced societies, and two great empires, the Aztecs in Mexico and the Incas in Peru. In central and south America, the total population was about 20 million. Within a few decades, conquest and the disease it brought had reduced the Indians to 2 million, or even less. Hence, very early in the conquest, African slaves were in demand to supply labor. In addition to smallpox, the Europeans imported a host of welcome novelties: wheat and barley, and the ploughs to make it possible to grow them; sugarcanes and vineyards; above all, a variety of livestock. The American Indians had failed to domesticate any fauna except dogs, alpacas and llamas. The Europeans brought in cattle, including oxen for ploughing, horses, mules, donkeys, sheep, pigs and poultry. Almost from the start, horses of high quality, as well as first-class mules and donkeys, were successfully bred in the Americas. The Spanish were the only west Europeans with experience of running large herds of cattle on horseback, and this became an outstanding feature of the New World, where [p. 8>] enormous ranches were soon supplying cattle for food and mules for work in great quantities for the mining districts.

The Spaniards, hearts hardened in the long struggle to expel the Moors, were ruthless in handling the Indians. But they were persistent in the way they set about colonizing vast areas. The English, when they followed them into the New World, noted both characteristics. John Hooker, one Elizabethan commentator, regarded the Spanish as morally inferior "because with all cruel inhumanity … they subdued a naked and yielding people, whom they sought for gain and not for any religion or plantation of a commonwealth, did most cruelly tyrannize and against the course of all human nature did scorch and roast them to death, as by their own histories doth appear." At the same time the English admired "the industry, the travails of the Spaniard, their exceeding charge in furnishing so many ships … their continual supplies to further their attempts and their active and undaunted spirits in executing matters of that quality and difficulty, and lastly their constant resolution of plantation."

With the Spanish established in the Americas, it was inevitable that the Portuguese would follow them. Portugal, vulnerable to invasion by Spain, was careful to keep its overseas relations with its larger neighbor on a strictly legal basis. As early as 1479 Spain and Portugal signed an agreement regulating their respective spheres of trade outside European waters. The papacy, consulted, drew an imaginary longitudinal line running a hundred leagues west of the Azores: west of it was Spanish, east of it Portuguese. The award was made permanent between the two powers by the Treaty of Tordesillas in 1494, which drew the line 370 leagues west of Cape Verde. This gave the Portuguese a gigantic segment of South America, including most of what is now modern Brazil. They knew of this coast at least from 1500 when a Portuguese squadron, on its way to the Indian Ocean, pushed into the Atlantic to avoid headwinds and, to its surprise, struck land which lay east of the treaty line and clearly was not Africa. But their resources were too committed to exploring the African coast and the routes to Asia and the East Indies, where they were already opening posts, to invest in the Americas. Their first colony in Brazil was not planted till 1532, where it was done on the model of their Atlantic island possessions, the crown appointing 'captains,' who invested in land-grants called donatorios. Most of this first wave failed, and it was not until the Portuguese transported the sugar-plantation system, based on slavery, from Cape Verde and the Biafran Islands, to the part of Brazil they called Pernambuco, [p. 9>] that profits were made and settlers dug themselves in. The real development of Brazil on a large scale began only in 1549, when the crown made a large investment, sent over 1,000 colonists and appointed Martin Alfonso de Sousa governor-general with wide powers.
Thereafter progress was rapid and irreversible, a massive sugar industry grew up across the Atlantic, and during the last quarter of the 16th century Brazil became the largest slave-importing center in the world, and remained so. Over 300 years, Brazil absorbed more African slaves than anywhere else and became, as it were, an Afro-American territory. Throughout the 16th century the Portuguese had a virtual monopoly of the Atlantic slave trade. By 1600 nearly 300,000 African slaves had been transported by sea to plantations—25,000 to Madeira, 50,000 to Europe, 75,000 to Cape Sao Tome, and the rest to America. By this date, indeed, four out of five slaves were heading for the New World.

It is important to appreciate that this system of plantation slavery, organized by the Portuguese and patronized by the Spanish for their mines as well as their sugar-fields, had been in place, expanding steadily, long before other European powers got a footing in the New World. But the prodigious fortunes made by the Spanish from mining American silver, and by both Spanish and Portuguese in the sugar trade, attracted adventurers from all over Europe…

Paul Johnson, A History of the American People (New York, NY: Harper Perennial, 1997), 3-9.


[APA] Johnson, P. (1997). A History of the American People. New York, NY: Harper Perennial.

[MLA] Johnson, Paul. A History of the American People. New York: Harper Perennial, 1997. Print.

[Chicago] Johnson, Paul. A History of the American People. New York: Harper Perennial, 1997.

 

[p. 15>] Early Black Economic Achievement

The portrayal of blacks as helpless victims of slavery and later gross discrimination has become part of the popular wisdom. But the facts of the matter do not square with that portrayal.

Despite the brutal and oppressive nature of slavery, slaves did not quietly acquiesce. Many found ways to lessen slavery's hardships and attain a measure of independence. During colonial days, slaves learned skills and found that they could earn a measure of independence by servicing ships as rope makers, coopers, and shipwrights. Some entered more skilled trades, such as silversmithing, gold beating, and cabinetmaking.

Typically, slaves turned over a portion of their earnings to their owners in exchange for de facto freedom. This practice, called self-hire, generated criticism. "As early as 1733-34, a Charles Town, South Carolina, grand jury criticized slaveholders for allowing their slaves 'to work out by the Week,' and 'bring in a certain Hire,' which was 'not only Contrary to a Law subsisting, but a Great Inlet to Idleness, Drunkenness and other Enormities!' Later, a group of Virginia planters said, 'Many persons have suffered their slaves to go about to hire themselves and pay their masters for their hire,' and as a result 'certain' slaves lived free from their master's control." "Two ambitious Charles Town bricklayers, Tony and Primus, who spent their days building a church under the supervision of their master, secretly rented themselves to local builders at night and on weekends."

[p. 16>] Many slaves exhibited great entrepreneurial spirit despite their handicaps. Even slave women were often found growing and selling produce in the South Carolina and Georgia Low Country. After putting in a day's work, some slaves were allowed to raise their own crops and livestock. These efforts allowed them to gain a presence in much of the marketing network on the streets and docks of port cities. Ultimately, the South Carolina General Assembly passed a law requiring that slave-grown crops and livestock be sold only to the master. However, the law was very difficult to enforce, particularly among blacks who had gained knowledge of the marketplace. Market activity by slaves was so great that North Carolina whites mounted a campaign to stop slave "dealing and Trafficking" altogether. In 1741, that state passed a law prohibiting slaves from buying, selling, trading, or bartering "any Commodities whatsoever" or from raising hogs, cattle, or horses "on any Pretense whatsoever."

During the colonial period, some slaves bought their freedom and acquired property. In Virginia's Northampton County, 44 out of 100 blacks had gained their freedom by 1664, and some had become landowners. During the late eighteenth century, blacks could boast of owning land. James Pendarvis owned 3,250 acres in St. Paul's Parish in the Charleston District of South Carolina. Pendarvis also possessed 113 slaves. Cabinetmaker John Gough owned several buildings in Charleston and others in the coastal South. During the late eighteenth and early nineteenth centuries, free blacks in Charleston had established themselves as relatively independent from an economic standpoint. As early as 1819, they comprised thirty types of workers, including ten tailors, eleven carpenters, twenty-two seamstresses, six shoemakers, and one hotel owner. Thirty years later, there were fifty types, including fifty carpenters, forty-three tailors, nine shoemakers, and twenty-one butchers.

New Orleans had the largest population of free blacks in the Deep South. Though they could not vote, they enjoyed more rights than blacks in other parts of the South—such as the right to travel freely and to testify in court against white people. "They owned some $2 million worth of property and dominated skilled crafts like bricklaying, cigar making, carpentry, and shoe making." New Orleans blacks also created privately supported benevolent societies, schools, and orphanages to assist their impoverished brethren.

[p. 17>] Black entrepreneurs in New Orleans owned small businesses like liquor, grocery, and general stores capitalized with a few hundred dollars. There were also some larger businesses; for example, grocers like Francis Snaer, A. Blandin, and G. N. Ducroix were each worth over $10,000 ($209,000 in today's currency). One of the best-known black businesses was owned by Cecee Macarty, who inherited $12,000 and parlayed it into a business worth $155,000 at the time of her death in 1845. Another was Thorny Lafon, who started out with a small dry-goods store and later became a real estate dealer, amassing a fortune valued at over $400,000 ($8 million today) by the time he died. Black control of the cigar industry enabled men like Lucien Mansion and Georges Alces to own sizable factories, with Alces hiring as many as 200 men. Twenty-two black men listed themselves as factory owners in the New Orleans registry of free Negroes, though it is likely that most of these were one-man shops.

Pierre A. D. Casenave, an immigrant from Santo Domingo, was among New Orleans' more notable businessmen. Having inherited $10,000 as a result of being a confidential clerk of a white merchant-philanthropist, Casenave was in the "commission" business by 1853. By 1857, he was worth $30,000 to $40,000, and he had built an undertaking business, catering mostly to whites, that was worth $2 million in today's dollars.

Most free blacks in New Orleans were unskilled laborers. Males were employed on steamboats and as dockworkers and domestic servants, while females found work largely as domestic servants or washwomen. However, the ratio of skilled to unskilled workers among blacks was greater than among Irish and German workers. Indeed, free blacks dominated certain skilled crafts. According to J. D. B. DeBow, director of the 1850 census, in New Orleans that year there were 355 carpenters, 325 masons, 156 cigar makers, ninety-two shoemakers, sixty-one clerks, fifty-two mechanics, forty-three coopers, forty-one barbers, thirty-nine carmen, and twenty-eight painters.

In addition, there were free Negro blacksmiths (fifteen), butchers (eighteen), cabinetmakers (nineteen), cooks (twenty-five), overseers (eleven), ship carpenters (six), stewards (nine), and upholsterers (eight). Robert C. Reinders, a historian, says that DeBow may have exaggerated the data to show that New Orleans had more skilled blacks than elsewhere; however, other evidence points to free-black prominence in skilled trades—for [p. 18>] example, 540 skilled blacks signed a register to stay in the state between 1842 and 1861. Plus, travelers spoke of "Negro artisans being served by Irish waiters and free Negro masons with Irish hod carriers." A few black skilled workers were relatively prosperous. Peter Howard, a porter, and C. Cruisin, an engraver, were each worth between $10,000 and $20,000. A. Tescault, a bricklayer, owned personal and real property valued at nearly $40,000.

By the end of the antebellum era, there was considerable property ownership among slaves in both the Upper and Lower South. Many amassed their resources through the "task" (or "hiring-out") system. In Richmond and Petersburg, Virginia, slaves worked in tobacco factories and earned $150 to $200 a year, plus all expenses. By 1850, slave hiring was common in hemp manufacturing and in the textile and tobacco industries. In Richmond, 62 percent of the male slave force was hired; in Lynchburg, 52 percent; in Norfolk, more than 50 percent; and in Louisville, 24 percent. Across the entire South, at least 100,000 slaves were hired out each year.

Self-hiring was another practice with a long tradition. It benefited both the slave and the slave owner. The latter did not have to pay for the slave's lodging and clothing. Slaves, although obligated to pay their masters a monthly or yearly fee, could keep for themselves what they earned above that amount. Frederick Douglass explained that while employed as a Baltimore ship's caulker, "I was to be allowed all my time; to make bargains for work; to find my own employment, and collect my own wages; and in return for this liberty, I was to pay him [Douglass' master] three dollars at the end of each week, and to board and clothe myself, and buy my own calking [sic] tools." Self-hire, Douglass noted, was "another step in my career toward freedom."

Not every self-hire slave fared so well. Some were offered the prospect of buying themselves only to see the terms of the contract change. Slaves who earned larger sums than originally expected were required to pay the extra money to the master. Sometimes slaves who made agreements with their masters to pay a certain price for their freedom were sold shortly before the final payment was due.

So intense was the drive to earn money that some slaves were willing to work all day in the fields, then steal away under cover of darkness to work for wages, returning to the fields the next morning. Catahoula Parish (Louisiana) plantation owner John Liddell sought legal action, telling his [p. 19>] lawyer, “I request that you would forthwith proceed to prosecute John S. Sullivan of Troy, Parish of Catahoula, for Hiring four of my Negro men, secretly, and without my knowledge or permission, at midnight on the 12th of August last 1849 (or between midnight and day).”

So common was the practice of self-hire that historians have described the people so employed as "Quasi-Free Negroes" or "Slaves Without Masters." In 1802, a French visitor to New Orleans noticed "a great many loose negroes about." Officials in Savannah, Mobile, Charleston, and other cities talked about "nominal slaves," "quasi f.n. [free Negroes]," and "virtually free negroes," who were seemingly oblivious to any law or regulation. In the Upper South—Baltimore, Washington, Norfolk, Louisville, Richmond, and Lexington, Virginia, for example—large numbers of quasi-free slaves contracted with white builders as skilled carpenters, coopers, and mechanics, while the less skilled worked as servants, hack drivers, and barbers. The quasi-free individuals, more entrepreneurial, established market stalls where they traded fish, produce, and other goods with plantation slaves and sold various commodities to whites. Historian Ira Berlin said, in describing the pre-staple crop period in the Low Country of South Carolina, "The autonomy of the isolated cow pen and the freedom of movement stock raising allowed made a mockery of the total dominance that chattel bondage implied."

William Rosoe operated a small pleasure boat on the Chesapeake Bay. Ned Hyman, a North Carolina slave, amassed an estate "consisting of Lands chiefly, Live Stock, Negroes and money worth between $5,000 and $6,000 listed in his free Negro wife's name." Whites in his neighborhood said "he was a remarkable, uncommon Negro" and was "remarkably industrious, frugal & prudent…. In a word, his character as fair and as good—for honesty, truth, industry, humility, sobriety & fidelity—as any they (your memoralists) have ever seen or heard of."

Thomas David, a slave, owned a construction business in Bennettsville, South Carolina, where he built houses as well as "several larger buildings." He hired laborers, many of whom were slaves themselves, and taught them the necessary skills. This practice of slaves entering the market and competing successfully with whites became so prevalent that a group of the latter in New Hanover County, North Carolina, petitioned the state legislature to ban the practice. But despite statutes to the contrary, slaves continued to work as mechanics (as such workers were then called), contracting on [p. 20>] their own "sometimes less than one half the rate that a regular bred white Mechanic could afford to do it."

In Tennessee, it was illegal for a slave to practice medicine; however, “Doctor Jack” did so with “great & unparalleled success,” even though he was forced to give a sizable portion of his earnings to his owner, William Macon. After Macon died, Doctor Jack set up his practice in Nashville. Patients thought so much of his services that they appealed to the state legislature: “The undersigned citizens of Tennessee respectfully petition the Honourable Legislature of the State to repeal, amend or so modify the Act of 1831, chap. 103, S [ect]. 3, which prohibits Slaves from practicing medicine, as to exempt from its operation a Slave named Jack…”

Women were also found among slave entrepreneurs. They established stalls and small stores selling various products. They managed modest businesses as seamstresses, laundresses, and weavers. A Maryland slave recalled, “After my father was sold, my master gave my mother permission to work for herself, provided she gave him one half [of the profits].” She ran two businesses, a coffee shop at an army garrison, and a secondhand store selling trousers, shoes, caps, and other items. Despite protests by poor whites, she “made quite a respectable living.”

With the increasing number of self-hire and quasi-free blacks came many complaints and attempts at restricting their economic activities. In 1826, Georgia prohibited blacks from trading “any quantity or amount whatever of cotton, tobacco, wheat, rye, oats, corn, rice or poultry or any other articles, except such as are known to be usually manufactured or vended by slaves.” Tennessee applied similar restrictions to livestock. Virginia enacted legislation whereby an individual who bought or received any commodity from a slave would be given thirty-nine lashes “well laid on” or fined four times the value of the commodity.

Similar measures were enacted elsewhere. In addition to statutes against trading with slaves, there were laws governing master-slave relationships. North Carolina decreed in 1831 that a master who allowed a slave to "go at large as a freeman, exercising his or her own discre[t]ion in the employment of his or her time… shall be fined in the discretion of the court." In 1835, the North Carolina General Assembly enacted a measure "for the better regulation of the slave labourers in the town and Port of Wilmington…. That if any slave shall hereafter be permitted to go at large, and make his own contracts to work, and labour in said town, by [p. 21>] consent, and with the knowledge of his or her owner or master, the owner of the said slave shall forfeit and pay one hundred dollars . . . said slave shall receive such punishment as said commissioners or town magistrate shall think proper to direct to be inflicted, not exceeding twenty-five lashes."

Similar statutes were enacted in most slave states. In the 1830s, a South Carolina court of appeals ruled as follows: "if the owner without a formal act of emancipation permit his slave to go at large and to exercise all the rights and enjoy all the privileges of a free person of color, the slave becomes liable to seizure as a derelict."

A New Orleans newspaper, the Daily Picayune, complained that hired-out slaves had the liberty "to engage in business on their own account, to live according to the suggestions of their own fancy, to be idle or industrious, as the inclination for one or the other prevailed, provided only the monthly wages are regularly gained." In 1855, Memphis' Daily Appeal demanded the strengthening of an ordinance prohibiting slaves from hiring themselves out without a permit. One citizen complained that "to permit the negro to hire his own time sends a slave to ruin as property, debauches a slave, and makes him a strolling agent of discontent, disorder, and immorality among our slave population."

Much of the restrictive legislation was prompted or justified by the charge that some slaves were trafficking in stolen goods. But there was also concern that the self-hired and quasi-free would undermine the slavery system itself by breeding discontent and rebellion among slaves in general. Despite all the legal prohibitions, the self-hire and quasi-free practices prospered and expanded. Some slave owners who had sired children felt that, although they might not set those offspring free, they would allow them to be quasi-free and to own property. Other owners considered it simply sound policy to permit slaves a degree of freedom as a reward for good work. Even owners with a strong ideological commitment to the institution of slavery found it profitable to permit self-hire, particularly for their most talented and trusted bondsmen.

By the 1840s and ’50s, many masters were earning good returns on slaves who found employment in Baltimore, Nashville, St. Louis, Savannah, Charleston, and New Orleans. In 1856, white builders in Smithfield, North Carolina, complained that they were being underbid by quasi-free blacks in the construction of houses and boats, and criticized white contractors who pursued such hiring practices. Whites in the Sumter District [p. 22>] of South Carolina protested that “The law in relation to Slaves hiring their own time is not enforced with sufficient promptness and efficiency as to accomplish the object designed by its enactment.”

The fact that self-hire became such a large part of slavery simply reflects the economics of the matter. Faced with fluctuating demands for the labor of slaves, it sometimes made sense for owners to let a slave hire himself out rather than sit idle, in return for a portion of his outside earnings. Slaves favored hiring out because it gave them a measure of freedom; it also provided some income to purchase goods that would be otherwise unattainable.

Free Blacks in the North

Free blacks played a significant economic role in northern cities. In 1838, a pamphlet titled “A Register of Trades of Colored People in the City of Philadelphia and Districts” listed fifty-seven different occupations totaling 656 persons: bakers (eight), blacksmiths (twenty-three), brass founders (three), cabinetmakers and carpenters (fifteen), confectioners (five), and tanners (thirty-one). Black females engaged in businesses were also included in the register: dressmakers and tailoresses (eighty-one), dyers and scourers (four), and cloth fullers and glass/papermakers (two each).

Philadelphia was home to several very prosperous black businesses. Stephen Smith and William Whipper had one of the largest wood and coal yards in the city. As an example of the size of their business, they had, in 1849, “several thousand bushels of coal, 250,000 feet of lumber, 22 merchantmen cars running between Philadelphia and Baltimore, and $9,000 worth of stock in the Columbia bridge.” At his death, Smith left an estate worth $150,000; he had earlier given an equal amount to establish the Home for the Aged and Infirm Colored Persons in Philadelphia and had also donated the ground for the Mount Olive Cemetery for Colored People.

Another prosperous enterprise among early Philadelphia blacks was sail-making. Nineteen black sail-making businesses were recorded in the 1838 Register. James Forten (1766-1841), the most prominent of them, employed forty black and white workers in his factory in 1829. Stephen Smith was another black entrepreneur, a lumber merchant who was [p. 23>] grossing $100,000 annually in sales by the 1850s. By 1854, Smith’s net worth was estimated at $500,000, earning him a credit entry as the “King of the Darkies w. 100m. [with $100,000].”

Blacks dominated Philadelphia’s catering business. Peter Augustine and Thomas Dorsey were the most prominent among them. Both men earned worldwide fame for their art, with Augustine often sending his terrapin as far away as Paris. Robert Bogle was a waiter who conceived of the catering idea in Philadelphia by contracting formal dinners for those who entertained in their homes. Nicholas Biddle, a leading Philadelphia financier and president of the Bank of the United States, honored him by writing an “Ode to Ogle [sic].” Philadelphia blacks “…owned fifteen meeting houses and burial grounds adjacent, and one public hall.” Their real estate holdings were estimated at $600,000 ($12 million today) and their personal property at more than $677,000. Henry and Sarah Gordon, two other black caterers, became so prosperous that they were able to contribute $66,000 to the Home for the Aged and Infirm Colored Persons.

Blacks made their business presence felt in other northern cities as well. In 1769, ex-slave Emmanuel established Providence, Rhode Island’s first oyster-and-ale house. In New York, Thomas Downing operated a successful restaurant to serve his Wall Street clientele before facing competition from two other blacks, George Bell and George Alexander, who opened similar establishments nearby. In 1865, Boston’s leading catering establishment was owned and operated by a black. Thomas Dalton, also of Boston, was the proprietor of a prosperous clothing store valued at a half-million dollars at the time of his death. John Jones of Chicago, who owned one of the city’s leading tailoring establishments, left behind a fortune of $100,000.

Most blacks, of course, labored at low-skilled tasks. They nonetheless encountered opposition from whites. When the two races competed, or threatened to do so, violence often resulted. A commission looking into the causes of the 1834 Philadelphia riot concluded as follows:

An opinion prevails, especially among white laborers, that certain portions of our community, prefer to employ colored people, whenever they can be had, to the employing of white people; and in consequence of this preference, many whites, who are able and willing to work, are left [p. 24>] without employment, while colored people are provided with work, and enabled comfortably to maintain their families; thus many white laborers, anxious for employment, are kept idle and indigent. Whoever mixed in the crowds and groups, at the late riots, must so often have heard those complaints, as to convince them, that . . . they… stimulated many of the most active among the rioters.

Racism and the fear of similar violence prompted New York City authorities to refuse licenses to black carmen and porters, warning, “it would bring them into collision with white men of the same calling, and they would get their horses and carts ‘dumped’ into the dock and themselves abused and beaten.”

The growth of the black labor force, augmented by emancipated and fugitive slaves, also contributed to white fears of black competition. In 1834, a group of Connecticut petitioners declared:

The white man cannot labor upon equal terms with the negro. Those who have just emerged from the state of barbarism or slavery have few artificial wants. Regardless of the decencies of life, and improvement of the future, the black can afford to offer his services at lower prices than the white man.

The petitioners warned the legislature that if entry restrictions were not adopted, the (white) sons of Connecticut would be soon driven from the state by black porters, truckmen, sawyers, mechanics, and laborers of every description.

For their part, blacks soon faced increased competition from the nearly five million Irish, German, and Scandinavian immigrants who reached our shores between 1830 and 1860. Poverty-stricken Irish crowded into shantytowns and sought any kind of employment, regardless of pay and work conditions. One black observer wrote:

These impoverished and destitute beings, transported from transatlantic shores are crowding themselves into every place of business and of labor, and driving the poor colored American citizen out. Along the wharves, where the colored man once done the whole business of shipping and [p. 25>] unshipping—in stores where his services were once rendered, and in families where the chief places were filled by him, in all these situations there are substituted foreigners or white Americans.

Irish immigrants did not immediately replace black workers, because employers initially preferred black “humility” to Irish “turbulence.” “Help Wanted” ads often read like this one in the New York Herald of May 13, 1853: “A Cook, Washer and Ironer: who perfectly understands her business; any color or country except Irish.” The New York Daily Sun (May 11, 1853) carried: “Woman Wanted—To do general housework… English, Scotch, Welsh, German, or any country or color will answer except Irish.” The New York Daily Tribune, on May 14, 1852, advertised: “Coachman Wanted—A Man who understands the care of horses and is willing to make himself generally useful, on a small place six miles from the city. A colored man preferred. No Irish need apply.”

Indicative of racial preferences was the fact that, in 1853, black waiters in New York earned more than their white counterparts: $16 per month compared to $12. To increase their bargaining power and to dupe their white counterparts out of jobs, black waiters tricked them into striking for $18 per month. When the strike ended, only the best white waiters were retained; the rest were replaced by blacks.

The mid-nineteenth century saw the early growth of the labor union movement. As I will discuss in more detail in a later chapter, the new unions directed considerable hostility at blacks and often excluded them from membership. When New York longshoremen struck in 1855 against wage cuts, black workers replaced them and violent clashes ensued. The Frederick Douglass Paper expressed little sympathy for white strikers: “[C]olored men can feel no obligation to hold out in a ‘strike’ with the whites, as the latter have never recognized them.”

Abolitionist William Lloyd Garrison and many of his followers had similarly little sympathy with white attempts to form labor unions. They felt that employer desire for profit would override racial preferences. Garrison declared, “Place two mechanics by the side of each other, one colored and one white, he who works the cheapest and the best will get the most custom. In making a bargain, the color of the man will never be consulted.” Demonstrating an economic understanding that’s lost on [p. 26>] many of today’s black advocates, abolitionists urged blacks to underbid white workers rather than to combine with them. New England Magazine remarked:

After all the voice of interest is louder, and speaks more to the purpose, than reason or philanthropy. When a black merchant shall sell his goods cheaper than his white neighbor, he will have the most customers…. When a black mechanic shall work cheaper and better than a white one, he will be more frequently employed.

During this period, black leadership exhibited a vision not often observed today, namely, lowering the price of goods or services is one of the most effective tools to compete. At a black convention in 1848, it was declared, “To be dependent is to be degraded. Men may pity us, but they cannot respect us.” Black conventions repeatedly called upon blacks to learn agricultural and mechanical pursuits, to form joint-stock companies, mutual savings banks, and county associations in order to pool resources to purchase land and capital. In 1853, Frederick Douglass warned, “Learn trades or starve!”

Many blacks absorbed the lessons of competition. Virginia’s Robert Gordon sold slack (fine screenings of coal) from his white father’s coal yard, making what was then a small fortune of $15,000. By 1846, Gordon had purchased his freedom and moved to Cincinnati, where he invested those earnings in a coal yard and built a private dock on the waterfront. White competitors tried to run him out of business through ruthless price-cutting. Gordon cleverly responded by hiring fair-complexioned mulattos to purchase coal from price-cutting competitors, then used that coal to fill his own customers’ orders. Gordon retired in 1865, invested his profits in real estate, and eventually passed his fortune to his daughter.

While still a slave, Frank McWorter set up a saltpeter factory in Kentucky’s Pulaski County at the start of the War of 1812. After the war, he expanded his factory to meet the growing demand for gunpowder by westward-bound settlers. As a result of his enterprise, McWorter purchased his wife’s freedom in 1817 and his own in 1819 for a total cost of $1,600.

Born a slave in Kentucky, Junius G. Graves went to Kansas in 1879. He worked on a farm for forty cents a day and by 1884 had amassed the sum of $2,200. Six years later, he owned 500 acres of land valued at $100,000.

[p. 27>] “Because of his success in producing a greater-than-average yield of potatoes per acre and because of his being the largest individual grower of potatoes, he was called ‘The Negro Potato King.’”

Other examples of nineteenth-century black enterprise abound: William W. Browne founded the first black bank in Virginia; H. C. Haynes invented the Haynes Razor Strop in Chicago; A. C. Howard manufactured shoe polish (7,200 boxes per day) in Chicago.

[….]

[p. 29>] The relative color blindness of the market accounts for much of the hostility towards it. Markets have a notorious lack of respect for privilege, race, and class structures. White customers patronized black-owned businesses because their prices were lower or their product quality or service better. Whites hired black skilled and unskilled labor because their wages were lower or they made superior employees.

Walter E. Williams, Race & Economics: How Much Can Be Blamed On Discrimination? (Stanford, CA: Hoover Institution Press, 2011), 15-27, 29.


[APA] Williams, W.E. (2011). Race & Economics: How Much Can Be Blamed On Discrimination? Stanford, CA: Hoover Institution Press.

[MLA] Williams, Walter E. Race & Economics: How Much Can Be Blamed On Discrimination? Stanford: Hoover Institution Press, 2011. Print.

[Chicago] Williams, Walter E. Race & Economics: How Much Can Be Blamed On Discrimination? Stanford: Hoover Institution Press, 2011.


IN  TEXT  CITATIONS  PER  STYLE


Short quotations

If you are directly quoting from a work, you will need to include the author, year of publication, and the page number for the reference (preceded by “p.”). Introduce the quotation with a signal phrase that includes the author’s last name followed by the date of publication in parentheses.

According to Jones (1998), “Students often had difficulty using APA style, especially when it was their first time” (p. 199).

Jones (1998) found “students often had difficulty using APA style” (p. 199); what implications does this have for teachers?

If the author is not named in a signal phrase, place the author’s last name, the year of publication, and the page number in parentheses after the quotation.

She stated, “Students often had difficulty using APA style” (Jones, 1998, p. 199), but she did not offer an explanation as to why.

Long quotations

Place direct quotations of 40 words or longer in a free-standing block of typewritten lines, and omit quotation marks. Start the quotation on a new line, indented 1/2 inch from the left margin, i.e., in the same place you would begin a new paragraph. Type the entire quotation on the new margin, and indent the first line of any subsequent paragraph within the quotation 1/2 inch from the new margin. Maintain double-spacing throughout. The parenthetical citation should come after the closing punctuation mark.

Jones’s (1998) study found the following:

Students often had difficulty using APA style, especially when it was their first time citing sources. This difficulty could be attributed to the fact that many students failed to purchase a style manual or to ask their teacher for help. (p. 199)

[….]

More Than One Work by the Same Author

If you are citing more than one work by the same author, include enough information so that your reader can differentiate between them. For instance, if you have used two studies by the same authors (from different years), you simply need to include their dates of publication:

(Jones, Crick, & Waxson, 1989); (Jones, Crick, & Waxson, 1998)

or, if you are citing both at once:

(Jones, Crick, & Waxson, 1989, 1998)

If you are citing more than one work from the same year, use the suffixes “a,” “b,” “c” etc., so that your reader can differentiate between them (these suffixes will correspond to the order of entries in your references page):

(Jones, Crick, & Waxson, 1999a); (Jones, Crick, & Waxson, 1999b)

Multiple Authors Cited Together

Order the authors in alphabetical order by last name. Semicolons are used to differentiate between the entries:

(Heckels, 1996; Jones, 1998; Stolotsky, 1992)

In-text citations: Author-page style

MLA format follows the author-page method of in-text citation. This means that the author’s last name and the page number(s) from which the quotation or paraphrase is taken must appear in the text, and a complete reference should appear on your Works Cited page. The author’s name may appear either in the sentence itself or in parentheses following the quotation or paraphrase, but the page number(s) should always appear in the parentheses, not in the text of your sentence. For example:

Wordsworth stated that Romantic poetry was marked by a “spontaneous overflow of powerful feelings” (263).

Romantic poetry is characterized by the “spontaneous overflow of powerful feelings” (Wordsworth 263).

Wordsworth extensively explored the role of emotion in the creative process (263).

Both citations in the examples above, (263) and (Wordsworth 263), tell readers that the information in the sentence can be located on page 263 of a work by an author named Wordsworth. If readers want more information about this source, they can turn to the Works Cited page, where, under the name of Wordsworth, they would find the following information:

Wordsworth, William. Lyrical Ballads. London: Oxford UP, 1967. Print.

[….]

Citing two books by the same author:

Murray states that writing is “a process” that “varies with our thinking style” (Write to Learn 6). Additionally, Murray argues that the purpose of writing is to “carry ideas and information from the mind of one person into the mind of another” (A Writer Teaches Writing 3).

Additionally, if the author’s name is not mentioned in the sentence, you would format your citation with the author’s name followed by a comma, followed by a shortened title of the work, followed, when appropriate, by page numbers:

Visual studies, because it is such a new discipline, may be “too easy” (Elkins, “Visual Studies” 63).

[….]

In-text Citations

When citing two different books by the same author in the body of the text, include the author’s last name, then a comma, and then the first word or several words of the title of the book in quotation marks, followed by the page number. For example, if you were citing two books by John Smith, you might do the following:

It’s important to recognize your strengths and talents. Knowing what you are good at is what moves you from “mediocre to masterful” (Smith, “Passions” 45). Ultimately, your success is based on how you feel about what you’re doing. Studies show that those who are passionately invested in their work make three times more money than those who dread their work (Smith, “Work Week” 75).

An excerpt from a sentence in the text of a paper written using the author-date style would look like this:

While some assert that the essential qualities a politician must possess are, “passion, a feeling of responsibility, and a sense of proportion” (Weber 1946, 33), others think that …

A William F. Buckley Quote Regarding Dante’s Inferno

In a conversation with someone many years ago, I was reminded of a quote that connects to some current issues in the gay community.


Ha, Ha! You just brought to my memory a portion of a book I read from one of my top-twenty authors. Dinesh D’Souza wrote of his early years on the campus of Dartmouth, which was the breeding ground for today’s conservatism (The Dartmouth Review). Speaking of the politically incorrect atmosphere, a professor (Hart) he had, and a contest:

“In part because of his political incorrectness, Hart was one of the few people I have met whose jokes made people laugh out loud. His sense of humor can be illustrated by a contest that National Review privately held among its editors following the publication of the controversial Bill Buckley column on the issue of AIDS. People were debating whether AIDS victims should be quarantined as syphilis victims had been in the past. Buckley said no: The solution was to have a small tattoo on their rear ends to warn potential partners. Buckley’s suggestion caused a bit of a public stir, but the folks at National Review were animated by a different question: What should the tattoo say? A contest was held, and when the entries were reviewed, the winner by unanimous consent was Hart. He suggested the lines emblazoned on the gates to Dante’s Inferno: ‘Abandon all hope, ye who enter here’.”

Dinesh D’Souza, Letters to a Young Conservative: Art of Mentoring (New York, NY: Basic Books, 2002), 20.


This is connected more seriously with this post: Another Example of Science Being Politicized by the Left ~ AIDS/HIV

Divine Feet ~ Philosophical Demarcations

Atheists reject evidence as illusory…

Why?

Because they “have to.”

I put these two ideas from separate fields of study together. Why I didn’t before is a mystery… but as with any field of study, you can go over the same topic again-and-again — you continue to learn. The first examples come from biology and the natural sciences. Here are a few examples of the beginning of my thinking:

  • “The illusion of design is so successful that to this day most Americans (including, significantly, many influential and rich Americans) stubbornly refuse to believe it is an illusion. To such people, if a heart (or an eye or a bacterial flagellum) looks designed, that’s proof enough that it is designed.” ~ Richard Dawkins in the Natural History Magazine;
  • “So powerful is the illusion of design, it took humanity until the mid-19th century to realize that it is an illusion.” ~ New Scientist Magazine (h/t, Uncommon Dissent)
  • “Biology is the study of complicated things that give the appearance of having been designed for a purpose.” Richard Dawkins enlarges on this thought: “We may say that a living body or organ is well designed if it has attributes that an intelligent and knowledgeable engineer might have built into it in order to achieve some sensible purpose… any engineer can recognize an object that has been designed, even poorly designed, for a purpose, and he can usually work out what that purpose is just by looking at the structure of the object.” ~ Richard Dawkins, The Blind Watchmaker, 1996, pp. 1 and 21.
  • “We can’t make sense of an organ like the eye without considering it to have a function, or a purpose – not in a mystical, teleological sense, but in the sense of an illusion of engineering. That illusion, we now know, is a consequence of Darwin’s process of natural selection. Everyone agrees that the eye is a remarkable bit of natural “engineering,” and that may now be explained as a product of natural selection rather than as the handiwork of a cosmic eye-designer or as a massive coincidence in tissue formation.” ~ Steven Pinker, via Edge’s “Is Science Killing the Soul.”

The important point here is that the Judeo-Christian [theistic] view would posit that we (and nature) are designed, and would expect to notice design in ourselves and in nature. The atheist MUST reject design as an illusion because their worldview demands that chance cobbled together what we see… so what looks like design has to be written off as dumb luck.

Steven Pinker’s summation:

Pinker’s newer book, The Blank Slate, revised his views on free will, in that he no longer thinks it’s a necessary fiction. The chapter on “The Fear of Determinism” takes an explicitly deterministic stance, and usefully demonstrates the absurdity of contra-causal free will and why we shouldn’t worry about being fully caused creatures. However, Pinker remains conservative in not drawing any conclusions about how not having free will might affect our attitudes towards punishment, credit, and blame; that is, he doesn’t explore the implications of determinism for ethical theory. This, despite the fact that in How the Mind Works he claimed that “ethical theory requires idealizations like free, sentient, rational, equivalent agents whose behavior is uncaused” … We await further progress by Pinker. (Via Naturalism)

Daniel Dennett:

Dennett worries that there is good evidence that promulgating the idea that free will is an illusion undermines just that sense of responsibility many scientists and philosophers are worried about losing. Critics maintain that Dennett’s kind of free will, with its modest idea of “enough” responsibility, autonomy and control, is not really enough after all.

[….]

“It’s important because of the longstanding tradition that free will is a prerequisite for moral responsibility,” he says. “Our system of law and order, of punishment, and praise and blame, promise keeping, promise making, the law of contracts, criminal law – all of this depends on one notion or another of free will. And then you have neuroscientists, physicists and philosophers saying that ‘science has shown us that free will is an illusion’ and then not shrinking from the implication that our systems of law are built on foundations of sand.” (Via The Guardian)

Richard Dawkins, Lawrence Krauss, Christopher Hitchens:

Sam Harris:

Stephen Hawking:

One of the most intriguing aspects of a lecture Ravi Zacharias attended, entitled “Determinism – Is Man a Slave or the Master of His Fate,” given by Stephen Hawking, the Lucasian Professor of Mathematics at Cambridge (Isaac Newton’s chair), was Hawking’s admission that the question is whether “we are the random products of chance, and hence, not free, or whether God had designed these laws within which we are free.”[1] In other words, do we have the ability to make choices, or do we simply follow a chemical reaction induced by millions of mutational collisions of free atoms? Michael Polanyi mentions that this “reduction of the world to its atomic elements acting blindly in terms of equilibrations of forces,” a belief that has prevailed “since the birth of modern science, has made any sort of teleological view of the cosmos seem unscientific…. [to] the contemporary mind.”[2]

[1] Ravi Zacharias, The Real Face of Atheism (Grand Rapids, MI: Baker Books, 2004), 118, 119.

[2] Michael Polanyi and Harry Prosch, Meaning (Chicago, IL: University of Chicago Press, 1977), 162.

The bottom line is that free will, the self, and the freedom to stand above our actions and distinguish between them are all an illusion.

Why?

BECAUSE if free-will existed… then this would be an argument f-o-r theism. F-o-r God’s existence. Consider Robert Jastrow, founding director of NASA’s Goddard Institute for Space Studies, and his description in his book of a disturbing reaction among his colleagues to the big-bang theory—irritation and anger.

Why, he asked, would scientists, who are supposed to pursue truth and not have an emotional investment in any evidence, be angered by the big-bang theory?

They had an aversion to the Big-Bang.

Because it argued F-O-R theism. F-O-R God’s existence.

Jastrow noted that many scientists do not want to acknowledge anything that may even suggest the existence of God. The big-bang theory, by positing a beginning of the universe, suggests a creator and therefore annoys many astronomers.

This anti-religious bias is hardly confined to astronomers.

As we see, the above persons, in rejecting evidence of design in nature and consciousness, are doing so based on an aversion to “God evidence.” Another well-known philosopher, John Searle, notes this illusion as well:

All these people are misusing science and remaking it into “scientism.” AND, they are “not allowing a divine foot in the door,” as Dinesh D’Souza notes:

Scientism, materialism, empiricism, existentialism, naturalism, and humanism – whatever you want to call it… it is still a metaphysical position, as it assumes or presumes certain things about the entire universe. D’Souza points out this a priori commitment:

Naturalism and materialism are not scientific conclusions; rather, they are scientific premises. They are not discovered in nature but imposed upon nature. In short, they are articles of faith. Here is Harvard biologist Richard Lewontin: “We take the side of science in spite of the patent absurdity of some of its constructs, in spite of its failure to fulfill many of its extravagant promises of health and life, in spite of the tolerance of the scientific community for unsubstantiated just-so stories, because we have [an] a priori commitment… a commitment to materialism. It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is an absolute, for we cannot allow a Divine Foot in the door.”

Dinesh D’Souza, What’s So Great about Christianity (Washington, DC: Regnery Publishing, 2007), 161 (emphasis added).

“Minds fit into a theistic world, not an atheistic one”

What are intentional states of consciousness? Are states of consciousness plausible on either a theistic or atheistic worldview? This clip shows the exchange between Dr William Lane Craig and Dr Alex Rosenberg on intentional states of consciousness in the world. On February 1st, 2013 at Purdue University, Dr Craig participated in a debate with Dr Rosenberg on the topic, “Is Faith In God Reasonable?” Over 5,000 people watched the event on the Purdue University campus along with tens of thousands streaming it live online from around the world.

For more on this, see my “quotefest” here: Evolution Cannot Account for: Logic, Reasoning, Love, Truth, or Justice

Dinesh D’Souza vs. the “Christian” Hitler (Nuremberg Day 28)

In Mein Kampf, he presented a social Darwinist view of life, life as a struggle, and presented national socialism as an antidote to both Judaism and communism. His party attempted to develop a new form of religion with elements of de-Judaised Christianity infused with German and Nordic pagan myths, but this was resisted by the Christians. ~ Professor Thies

  • “I freed Germany from the stupid and degrading fallacies of conscience and morality…. We will train young people before whom the world will tremble. I want young people capable of violence — imperious, relentless and cruel.” ~ Hitler

On a plaque hung on the wall at Auschwitz (Ravi Zacharias, Can Man Live Without God, p. 23)

  • “The stronger must dominate and not mate with the weaker, which would signify the sacrifice of its own higher nature.  Only the born weakling can look upon this principle as cruel, and if he does so it is merely because he is of a feebler nature and narrower mind; for if such a law [natural selection] did not direct the process of evolution then the higher development of organic life would not be conceivable at all….  If Nature does not wish that weaker individuals should mate with the stronger, she wishes even less that a superior race should intermingle with an inferior one; because in such a case all her efforts, throughout hundreds of thousands of years, to establish an evolutionary higher stage of being, may thus be rendered futile.” ~ Hitler

Adolf Hitler, Mein Kampf, translator/annotator, James Murphy (New York: Hurst and Blackett, 1942), pp. 161-162.

  • “Everything I have said and done in these last years is relativism by intuition….  If relativism signifies contempt for fixed categories and men who claim to be bearers of an objective, immortal truth… then there is nothing more relativistic than fascistic attitudes and activity….  From the fact that all ideologies are of equal value, that all ideologies are mere fictions, the modern relativist infers that everybody has the right to create for himself his own ideology and to attempt to enforce it with all the energy of which he is capable.” ~ Mussolini

Mussolini, Diuturna (1924) pp. 374-77, quoted in A Refutation of Moral Relativism: Interviews with an Absolutist (Ignatius Press; 1999), by Peter Kreeft, p. 18.

The Above Video Description:

Nuremberg Day 28 Church Suppression

Colonel Leonard Wheeler, Assistant American Trial Counsel, on Jan. 7, 1946, submitted the case regarding the Oppression of the Christian Churches and other Religious Groups in Germany and the Occupied Countries. He stated that the Nazi conspirators found the Christian churches to be an “obstacle to their complete domination of the German people and contrary to their master race dogma”.

The Indictment charged that “the Nazi conspirators, by promoting beliefs and practices incompatible with Christian teaching, sought to subvert the influence of the churches over the people and in particular the youth of Germany”.

For further information, see www.roberthjackson.org


Dinesh D’Souza’s “Atrocities Compared” Zinger

The above took place at Caltech on December 9th, 2007 ~ between Dinesh D’Souza and Michael Shermer. I am posting this anew for a discussion I am in where I received the following challenge:

  • So you deny inquisitions which really happened?

I respond:

One doesn’t have to deny something in order to k-n-o-w the truth about it. Firstly, I bet you do not know this, but the inquisitions were started to stop “kangaroo courts” from sentencing people to death. Secondly, here are some stats you probably are not aware of. And this isn’t to belittle you… reading history is a hobby of mine. Here they are:

✦ The Inquisition was originally welcomed to bring order to Europe because states saw an attack on the state’s faith as an attack on the state as well.
✦ The Inquisition technically had jurisdiction only over those professing to be Christians.
✦ The courts of the Inquisition were extremely fair compared to their secular counterparts at the time.
✦ The Inquisition was responsible for less than 100 witch-hunt deaths, and was the first judicial body to denounce the trials in Europe.
✦ Though torture was commonly used in all the courts of Europe at the time, the Inquisition used torture very infrequently.
✦ During the 350 years of the Spanish Inquisition, between 3,000 and 5,000 people were sentenced to death (about 1 per month).
✦ The Church executed no one.

See more at my site: http://religiopoliticaltalk.com/spanish-inquisition/

There was a change of subject immediately following the above posted historical facts/items, in response to which I asked for the following minimal admission:

Before I go on… I must ask for some honesty. And it requires, at a minimum, an admission from you. The reason is that I do not mind talking about topics in a dialogue fashion. But oftentimes I find people move from one topic to the next, never slowing down to let new information sink in. And so the learning process is hampered in favor of trying to “win” a point or position held for many years. [In many cases, wrongly.]

So while I have a decent-sized home library and love [to discuss] politics, religion, philosophy, science, history, and theology… I also do not want to waste my time in conversations where people do not add new items of understanding to their thinking. In your case, I think it is the historical bullet points to follow that would offer a modicum of reasonableness allowing me to continue.

At the minimum would you say that you did not know that the Inquisitions…

➤ killed 0.769 people a month [yes, that is a point] over its 350-year period;
➤ that it was primarily secular;
➤ and was implemented to stop kangaroo courts.

If you would a-t l-e-a-s-t admit that you thought these were different, but now can see that maybe, just maybe, you heard this information through word-of-mouth and just ran with it instead of testing your own position against the panoply of history… then we can continue the conversation. (Pride is an S.O.B.)

What Does It Mean To Be a “Super Mexican”? SooperMexican Tells Us

Not via amnesty Mexicans, or La Raza Mexicans, or Lazy Mexicans. Via SOOPERMEXICAN!

Sooper says: “We’ll be rolling out new videos with Dinesh D’Souza, so if there’s anything you want to be sooper-esplained let me know in the comments, and for your mexy reparations I demand you share and tweet this video!”

Taking Physicist Stephen Barr to Task Over St. Augustine (Edited)

(Originally posted in February of 2011) In a recent interview of physicist Stephen Barr (Professor of Particle Physics at the Bartol Research Institute and the Department of Physics and Astronomy at the University of Delaware) by Dinesh D’Souza (President of King’s College, as well as a favorite author of mine), what was otherwise a good interview and overview of philosophical naturalism’s metaphysical positions, in contradistinction to the metaphysical outlook of true science and religion, took a historical turn for the worse when Augustine was used as a defense in the “old-earth/young-earth” debate.


(Thank you for Uncommon Descent’s link to my story!)


In this next portion you will hear the part of the interview I wish to weigh in on. We pick up the conversation as it comes back from the break:

The problem with Dr. Barr’s summation is that he has failed to take into account that people’s views on matters change over time. For instance, R.C. Sproul (evangelical scholar, professor, and President of Ligonier Ministries) mentioned that through most of his teaching career he accepted the old-earth position. However, late in his career he changed his position to that of the young-earth creationists.

For most of my teaching career, I considered the framework hypothesis to be a possibility. But I have now changed my mind. I now hold to a literal six-day creation, the fourth alternative and the traditional one. Genesis says that God created the universe and everything in it in six twenty-four–hour periods. According to the Reformation hermeneutic, the first option is to follow the plain sense of the text. One must do a great deal of hermeneutical gymnastics to escape the plain meaning of Genesis 1–2. The confession makes it a point of faith that God created the world in the space of six days. [emphasis in original, indicating these words are part of the Confession] (pp. 127–128).[1]

Similarly, Augustine, early in his life, was very allegorical[2] in his attempt to interpret and define Scripture and events in it. Later however, he changed his position in much the same way Dr. Sproul did. Therefore, to quote Sproul or Augustine as old-earth creationists supporting the views of professor Barr would not do the position justice.

As his theology matured, Augustine abandoned his earlier allegorizations of Genesis that old-earth creationists and theistic evolutionists have latched onto in an attempt to justify adding deep time to the Bible. Furthermore, he always believed in a young earth (painting by Sandro Botticelli, c. 1480)

An example of Augustine’s allegorical uses comes from the journal Church History by way of Mervin Monroe Deems (Ph.D., past Samuel Harris Lecturer on Literature and Life at Bangor Theological Seminary, Maine) in which he points out Augustine’s use of allegory in interpreting “paradise” in Genesis:

But let us get back to the Paradise of Genesis. As Augustine put it, “… some allegorize all that concerns Paradise itself”: the four rivers are the four virtues; the trees, all knowledge, and so on. But to Augustine these things are better connected with Christ and his Church. Thus, Paradise is the Church; the four rivers, the four gospels; the fruit-trees, the saints; the tree of life, Christ; and the tree of knowledge, one’s free choice. And he closes the paragraph thus:

These and similar allegorical interpretations may be suitably put upon Paradise without giving offense to anyone, while yet we believe the strict truth of the history, confirmed by its circumstantial narrative of facts.[3]

To put this closing remark in slightly updated English, it reads as follows:

No one should object to such reflections and others even more appropriate that might be made concerning the allegorical interpretation of the Garden of Eden, so long as we believe in the historical truth manifest in the faithful narrative of these events.[4]

To be clear, Augustine was still holding to the literal meaning in the Genesis narrative even during his use of allegory in rendering extra meaning to the idea of paradise in Genesis. Again, professor Deems:

Augustine’s approach to the scriptures was gradual. At the time that he came across the Hortensius he turned to the Scriptures, only to turn away again, for in his estimation they could not compare with the writings of Cicero. Later at Milan following the advice of Ambrose he started to read Isaiah but found this too difficult and turned to the Psalms. The period of retirement and the months immediately following, which produced the philosophic treatises, were devoted to the classics rather than to the Bible. But increasingly Augustine studied and meditated upon the Scriptures, with the result that his writings are filled with Scriptural quotation and references…. The use of allegory by Augustine was not only a means of making Scripture say something, it was also a technique for bringing Scripture down to date, by forcing ancient words to minister, through prophecy, to the weaving of present patterns of behavior or through the summoning to higher ideals. But it was also dangerous for it came close to making Scripture say what he wanted it to say (through multiplicity of allegories of identical Scripture), and it prepared the way for Catholic or Protestant, later, to find in Scripture what he would.[5]

And this is key, as Professor Benno Zuiddam (research professor [extraordinary associate] for New Testament Studies, Greek and Church History at the Faculty of Divinity, North-West University, Potchefstroom, South Africa) points out,

As Augustine became older, he gave greater emphasis to the underlying historicity and necessity of a literal interpretation of Scripture. His most important work is De Genesi ad litteram. The title says it: On the Necessity of Taking Genesis Literally. In this later work of his, Augustine says farewell to his earlier allegorical and typological exegesis of parts of Genesis and calls his readers back to the Bible. He even rejected allegory when he deals with the historicity and geographic locality of Paradise on earth.[6]

The professor points out as well that from Augustine’s City of God, we can begin to see this literalism in the evolution of his responses to pagans. Dr. Zuiddam asks:

3) Isn’t it obvious from his City of God (De Civitate Dei) that Augustine believed that God created Man 6000 years ago?

Not quite, but a young earth definitely. Augustine wrote in De Civitate Dei that his view of the chronology of the world and the Bible led him to believe that Creation took place around 5600 BC [Ed. note: he used the somewhat inflated Septuagint chronology—see Biblical chronogenealogies for more information.]. One of the chapters in his City of God bears the title “On the mistaken view of history that ascribes many thousands of years to the age of the earth.” Would you like it clearer? Several pagan philosophers at the time believed that the earth was more or less eternal. Countless ages had preceded us, with many more to come. Augustine said they were wrong. This goes to show that theistic evolutionists who call in Augustine’s support do so totally out of context. All they allow themselves to see is his symbolic use of “day” in Genesis, and a very difficult philosophical doctrine of creation with ideas that develop. “Wonderful!” they think, “Augustine really supports our post-Darwinian theories!” It takes a superficial view of Genesis and Augustine to arrive at such conclusions. His instant creation, his young earth and immediate formation of Adam and Eve rule out Augustine’s application for this purpose.[7]

An example of this can be seen here with Augustine himself saying:

“They are also still being led astray by certain false writings, which claim that the history of the world spans many thousands of years, whereas we calculate from the Bible that not quite six thousand years have passed since the creation of man” (XII, 11).[8]

Non-literalist Professor James Barr (Professor of Hebrew Bible at Vanderbilt University and former Regius Professor of Hebrew at Oxford University in England) in a letter to David C.C. Watson, 23 April 1984 wrote this:

Probably, so far as I know, there is no professor of Hebrew or Old Testament at any world-class university who does not believe that the writer(s) of Genesis 1-11 intended to convey to their readers the ideas that (a) creation took place in a series of six days which were the same as the days of 24 hours we now experience; . . . Or, to put it negatively, the apologetic arguments which suppose the “days” of creation to be long eras of time, the figures of years not to be chronological, and the flood to be a merely local Mesopotamian flood, are not taken seriously by any such professors, as far as I know.[9]

As one can see from the above, and by following the links to the larger articles, Dr. Stephen Barr may want to revise his position on some of the church fathers and their views in regard to the age of the earth, and hence creation. A good resource for reading their thoughts on the matter — the early church fathers, that is — can be FOUND HERE.


[1] Tas Walker, “Famous evangelical apologist changes his mind: RC Sproul says he is now a six-day, young-earth creationist,” Creation Ministries International, published May 21st, 2008, found at URL:

http://creation.com/famous-evangelical-apologist-changes-his-mind-rc-sproul

[2] Allegory:

Allegory is primarily a method of reading a text by assuming that its literal sense conceals a hidden meaning, to be deciphered by using a particular hermeneutical key. In a secondary sense, the word “allegory” is also used to refer to a type of literature that is expressly intended to be read in this nonliteral way. John Bunyan’s Pilgrim’s Progress is a well-known example of allegorical literature, but it is doubtful whether any part of the Bible can be regarded as such. The parables of Jesus come closest, but they are not allegories in the true sense. The apostle Paul actually used the word allegoria, but arguably this was to describe what would nowadays be called “typology” (Gal. 4:24). The difference between typology and allegory is that the former attaches additional meaning to a text that is accepted as having a valid meaning in the “literal” sense, whereas the latter ignores the literal sense and may deny its usefulness altogether. Paul never questioned the historical accuracy of the Genesis accounts of Hagar and Sarah, even though he regarded them as having an additional, spiritual meaning as well. Other interpreters, however, were often embarrassed by anthropomorphic accounts of God in the Bible, and sought to explain away such language by saying that it is purely symbolic, with no literal meaning at all. It is in this latter sense that the word “allegory” is generally used today.

Kevin J. Vanhoozer, gen. ed., Dictionary for Theological Interpretation of the Bible (Grand Rapids, MI: Baker Books, 2005), cf. “Allegory,” 34-35.

[3] Mervin Monroe Deems, “Augustine’s Use of Scripture,” Church History Vol. 14, No. 3 (Sep., 1945), 196. Published by: Cambridge University Press on behalf of the American Society of Church History (Stable URL: http://www.jstor.org/stable/3160307) (emphasis added).

[4] Saint Augustine, City of God (New York, NY: Image Books, 1958), 288; or, Book XIII, 21 (emphasis added).

[5] Mervin Monroe Deems, “Augustine’s Use of Scripture,” Church History Vol. 14, No. 3 (Sep., 1945), 188-189 (emphasis added).

[6] Benno Zuiddam, “Augustine: young earth creationist — [How] theistic evolutionists take Church Father out of context,” Creation Ministries International, published October 8th, 2009, found at URL (emphasis added):

http://creation.com/augustine-young-earth-creationist

[7] Ibid.

[8] Benno Zuiddam, “Pas met de verlichting werd het donker” [“Only with the Enlightenment did it become dark”], Reformatorisch Dagblad (Reformed Daily), published April 15th, 2009, found at URL:

http://www.refdag.nl/nieuws/pas_met_de_verlichting_werd_het_donker_1_324668

 (will need Google to translate).

[9] Henry Morris, “The Literal Week of Creation,” ICR, found at URL:

http://www.icr.org/article/literal-week-creation/

...POSTSCRIPT

May I remind those who may not understand this critique that it [the critique] has nothing to do with said physicist’s faith. This is merely a challenge to his understanding of a historical figure, and to where he [Stephen Barr] separates his own understanding of Augustine from what Augustine actually believed. We know that Augustine, from his later writings specifically, rejected the spiritualistic aspect he once placed on the Genesis account and accepted the plain understanding as paramount. This critique neither places young-earth creationism as a litmus test for faith nor as some standard one must reach to be “holier” than thee. One may wish to read my footnote #18 to understand my position on this.