Friday, April 20, 2018

The Burning Platform - WINTER IS COMING (PART THREE)

“The very survival of the nation will feel at stake. Sometime before the year 2025, America will pass through a great gate in history, commensurate with the American Revolution, Civil War, and twin emergencies of the Great Depression and World War II.” – Strauss & Howe, The Fourth Turning
In Part One of this article I laid out the reasons for Gray Champions arising to meet challenges during crisis periods in history. In Part Two of this article I assessed the configuration of Gray Champions throughout the world and the potential impact on the course of this Fourth Turning.
The swirling fog of confusion enveloping the globe as the high lords of the universe play their game of thrones has even the most critical-thinking individuals baffled by the course of events. The desperation and blatant lawlessness of the Deep State players in their endeavor to preserve their hegemony over the course of global affairs is palpable with every attack, false flag, accusation, and ratcheting up of their propaganda media machine.
Like Game of Thrones, the behind-the-scenes machinations, subterfuge, and deceptions taking place outside the purview of the common folk are designed to benefit only the rich and powerful players undertaking these traitorous actions. Open warfare will not happen until it is thought to be in the best interests of those manipulating the levers of society and the narrative produced by their perpetual propaganda media machine. But, in the end, it will be the innocent common people who will suffer the consequences, while the lords reap the riches, glory, and power.
“Why is it always the innocents who suffer most, when you high lords play your game of thrones?” – Lord Varys – George R.R. Martin, Game of Thrones
The common people have always been blind to the next turning until after it fully arrives. Even now, the average person has no idea we are in the midst of a crisis period which will change the course of history. The overwhelming majority of the 325 million Americans, and billions around the globe, go about their daily lives oblivious to the intrigues, conspiracies, and treachery playing out at the highest levels of government and in smoky backrooms, where deals are made, wars plotted, and billions dispersed to the oligarchical lords running our world.
The common people get up, go to work, try to earn enough to survive or get ahead in life, raise their children, and endeavor to attain the lifestyle sold to them by their overlords based on delusion and debt. They are easily distracted by technological baubles, watching sporting events, enslaved by government handouts, and told what to believe by their keepers. They don’t want to experience the challenges of winter, but a never-ending summer. They don’t want to think and be responsible for their lives. They want to be left in peace on Twitter and Facebook, but that isn’t how it works during a Fourth Turning winter.
“The common people pray for rain, healthy children, and a summer that never ends,” Ser Jorah told her. “It is no matter to them if the high lords play their game of thrones, so long as they are left in peace.” He gave a shrug. “They never are.” – George R.R. Martin, A Game of Thrones
There is no telling how the next ten or so years will play out; which alliances prove to be successful or disastrous; whether Trump is compromised by the Deep State or wins this internal struggle; and the outcomes of the fast-approaching civil and global wars, which are inevitable given the current state of affairs in the world. The Fourth Turning isn’t a prediction. It’s a period of crisis driven by the generational alignment which happens like clockwork every 80 to 100 years. It predicts nothing. The course of events is up to the individuals driving those events.
Those who attempt to dismiss this generational theory by calling it doom porn or saying it is impossible to predict the future are revealing their fears rather than arguing based on facts or substance. A man who fears the coming trials and tribulations has already lost. Fear works far better than swords in keeping the masses controlled. Take the Russian bogeyman scenario being utilized at the present time to keep the ignorant masses distracted and bemused. Praying for a lone wolf to save the day and restore the world to its summer-like condition is irrational and again based upon fear. Winter winds are already blowing at gale force.
“When the snows fall and the white winds blow, the lone wolf dies but the pack survives.” – Ned Stark – George R.R. Martin, Game of Thrones
It’s fear that appears to be pushing people over the edge. The common people are being manipulated by the “powers that be” through propaganda, mistruths, distractions, iGadgets, hero worship, irrelevant social justice warrior issues, the illusion of political choices, and being lured into debt servitude by the banking cabal and their mega-corporation co-conspirators. They have successfully divided us into angry subsets of lone wolves unwilling or unable to unite and fight the true enemies.
The common people will again do the dying and get the short end of the stick, just as they did during the Civil War, Great Depression and World War II. In order to change the dynamics of this Fourth Turning from one where the lords determine our fate, it would require the majority to open their eyes to see the truth and be led by truly just men to overcome the forces of darkness currently in control. Based on history, this is an unlikely scenario, but still possible.
“Opening your eyes is all that is needing. The heart lies and the head plays tricks with us, but the eyes see true.” – Syrio Forel – George R.R. Martin, Game of Thrones
An extremely important question, on which hinges the future course of history, will need to be answered in the near future. Is Trump a moral, just, honorable leader who has the best interests of the American people as his sole priority, or will he continue to represent vested interests (aka the Deep State)? Words are not enough. It’s his deeds by which he will be judged. Is he a wolf in sheep’s clothing, or a noble warrior doing battle with Deep State enemies?
His contradictory and baffling actions over his first fifteen months in office have given hope to many, infuriated others, and confused the majority. Does he have principles or is everything negotiable? His decision making, relationships with foreign adversaries, ability to defeat his domestic enemies, and courage to do what is right whether or not it is popular, will determine his place in history. Failure could be catastrophic for the nation.
While Trump, Putin, and Xi play their game of thrones for world dominance, we the people still have to do our part at this crucial time in history. While the vast majority of Americans may not be intellectually capable of independent thought or critical thinking due to decades of dumbing down through the government education gulags and a steady diet of government propaganda, there are a minority of patriotic people who respect the Constitution and will need to man the wall.
We know the existing social order will be demolished by the end of this Fourth Turning; courageous acts will matter, sacrifice will be required, and defeating enemies from within and without will be compulsory. There will be no glory for common men who make the ultimate sacrifice and die for a better tomorrow for their children and grandchildren. Everyone has the potential to make a difference. Danger is omnipresent.
“I shall wear no crowns and win no glory. I shall live and die at my post. I am the sword in the darkness. I am the watcher on the walls. I am the fire that burns against the cold, the light that brings the dawn, the horn that wakes the sleepers, the shield that guards the realms of men.” – The Oath of the Brothers of the Night’s Watch – Game of Thrones
You don’t have to be a fan of Game of Thrones or a believer in the Fourth Turning to realize the world is in the midst of a crisis. Denial and willful ignorance will not turn back time to better days. Whether it be a fictional battle for control of the seven kingdoms or a real battle for control of petro-currencies, gas pipelines, natural resources, and military dominance, the humans locked in these battles never change.
Human nature has remained the same throughout history. The shortcomings of men across centuries have remained consistent: greed, power seeking, arrogance, cruelty, immorality, and hubris. Even Aleksandr Solzhenitsyn, a man of true courage, knew “the line dividing good and evil cuts through the heart of every human being”.
The coming storms will bring out the best and the worst in humanity. The nation could be snuffed out or be elevated to new glorious heights. If good wins out over evil, the heroic deeds of the winners will become the stuff of myths and legends. If evil wins out over good, the final shocking scene in Planet of the Apes may be our future. The choices we make will matter.
“The risk of catastrophe will be very high. The nation could erupt into insurrection or civil violence, crack up geographically, or succumb to authoritarian rule. If there is a war, it is likely to be one of maximum risk and effort–in other words, a total war…
History’s howling storms can bring out the worst and best in people. The next Fourth Turning can literally destroy us as a nation and people, leaving us cursed in the histories of those who endure and remember. Alternatively, it can ennoble our lives, elevate us as a community, and inspire acts of consummate heroism–deeds that will grow into mythlike legends recited by our heirs far into the future.” – Strauss & Howe – The Fourth Turning

Vox Popoli: Re-opening the closed door (On immigration conspiracy)

How immigrants and their allies conspired to end the national origins system that made America great in the 20th century.
The demographic consequences of ending the open door cannot be known with certainty, since no one can be sure what immigration would have been in the absence of restriction. Demographer Leon Bouvier has estimated that, assuming no restriction and pre-war levels of one million a year for the rest of the century, the American population would have reached 400 million by the year 2000. This would have meant 120 million more American high-consumption lifestyles piled upon the roughly 280 million reported in the census of 2000, making far worse the dismal figures on species extinction, wetland loss, soil erosion, and the accumulation of climate-changing and health-impairing pollutants that are being tallied up as the new century unfolds.

The chief goals of the national origins system, shrinking the incoming numbers and tilting the sources of the immigration stream back toward northern Europe, were less decisively achieved. Numbers entering legally but outside the quotas (“non-quota immigrants,” mostly relatives of those recently arrived and Europeans entering through Latin American and Caribbean countries) surprised policymakers by matching and in time exceeding those governed by quotas. Yet with overall numbers so low, ethnic composition did not agitate the public.

International economic maladies, war, and the new American system of restriction had thus combined to reduce immigration numbers to levels more in line with the long course of American history, and to some observers seemed to have ended the role of immigration as a major force in American life. Apparently the nation would henceforth grow and develop, as Thomas Jefferson had preferred, from natural increase and the cultural assets of its people.

The curbing of the Great Wave created a forty-year breathing space of relatively low immigration, with effects favorable to assimilation. The pressures toward joining the American mainstream did not have to contend with continual massive replenishment of foreigners.

The new immigration system was widely popular, and the immigration committees of Congress quickly became backwaters of minor tinkering or inactivity. The 1930s arrived with vast and chronic unemployment, and the American people wanted nothing from immigration. War in Europe would bring unprecedented refugee issues, but dealing with these — or avoiding them — did not require any rethinking of the basic system for deciding on the few thousand people who would be given immigration papers.

But American immigration policy in the postwar years attracted a small but growing body of opponents. The political core of a coalition pressing for a new, more “liberalized” policy regime was composed of ethnic lobbyists (“professional immigrant-handlers,” Rep. Francis Walter called them) claiming to speak for nationalities migrating prior to the National Origins Act of 1924, the most effective being Jews from central and eastern Europe who were deeply concerned with the rise of fascism and anti-semitism on the continent and eternally interested in haven. Unable by themselves to interest many politicians or the media in the settled issue of America’s immigration law, these groups hoped for new circumstances in which restrictions could be discredited and the old regime of open doors restored. The arrival of the Civil Rights Movement thrust (racial) “discrimination” into the center of national self-examination. The enemy everywhere at the bottom of virtually every national blemish seemed to be Discrimination, the historic, now intolerable subordinating classification of groups on the basis of inherited characteristics. The nation’s national origins-grounded immigration laws could not escape an assault by these reformist passions, and critics of the national origins system found the liberal wing of the Democratic Party receptive to their demand that immigration reform should be a part of the civil rights agenda.

Who would lead, and formulate what alternatives? Massachusetts Senator John F. Kennedy cautiously stepped out on the issue in the 1950s, sensing that a liberalization stance would gather vital ethnic voting blocs for his long-planned run for the presidency. His work on a refugee bill caught the attention of officials of the Anti-Defamation League of B’nai B’rith, who convinced Kennedy to become an author of a pamphlet on immigration, with the help of an ADL-supplied historian, Arthur Mann, and Kennedy’s staff. The result was A Nation of Immigrants, a 1958 bouquet of praise for the contributions of immigrants and a call for an end to the racist, morally embarrassing national origins system. The little book was initially ignored, but its arguments would dominate the emerging debate. The ADL, part of a Jewish coalition whose agenda included opening wider the American gates so that increasing U.S. ethnic heterogeneity would reduce the chances of a populist mass movement embracing anti-semitism, had made a golden alliance. John F. Kennedy was no crusader on immigration (or anything else), but he was an activist young President by 1961, comfortable with immigration reform as part of his agenda, elected on a party platform that pledged elimination of the national origins system.

What comes next? The USA is again on course to reach 400 million imperial subjects sometime between 2043 and 2051, depending upon which UN report you credit. Its population is unlikely to ever reach that size, of course, but it should be apparent that the forty-year breathing space created by the national origins system is the primary reason the empire has not collapsed already.

Barring a mass repatriation program for all post-1965 immigrants and their descendants, which appears extremely unlikely at the moment, the political breakup should begin by the early 2030s. Every empire is destroyed by immigration of one sort or another in the end, but it is the cultural decadence and lack of confidence that permits such immigration to take place that is the true cause of the collapse.

Had American politicians possessed the wisdom to arrest and deport the seditious ethnic lobbyists who agitated for ending the national origins system, the collapse of the empire would not be rapidly approaching. Now the necessary surgery is even more difficult and considerably less politically palatable. So, we can safely conclude that it will not be performed.

Vox Popoli: The nonexistent principles of Never Trump (And the failure of Conservatism.)

Kurt Schlichter tears into the pious frauds who, despite their proclamations of high principle, have proven to be every bit as unprincipled as we always figured they were:
Where are your principles in the face of the gross injustices of the last few days? A federal judge who was nearly appointed Bill Clinton’s attorney general and who officiated at Soros’s wedding ordered Hannity’s information disclosed, but that was cool with you. After all, Sean Hannity is so…oh well, I never!

Principles that depend on who is asserting them aren’t principles. They are poses.

If you actually adhered to them, your principles would have you shrieking, not cheering. A bunch of Hillary-donating feds should not be allowed to randomly pillage through privileged materials looking for a crime. No, the crime-fraud exception does not mean that the feds can just take all your stuff, read through it, and decide if some happens to fall into that narrow exception and leak the rest. But hey, why let some principles get in the way of a good laugh at the expense of one of those Trump people?

Gosh, it’s almost like your talk of principles was just…talk.

Schlichter is correct. There are no Never Trump principles. As a matter of fact, there are no conservative principles, because conservatism is not, and has never been, a coherent ideology. It is, ultimately, a reactive, defensive pose.

That's the strategic problem with conservatism. It literally can't win. It can't go on the offense, because it has no objectives. And Never Trump is conservatism with cancer.

UPDATE: They were always frauds from the start.
Former presidential candidate Evan McMullin owes his former campaign staff members tens of thousands of dollars and most believe he has no intention of ever paying them, a former campaign worker tells The Daily Caller News Foundation.

Right before McMullin’s failed bid for president in 2016 as the conservative alternative to President Donald Trump, the campaign was inundated with debt. The disastrous fiscal situation was a combination of frivolous spending by McMullin and his campaign manager Joel Searby, according to the former staffer.

McMullin received news weeks before Election Day 2016 about how dire the campaign’s finances were, and he had “no remorse” and said he had “no qualms about this thing ending badly in debt,” the former staffer claimed. McMullin’s cavalier attitude towards the campaign’s spending struck many as a surprise, particularly because he billed himself as a fiscal conservative, he added.

It is simply delicious to think of all the harrumphing bow-ties shedding furious tears over the way they were stiffed by their fine, principled fiscally conservative candidate who was only running out of his deep sense of outraged honor.

Vox Popoli: The 11 percent metric (Modern science is actually less reliable than flipping a coin.)

Modern science is actually less reliable than flipping a coin. The Wall Street Journal reports on scientific efforts to address the reproducibility crisis:
Half the results published in peer-reviewed scientific journals are probably wrong. John Ioannidis, now a professor of medicine at Stanford, made headlines with that claim in 2005. Since then, researchers have confirmed his skepticism by trying—and often failing—to reproduce many influential journal articles. Slowly, scientists are internalizing the lessons of this irreproducibility crisis. But what about government, which has been making policy for generations without confirming that the science behind it is valid?

The biggest newsmakers in the crisis have involved psychology. Consider three findings: Striking a “power pose” can improve a person’s hormone balance and increase tolerance for risk. Invoking a negative stereotype, such as by telling black test-takers that an exam measures intelligence, can measurably degrade performance. Playing a sorting game that involves quickly pairing faces (black or white) with bad and good words (“happy” or “death”) can reveal “implicit bias” and predict discrimination.

All three of these results received massive media attention, but independent researchers haven’t been able to reproduce any of them properly. It seems as if there’s no end of “scientific truths” that just aren’t so. For a 2015 article in Science, independent researchers tried to replicate 100 prominent psychology studies and succeeded with only 39% of them.

Further from the spotlight is a lot of equally flawed research that is often more consequential. In 2012 the biotechnology firm Amgen tried to reproduce 53 “landmark” studies in hematology and oncology. The company could only replicate six. Are doctors basing serious decisions about medical treatment on the rest? Consider the financial costs, too. A 2015 study estimated that American researchers spend $28 billion a year on irreproducible preclinical research.

The chief cause of irreproducibility may be that scientists, whether wittingly or not, are fishing fake statistical significance out of noisy data. If a researcher looks long enough, he can turn any fluke correlation into a seemingly positive result. But other factors compound the problem: Scientists can make arbitrary decisions about research techniques, even changing procedures partway through an experiment. They are susceptible to groupthink and aren’t as skeptical of results that fit their biases. Negative results typically go into the file drawer. Exciting new findings are a route to tenure and fame, and there’s little reward for replication studies.
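The "look long enough and any fluke becomes a finding" mechanism described above is easy to demonstrate. The following is a minimal illustrative sketch (not from the source): it runs many significance tests on pure noise, where no real effect exists by construction, and counts how often a "significant" result appears anyway. The |t| > 2 cutoff is a rough stand-in for the conventional p < 0.05 threshold; all names and parameters here are assumptions chosen for illustration.

```python
import random
import statistics

random.seed(1)

T_CUTOFF = 2.0   # |t| > 2 roughly corresponds to p < 0.05 for samples this size
N_TESTS = 1000   # number of independent "hypotheses" a researcher might examine
N = 30           # observations per group

def t_stat(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / ((var_a / len(a) + var_b / len(b)) ** 0.5)

false_positives = 0
for _ in range(N_TESTS):
    # Both groups are drawn from the SAME N(0, 1) distribution:
    # any measured difference between them is pure noise.
    group_a = [random.gauss(0, 1) for _ in range(N)]
    group_b = [random.gauss(0, 1) for _ in range(N)]
    if abs(t_stat(group_a, group_b)) > T_CUTOFF:
        false_positives += 1

rate = false_positives / N_TESTS
print(f"'Significant' results found in pure noise: {rate:.1%}")
```

Roughly one test in twenty comes out "significant" despite there being nothing to find, which is exactly why a researcher who runs many comparisons and reports only the ones that cross the threshold can always produce a seemingly positive result.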

It's always ironic how the IFLS crowd isn't even remotely up to speed on current science while simultaneously pointing and shrieking about how everyone with substantive and valid criticism of scientistry simply "doesn't understand science". You can see this in the comments of the most recent Voxiversity on Christianity and Western Civilization. Richard Dawkins has repeatedly argued that eyewitness testimony should not be used in the courtroom because it is insufficiently reliable, but by his own metric, the expert testimony of a scientist should be barred from the courtroom as well, because science is considerably less statistically reliable.

As for the idea that science can even theoretically serve as a basis for moral guidance, the grand windmill at which Sam Harris has been jousting in futility for the last 10 years, that has become more obviously ridiculous than even his most brutal critics believed at the start. One would do nearly four times better to simply flip a coin; indeed, statistically speaking, one's optimal strategy is to listen to what scientists advise, then do precisely the opposite.
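The "nearly four times better" figure follows directly from the Amgen numbers quoted above: a coin flip is right half the time, while only 6 of 53 landmark studies replicated. A two-line check (illustrative arithmetic only):

```python
# Amgen figures quoted above: 6 of 53 "landmark" studies replicated.
replicated, attempted = 6, 53
replication_rate = replicated / attempted   # about 11.3% -- the "11 percent metric"
coin_flip = 0.5

print(f"Replication rate: {replication_rate:.1%}")
print(f"Coin-flip advantage: {coin_flip / replication_rate:.1f}x")  # about 4.4x
```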

Of course, in retrospect, this should have always been obvious. Look at the average scientist. Do you think following his advice on women or doing the precise opposite is more likely to lead to a desirable outcome? Do you trust his philosophy on fitness, or on any other aspect of life? These are individuals whose entire perspectives on life, the universe, and everything are constructed on an illusion of a nonexistent solidity.

And the great irony is that scientistry now stands condemned by its beloved scientodific metric. The New Atheists reasoned that religious faith must be false by presuming the eyewitness testimony and documentary evidence to the contrary to be false, but now we actually know, we do not merely reason, that it is faith in science that is false due to irreproducibility.

Thursday, April 19, 2018

A Conversation on Race -- Paul Craig Roberts (A history lesson on race)

We often hear that we need a conversation on race. Considering that Americans are a brainwashed people living in a false history, such a conversation would resemble the one the Russians were expected to have with the British in regard to the Skripal poisoning: “Yes, we are guilty. We will pay reparations. Where would you like us to send Putin for trial?” In other words, the only acceptable race conversation in the US is one in which white people accept the accusation that they are racist and offer to make amends.
Considering that the only slavery experienced by any living black or white person is income tax slavery, race is an issue only because it has been orchestrated as an issue along with gender and sexual preference. These divisive issues are the products of Identity Politics spawned by cultural Marxism.
In real Marxism, conflict is class conflict. Workers and capitalists have different interests, and history is a struggle between material interests. The capitalist is the villain and the workers are the victims.
In the pseudo Marxism of Identity Politics, the white race is the villain, especially the white heterosexual male, and racial minorities, women, and homosexuals are the victims.
There is, of course, no such thing as a white or black race. There are many different nationalities of whites, and they have done a good job throughout history of killing each other. Similarly, there are many different black tribes and Asian ethnicities who also have fought more among themselves than with others. But all of this goes by the wayside, along with the fact that in the world the “racial minorities” are actually majorities and the “white majority” is actually a minority. There are more Chinese or Indians alone than there are white people.
But orchestrated histories are not fact-based.
The working class, designated by Hillary Clinton as “the Trump deplorables,” is now the victimizer, not the victim. Marxism has been stood on its head.
The American ruling class loves Identity Politics, because Identity Politics divides the people into hostile groups and prevents any resistance to the ruling elite. With blacks screaming at whites, women screaming at men, and homosexuals screaming at heterosexuals, there is no one left to scream at the rulers.
The ruling elite favors a “conversation on race,” because the ruling elite know it can only result in accusations that will further divide society. Consequently, the ruling elite have funded “black history,” “women’s studies,” and “transgender dialogues,” in universities as a way to institutionalize the divisiveness that protects them. These “studies” have replaced real history with fake history.
For example, it was once universally known that black slavery originated in slave wars between black African tribes. Slaves were a status symbol, but they accumulated beyond the capacity of tribes to sustain. The surplus was exported first to Arabs and then to English, Spanish, and French who founded colonies in the new world that had resources but no work force. The socialist scholar Karl Polanyi, brother of my Oxford professor Michael Polanyi, told the story of the origin of the African slave trade in his famous book, Dahomey and the Slave Trade.
The first slaves in the new world were white. When real history was taught, this was widely understood. Movies were even made that showed that in King George III’s England, the alternative to criminal punishment was to be sold as a slave in the colonies.
Among the first New World lands to be exploited by the Europeans were the Caribbean Islands, which were suitable for sugar and rice production. The problem was that the white slaves died like flies from malaria and yellow fever. The Spanish lack of success with a work force of natives of the lands they conquered led those in search of a work force to the slave export business of the black Kingdom of Dahomey. The demand for black workers rose considerably when it was discovered that many had immunity to malaria and resistance to yellow fever. This meant that a plantation’s investment in a work force was not wiped out by disease.
The resistance of blacks to malaria is due to the protective feature of the sickle cell trait that, apparently, only blacks have.
Slavery existed in the New World long before the United States came into existence. George Washington and Thomas Jefferson are today written off by Identity Politics as racists simply because they were born when slavery was a pre-existing institution.
Slavery had existed for many centuries prior to the Confederacy. Yet, in some accounts today one comes away with the impression that the South invented slavery. As the tale sometimes goes, Southern racists so hated blacks that they went to Africa, captured blacks at great expense, only to return them to the South where they whipped and abused their investments to the point of death and demoralized their work force by breaking up black families, selling children in one direction and wives and husbands in the other. This tale is not told as an occasional abuse but as the general practice. Economically, of course, it makes no sense whatsoever. But facts are no longer part of American history.
Northern states held slaves as well. However, the preponderance of slaves was in the South. This was not because Southerners hated blacks. It was because the land in the South supported large agricultural cultivation, and there was no other work force. The South, like the United States, inherited slavery from the work force that European colonists purchased from the black Kingdom of Dahomey.
Why wasn’t there an alternative work force to slaves? The reason is that new immigrants by moving West could take land from the native Americans and be independent as opposed to being wage earners working on someone else’s land. The Western frontier did not close until about 1900. At the time of the War of Northern Aggression the Plains Indians still ruled west of the Mississippi River. It was Lincoln’s Northern war criminals, Sherman and Sheridan, who were sent to exterminate the Plains Indians. Ask the American natives, or what is left of them, who the racists are: the Northerners or the Southerners.
Black studies has even corrupted other aspects of history. Consider the so-called “civil war.” The name itself is an orchestration. There was no civil war. There was a War of Northern Aggression. A civil war is when two sides fight for control of the government. The South had left the union and had no interest whatsoever in controlling the government in Washington. The only reason the South fought was that the South was invaded by the North.
Why did the North invade the South? As was once understood by every historian and every student, Abraham Lincoln invaded the South in order, in Lincoln’s own words, expressed time and time again, “to preserve the Union.”
Why did the South leave the Union? Because it was being economically exploited by the North, which, once the North gained the ability to outvote the Southern states, imposed tariffs that benefited the North at the expense of the South. The North needed protection from British manufactures in order to industrialize and rise economically. In contrast, the South’s economy was based on cotton exports to England and on cheap manufactures imported from England. Tariffs would bring the South higher costs of manufactured goods and retaliation against its cotton exports. The economic interests of the North and South did not coincide.
Slavery had nothing whatsoever to do with the war. Lincoln himself said so over and over. Prior to his invasion of the South, Lincoln and the Northern Congress promised the South Constitutional protection of slavery for all time if the Southern states would stay in the Union. Historians who have read and recorded the war correspondence of both Union and Confederacy soldiers to relatives and friends at home can find no one fighting for or against slavery. The Northern troops are fighting to preserve the union. The Southern ones are fighting because they are invaded.
Nothing could be clearer. Yet, the myth has been established that Abraham Lincoln went to war in order to free the slaves. In fact, Lincoln said that blacks were not capable of living with whites, who he said were superior, and that his intention was to send the blacks back to Africa. If America ever had a “white supremacist,” it was Abraham Lincoln.
What about the Emancipation Proclamation? Didn’t this order by Lincoln free the blacks? No. It was a war measure resting on the hope that, with almost every able-bodied Southern male in the front lines, the slaves would revolt and rape the Southern soldiers’ wives and daughters, forcing the soldiers to desert the army and return home to protect their families. As Lincoln’s own Secretary of State said, the president had freed the slaves in the territory that the Union did not control and left them in slavery in the territory that the Union did control.
Why did Lincoln resort to such a dishonorable strategy? The reason is that Lincoln had run through all the Union generals and could not find one that could defeat Robert E. Lee’s vastly outnumbered Army of Northern Virginia.
The character and generalship of Robert E. Lee, who is dismissed by Identity Politics as a white racist, is so highly admired by the United States Army that the Barracks at West Point are named in Lee’s honor. Not even “America’s first black president” was able to change that. Black history also covers up the fact that Robert E. Lee was offered command of the Union Army. In those days Americans still saw themselves as citizens of their state, not as citizens of the US. Lee refused the offer on the grounds that he could not go to war against his native country of Virginia and resigned his US Army commission.
If Lee had been in command of the Confederacy at the First Battle of Bull Run when the Union Army broke and ran all the way back to Washington, Lee would have followed and the war would have ended with the South’s victory.
But Lee wasn’t there. Instead, the Southern generals, watching the fleeing Union Army, concluded that the Northerners could neither fight, retreat in order, nor ride horses, and were no threat whatsoever. This conclusion overlooked the superior manpower of the North, the constant inflow of Irish immigrants who became the Union’s cannon fodder, the Northern manufacturing capability, and the navy that could blockade Southern ports and starve the South of resources.
During the first two years of the War of Northern Aggression the Union Army never won a battle against Lee’s vastly outgunned army. The North had everything. All the South had was valor. Lincoln was desperate. Opposition to his war was rising in the North. He had to imprison 300 Northern newspaper editors, exile a US Congressman, and was faced with the North’s most famous general running against him on a peace platform in the next election. Thus, Lincoln’s vain attempt to provoke a slave rebellion in the South. Why didn’t such allegedly horribly treated and oppressed slaves revolt when there was no one to prevent it but women and children?
Everything I have written in this column was once understood by everyone. But it has all been erased and replaced with a false history that serves the ruling elite. It is not only the ruling elite that has a vested interest in the false history of “white racism,” but also the universities and history departments in which the false history is institutionalized and the foundations that have financed black history, women’s studies, and transgender dialogues.
It was Reconstruction that ruined relations between blacks and whites in the South. The North stuffed blacks down the throats of the defeated South. Blacks were placed in charge of Southern governments in order to protect the Northern carpetbaggers who looted and stole from the South. The occupying Union Army encouraged the blacks to abuse the Southern people, especially the women, as did the Union soldiers. The Ku Klux Klan arose as a guerrilla force to stop the predations. Robert E. Lee himself said that if he had realized how rapacious the North would prove to be, he would have led a guerrilla resistance.
The generations of Americans who have been propagandized instead of educated need to understand that Reconstruction did not mean rebuilding southern infrastructure, cities, and towns destroyed by the Union armies. It did not mean reconstructing southern food production. It meant reconstructing southern society and governance. Blacks, who were unprepared for the task, were put in control of governments so that carpetbaggers could loot and steal. Whites lost the franchise and protection of law as their property was stolen. Some areas suffered more than others from the Reconstruction practices, which often differed from, and were worse than, the policies themselves.
Reconstruction was a contentious issue even within the Republican Party. Neither President Lincoln nor President Johnson would go along with the more extreme Republican elements. The extremism of the Reconstruction policies lost support among the Northern people. When the Democrats regained control of the House of Representatives in the 1870s, Reconstruction was brought to an end.
In the South, and most certainly in Atlanta, where I grew up, schools were neighborhood schools. We were segregated by economic class. I went to school with middle class kids from my middle class neighborhood. I did not go to school with rich kids or with poor kids. This segregation was not racial.
When the North again got on its high moral horse and imposed school integration on the South, it disrupted the neighborhood school system. Now kids spent hours riding school buses to distant locations. This destroyed the parent-teacher associations that had kept parental involvement and discipline in the schools. Southerners, being a commonsense people, saw all of this coming. They also saw Reconstruction all over again. That, and not hatred of blacks, is the reason for the South’s resistance to school integration.
All of America, indeed the entire West, lives in The Matrix, a concocted reality, except for my readers and the readers of a handful of others who cannot be compromised. Western peoples are so propagandized, so brainwashed, that they have no understanding that their disunity was created in order to make them impotent in the face of a rapacious ruling class, a class whose arrogance and hubris has the world on the brink of nuclear Armageddon.
History as it actually happened is disappearing as those who tell the truth are dismissed as misogynists, racists, homophobes, Putin agents, terrorist sympathizers, anti-semites, and conspiracy theorists. Liberals who complained mightily of McCarthyism now practice it ten-fold.
The brainwashing about the Russian and Muslim threats works for a number of reasons. The superpatriots among the Trump deplorables feel that their patriotism requires them to believe the allegations against Russia, Syria, Iran, and China. Americans employed in the vast military/security complex understand that the budget that funds the complex in which they have their careers is at stake. Those who want a wall to keep out foreigners go along with the demonization of Muslims as terrorists who have to be killed “over there before they come over here.” The Democrats want an excuse for having lost the presidential election. And so on. The agendas of various societal elements come together to support the official propaganda.
The United States with its brainwashed and incompetent population—indeed, the entirety of the Western populations are incompetent—and with its absence of intelligent leadership has no chance against Russia and China, two massive countries arising from their overthrow of police states as the West descends into a gestapo state. The West is over and done with. Nothing remains of the West but the lies used to control the people. All hope is elsewhere.

The IQ trap: how the study of genetics could transform education - By Philip Ball

The study of the genes which affect intelligence could revolutionise education. But, haunted by the spectre of eugenics, the science risks being lost in a political battle.
The appointment – followed, eight days later, by the resignation – of Toby Young to the board of the government’s new Office for Students in January was only the latest in a series of controversial interventions in education for the self-styled Toadmeister (Young’s Twitter handle). Having established his media profile on a platform of comments guaranteed to rile the “politically correct” (sexism, homophobia, that sort of thing), he began to reinvent himself as an educationalist through his initiatives on free schools – and he has been raising hackles in that sphere too. Things came to a head late last year when an article that Young wrote for the charity Teach First on intelligence and genetics was withdrawn from the organisation’s website on the grounds that it was “against what we believe is true and against our values and vision”. Young’s article summarised – rather accurately – the current view on how genes affect children’s IQ and academic attainment, and concluded that there is really not much that schools can do at present to alter these seemingly innate differences.
That affair is now coloured by the disclosure that Young had advocated “progressive eugenics” as a way to boost intelligence in a 2015 article in the Australian magazine Quadrant. The flames were fanned by Private Eye’s account of how Young attended what was widely labelled a “secret eugenics conference” at University College London that featured speakers with extremist views.
All this is viewed with dismay by scientists who are researching the role of genes in intelligence and considering the implications for education. They are already labouring under a cloud of suspicion, if not outright contempt, from some educationalists, and interventions by grandstanders such as Young will do nothing to soften the tenor of the debate. Such polarisation and conflict should trouble us all, though. Because, like it or not, genetics is going to enter the educational arena, and we need to have a sober, informed discussion about it.
Researchers are now becoming confident enough to claim that the information available from sequencing a person’s genome – the instructions encoded in our DNA that influence our physical and behavioural traits – can be used to make predictions about their potential to achieve academic success. “The speed of this research has surprised me,” says the psychologist Kathryn Asbury of the University of York, “and I think that it is probable that pretty soon someone – probably a commercial company – will start to try to sell it in some way.” Asbury believes “it is vital that we have regulations in place for the use of genetic information in education and that we prepare legal, social and ethical cases for how it could and should be used.”
If that sounds frightening, however, it might be because of a wide misapprehension about what genes are and what they do.
It’s sometimes said that the whole notion that intelligence has a genetic component is anathema to the liberals and left-wingers who dominate education. Young reliably depicts the extreme version here, saying “liberal educationalists… reject the idea that intelligence has a genetic basis [and] prefer to think of man as a tabula rasa, forged by society rather than nature”. He’s not alone, though. The psychologist Jill Boucher of City, University of London has lambasted what she calls “the unthinkingly self-righteous, hypocritical and ultimately damaging political correctness of those who deny that genetic inheritance contributes to academic achievement and hence social status”. Teach First’s suppression of Young’s article contributed to that impression: it was a clumsy and poorly motivated move. (The organisation has since apologised to Young.)
Despite this rhetoric, however, you’d be hard pushed to find a teacher who would question that children arrive at school with differing intrinsic aptitudes and abilities. Some kids pick things up in a flash, others struggle with the basics. This doesn’t mean it’s all in their genes: no one researching genes and intelligence denies that a child’s environment can play a big role in educational attainment. Of course kids with supportive, stimulating families and motivated peers have an advantage, while in some extreme cases the effects of trauma or malnutrition can compromise brain development. But the idea of the child as tabula rasa seems to be something of a straw man.
That’s backed up by a 2005 study by psychologist Robert Plomin of King’s College London, one of the leading experts on the genetic basis of intelligence, and his colleague Sheila Walker. They surveyed almost 2,000 primary school teachers and parents about their perceptions of genetic influence on a number of traits, including intelligence, and found that on the whole, both teachers and parents rated genetics as being just as important as the environment. This was despite the fact that 80 per cent of the teachers said there was no mention of genetics in their training. Plomin and Walker concluded that educators do seem to accept that genes influence intelligence.
Kathryn Asbury supports that view. When her PhD student Madeline Crosswaite investigated teachers’ beliefs about intelligence, Asbury says she found that “teachers, on average, believe that genetic factors are at least as important as environmental factors” and say they are “open to a role for genetic information in education one day, and that they would like to know more”.
Why, then, has there been this insistence from conservative commentators that liberal educationalists are in denial? It’s just one reflection of how the whole discussion has become highly politicised as left versus right, political correctness versus realism. There’s more of that to come.
It may be that people’s readiness to accept innate difference decreases when it is couched in terms of genes. If so, one reason could be a lingering association of genes with eugenics – the notion of improving traits in a population by selective breeding, and perhaps sterilisation, to promote “good” genes and drive out “bad”.
That bitter stew gets stirred by media-fuelled fantasies about designer babies and a genetic underclass (see the 1997 movie Gattaca). But I have a hunch, too, that many detect a whiff of determinism in the current discourse on genetics: that your genes fix from conception what kind of person you will become.
The intended counter-piece to Young’s on the Teach First website was written by Sonia Blandford, dean of education at Canterbury Christ Church University College and author of Born to Fail?. Blandford was silent about genes but wrote only about the inequities of a disadvantaged or lower-class background. So Young and Blandford would have been talking past each other, while leaving hanging in the air the idea that your genetics could also leave you “born to fail”.
All too often genes are read as destiny. But in truth there’s rather little in your genetic make-up that fixes traits or behaviour with any clarity. There are some genetic diseases that particular gene mutations will give you if you’re unlucky enough to inherit them. But most traits (including diseases) that are influenced by genes manifest only as tendencies. If you’re a woman with a certain variant of the BRCA1 gene, you have an increased risk of developing breast cancer. But there’s nothing to say that you will.
Partly this is because a lot of traits are influenced by many genes, interacting and correlating with one another in complex ways that are hard, perhaps impossible, to anticipate. But it’s also because genes are themselves influenced by environmental factors, which can cause them to be activated or suppressed. When it comes to behavioural traits such as intelligence, prediction from genes is unclear. Brain development is sensitive to genetic influence, but it’s not completely determined by it. The way the brain gets “wired” depends on early experience in the womb, childhood and adolescence, and remains susceptible to environmental influences throughout life.
Quite why genes have acquired this deterministic, and therefore ominous, aura isn’t clear. I strongly suspect that the rhetoric used to advertise the human genome project played a big part, promoting the notion that your genes are “the real you”. DNA sequencing companies such as 23andMe now use this line to sell their wares. Talking of genes “for” this or that trait reinforces the impression – there are no genes “for” intelligence, height, breast cancer and so on, although some genes affect those things. Genetics is now trying to backpedal out of a hole that, without such hype, it need never have got into. The result is that the tone of a discussion of innate versus environmental factors in intelligence is likely to plummet once genes are mentioned. “People worry about the motives that researchers have for asking these sorts of questions [about nature and nurture],” says Asbury. “I think eugenics still casts a long shadow.”
Whatever the reasons, the fact is that almost all research on education and genes is done within departments not of education but of psychology or genetics, a point made by the psychologist Stuart Ritchie of Edinburgh University. As a result, he says, while the science is fairly settled, “the debate in education is lagging behind”.
What does the science tell us about genes and intelligence? For geneticists, the challenge with any behavioural trait is to distinguish inherited influences from environmental ones. Are you smart (or not) because of your genes, or your home and school environment? For many years, the only way to separate these factors was through twin studies. This is a somewhat coarse way of controlling for genetic similarity, which entails looking at how the traits of identical and non-identical twins (who are 100 per cent or 50 per cent genetically identical, respectively) differ when they share or don’t share the same background – for example, when they are adopted into different family environments.
But now it’s possible to look directly at people’s genomes: to read the molecular code (sequence) of large proportions of an individual’s DNA. Over the past decade the cost of genome sequencing has fallen sharply, making it possible to look more directly at how genes correlate with intelligence. The data both from twin studies and DNA analysis are unambiguous: intelligence is strongly heritable. Typically around 50 per cent of variations in intelligence between individuals can be ascribed to genes, although these gene-induced differences become markedly more apparent as we age. As Ritchie says: like it or not, the debate about whether genes affect intelligence is over.
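The twin-study logic described above can be sketched with Falconer’s classic formula, which estimates heritability from the gap between identical (MZ) and non-identical (DZ) twin correlations. The correlation figures below are illustrative assumptions, not numbers taken from any study cited here:

```python
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Falconer's formula: MZ twins share ~100% of their DNA, DZ twins ~50%,
    so twice the gap between their trait correlations estimates heritability."""
    return 2 * (r_mz - r_dz)

# Illustrative values: if identical twins' IQ scores correlate at 0.75
# and non-identical twins' at 0.50, heritability comes out at
# 2 * (0.75 - 0.50) = 0.50 -- the "around 50 per cent" figure in the text.
print(falconer_heritability(0.75, 0.50))  # → 0.5
```

Modern estimates increasingly come instead from direct DNA-based methods, as the article goes on to describe.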
If that’s so, we should be able to see which genes are involved. But it has proved extremely difficult to find them. For many years, extensive efforts to zero in on the genes underpinning intelligence produced only a few candidates. Over the past year or so, however, the picture changed dramatically, partly because of better methods of searching but also because the spread of genome sequencing has made much bigger population samples available: that’s the key to spotting very small effects.
None of the genes identified this way are in any meaningful sense “for intelligence”. They tend to have highly specialised functions in embryo development – mostly connected to the brain. The influence of a particular gene might manifest in one or more aspects of intelligence, such as spatial sense, vocabulary or memory. There may well be hundreds, even thousands of such genes that make a contribution to intelligence. And people show so many different cognitive skills, ranging from imagination to an ability to remember historical dates or do calculus, that it could seem ludicrous to collapse them all to the single dimension of, say, IQ (see box below).
Recently, the introduction of a new way of adding up the influences of many genes, known as a genome-wide polygenic score (GPS), has hugely boosted our ability to identify the specific genetic variants that contribute to the heritable component of intelligence. But if so many genes are involved, can we meaningfully predict anything from someone’s genes about their likely intelligence? Well, even if we don’t know quite how all those genes function or integrate their effects, we can search for patterns – just as, although we can’t know exactly what led some individuals to vote for Brexit, we can make a fair prediction of how they voted from their age and demographic profile.
GPSs can now be used to make such predictions about intelligence. They’re not really reliable at the moment, but will surely become better as the sample sizes for genome-wide studies increase. They will always be about probabilities, though: “Mrs Larkin, there is a 67 per cent chance that your son will be capable of reaching the top 10 per cent of GCSE grades.” Such exam results were indeed the measure Plomin and colleagues used for one recent study of genome-based prediction. They found that there was a stronger correlation between GPS and GCSE results for extreme outcomes – for particularly high or low marks.
We could never forecast anything for sure. In Plomin’s study, the young person with the second-highest GPS for intelligence achieved results only slightly above average. That’s not surprising, though: environmental factors still play an important role. There might be, say, a family problem holding the child back. Or it may be that the GPS is not in this case an accurate indicator of potential at all, and the child gets burdened with unrealistic expectation and disappointment from teachers and parents. So using such measures for individual prediction could be fraught.
Whatever the uncertainties, though, you can be sure some people will want this information, just as they currently get their genomes analysed for medical and genealogical data by private companies. “We predict,” Plomin and behavioural psychologist Sophie von Stumm wrote in a paper published this January, “that IQ GPSs will become routinely available from direct-to-consumer companies.” They say that a GPS analysis – not just for intelligence but for other traits – can be conducted at a cost of less than $100 per person.
The era of genetic forecasting of intelligence and ability is, then, already upon us. We now need to grapple with what that might mean for educational policy. “I believe that GPSs will be a real game-changer for education and provide a realistic and practical way of using genetics in the classroom,” says Emily Smith-Woolley, a researcher with Plomin at King’s College. But how? Nothing here is obvious, for the same reason that no scientific discovery implies moral inevitabilities: as David Hume put it, there is a difference between is and ought. “Genetic research has no necessary policy implications,” says Smith-Woolley. “What policymakers wish to do with the research is a judgement based on values they do or do not class as important.”
That helps presumably to explain why those with left-leaning inclinations, such as Plomin and Asbury, want to see our understanding of genes and intelligence used to level the playing field by applying a knowledge of children’s genetic potential to tailor their educational regimes, rather than persisting with a one-size-fits-all approach.
Toby Young, on the other hand, rejects such notions and favours a sink-or-swim approach that will (he believes) let the most able rise to the top: a philosophy far more suited to the instincts of the right. The correct approach, he argues, is simply to introduce “all children to the best that has been thought and said” and teach them “to value logic and reason”. And, one supposes, to pull their socks up.
I’ll hazard a guess that most people, at least among New Statesman readers, will feel sympathetic to the idea of finding ways to maximise every child’s potential. This would not be about the vague and contested notion of “learning styles”, but a more rigorous analysis of how certain genetic profiles respond better to particular types of problem or environment.
“At the moment we are detecting ‘problems’ only when they are visible, and at that point they can be detrimental for the child and hard to treat,” says Smith-Woolley. “Genetics offers the potential for predicting and preventing. For example, from birth we might be able to tell if a child has many genetic variants associated with having dyslexia. So why not intervene straight away, with proven strategies, before a problem emerges?” Whether such a scheme could work for more subtle aspects of intelligence and learning – whether we could realistically and reliably use genes alone to predict them, and then tailor learning strategies to have an impact – remains far from clear.
Moreover, educationalists already know a great deal about what works in education and what doesn’t, just as good teachers are attuned to the needs of a child. In their 2014 book G is for Genes, Asbury and Plomin make several sensible suggestions on education policy; but all of them – giving struggling children support without belabouring labels, teaching “thinking skills”, personalising and broadening the curriculum – could have been made without recourse to gene-based arguments. Might a fixation on genes be a red herring when there’s much more in education that we could fix now to far greater effect? Do we really need yet another way of testing and classifying children?
Asbury and Plomin say that eventually we will have a device that cheaply and quickly analyses a child’s DNA – what they call a “Learning Chip” – to make a reliable genetic prediction of “heritable differences between children in terms of their cognitive ability and academic achievement”. This idea will send a chill down the spines of many parents, who might fear that children will be branded for success or failure from birth.
Yet, according to Stuart Ritchie, some studies have shown that when IQ tests are used in this way they may identify more bright children among disadvantaged and ethnic minorities than teachers do. Even with the best will in the world, teachers may have cognitive biases that could influence the assessment of such groups. An objective test of academic potential based on a readout of a child’s genes might help to avoid such ingrained prejudices. And discrepancies between prediction and outcome could flag up cases where children are being held back by circumstance, or could help us learn from children who excel despite apparently unexceptional genetic endowment.
Plomin, Asbury, Smith-Woolley and their co-workers – Toby Young is a co-author on the paper too – have recently caused a stir with another demonstration of how genetic analysis may inform educational practice. Using GPSs from nearly 5,000 pupils, the report assesses how exam results from different types of school – non-selective state, selective state grammar, and private – are correlated with gene-based estimates of ability for the different pupil sets. The results might give pause for thought to parents stumping up eye-watering school fees: the distribution of exam results at age 16 could be almost wholly explained by heritable differences, with less than 1 per cent being due to the type of schooling received. In other words, as far as academic achievement is concerned, selective schools seem to add next to nothing to the inherent abilities of their pupils. Again, politics informs conclusions. For the Conservative peer and science writer Matt Ridley this research affirms the futility of the left’s desire to “wish away” the role of genes in ability. For Asbury it shows that there is nothing to commend grammar schools, which merely cream off the best pupils without enhancing their innate capabilities.
All the same, Asbury avers that genetic assessment will only ever be an accessory to, and not a replacement for, existing methods of teaching and evaluation. “While genetic information can’t tell us everything,” she says, “it can indicate risk and might catch some kids that other indices, focused on more economic measures, miss.”
Those “economic measures” alert us to one of the most controversial issues: whether the well-established correlation between socioeconomic status (SES) and measures of intelligence or achievement has a genetic component. Obviously there’s a strong environmental influence – rich kids go to the best schools, middle-class families have the resources to help with homework and go on cultural visits – but is that the whole story? To put it bluntly, might some children remain socially immobile because of their intelligence-linked genes?
It’s an uncomfortable thought, but the evidence seems clear: “SES is partly heritable,” Asbury and Plomin say. Genes can explain 40 per cent of the variability in people’s job-related status, and 30 per cent of income differences. In a 2016 study using GPSs, Plomin and colleague Eva Krapohl found that about half of the correlation between educational achievement and SES of British 16-year-olds could be ascribed to genetic factors.
Put in everyday terms, this isn’t surprising. People with genetic learning disabilities face bigger obstacles than the rest of us to becoming socially and economically secure, while very smart people from poor families have a better chance of climbing the ladder. Still, it’s disturbing to see it spelt out in hard data: social mobility is not all a question of inequality of opportunity. Our social structures may well exacerbate these genetic influences – for example, in terms of how we choose to award status.
“We prioritise academic goals such as university entry to such an extent that good goals that are less ‘intelligence-loaded’ are not encouraged,” says Asbury, “and the children for whom they would be a good fit, leading to life satisfaction, pride, fulfilment, happiness, are under-nurtured.” Psychologist Wendy Johnson, a sceptic about how useful genetics can be in education, concurs with that sentiment: “A big reason intelligence test scores are so associated with all the ‘good things’ in life is because we reward its display.”
With unerring instinct, Toby Young seized on the most inflammatory way to frame this discussion: bring up the E-word. But he did so not quite in the way you might think. To read some media reports of his 2015 article on “progressive eugenics”, you might imagine he was advocating eradication of the IQ-deficient poor. On the contrary, he was pointing to the possibility that de facto eugenics might arrive soon in the form of people using genetic screening of embryos in IVF to select for those with the best intelligence profile. When such technology arrives, said Young, it should be made available freely to poorer people to avoid a widening divide in intelligence between the haves and have-nots. Indeed, he said, it should then be welcomed as a means of raising the intelligence of the whole of society – surely a morally valid goal?
Is that scientifically possible, though? With intelligence thinly spread across so many genes, many of which have other functions too, is it realistic to think of selecting for intelligence? That’s not clear. “Any form of eugenics is nonsensical from a scientific view, as well as being abhorrent from a social, ethical and moral point of view,” says Asbury. But Ritchie points out that some intelligence-linked genes also relate to other characteristics we might consider beneficial, such as reduced chance of depression, obesity and schizophrenia. He also says that some rough-and-ready estimates suggest “you could get a pretty good benefit” in intelligence (on average) from selection.
Embryo selection for intelligence is illegal in the UK under current regulations. But it’s unlikely to be made illegal everywhere in the world. Besides, Ritchie adds, in the West we already permit some degree of intelligence selection in reproduction – for example by licensing sperm or egg banks stocked by Ivy League graduates, and conversely by allowing for Down’s syndrome screening.
The irony with the furore over Young’s eugenics musings, says Ritchie, is that moral philosophers and bioethicists have already been discussing these issues for a long time. That’s not to exonerate Young but to say that the debate would be better served by turning to more serious minds than those of incontinently provocative liberal-goaders. It’s a debate we can’t shirk. “I feel a sense of anxiety that we’re not having it already,” Ritchie says.
In this fraught arena, we each need to place our cards on the table. As I watch my daughters’ local state schools work wonders with a pupil intake of hugely mixed ability and background, I can plainly see how significantly a child’s environment, such as the family circumstances and teacher’s skills, can impact on his or her attainment. So I believe that educational outcomes are partly determined by circumstances. At the same time, as a child of an unprivileged lower-middle-class family who found himself with an anomalously high IQ – for which, unearned and unsought, I feel neither pride nor embarrassment – I can see what advantages a lucky roll of the DNA dice can bring.
If research on genes and intelligence helps both to reduce the injustices of environment and release the full potential of every child, I would welcome its consequences. It is by no means certain that it will do either; a possible outcome is that it becomes an unwelcome distraction from addressing immediate, soluble problems in education, and that it might even exacerbate inequality. I do believe, though, that collectively we can and must decide which outcomes we want – and that the first step is to look without prejudice at the facts. 
How is intelligence measured?
The notion of an “intelligence quotient” (IQ) was introduced over a century ago as the ratio of mental age (in terms of intelligence) to chronological age. A ten-year-old child with an IQ of 120 has a mental age of 12, say. But there are all sorts of questions about what that means.
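The classic “ratio IQ” described above is simple arithmetic: mental age divided by chronological age, multiplied by 100. A minimal sketch (an illustrative calculation only; modern tests instead use deviation IQ, standardised against a population mean of 100):

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Historical ratio IQ: (mental age / chronological age) * 100.

    Note: modern IQ scores are deviation-based, not ratio-based;
    this reproduces only the century-old definition in the text.
    """
    return mental_age / chronological_age * 100


# The ten-year-old from the text, with a mental age of 12:
print(ratio_iq(12, 10))  # 120.0
```

The ratio definition breaks down for adults (mental age plateaus while chronological age keeps rising), which is one reason it was abandoned in favour of deviation scoring.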
After all, IQ testing can be coached, IQ changes over time, and average IQ has been increasing over time. “Intelligence” is here in any case a somewhat emotive, prejudicial and, arguably, narrow term for what IQ is meant to measure, which is general cognitive ability. Yet what the notion of IQ reflects is the well-established fact that people who score well in one type of cognitive test tend to do well in others: there’s something generalised about such abilities.
The flaws of IQ testing have been well rehearsed, not least the accusation that it is culturally biased. And it hasn’t yet fully expunged the stain of its use to guide ideas about eugenic sterilisation in the UK and the US in the early 20th century. But IQ seems to measure something meaningful. There are, for example, clear correlations between people’s IQ scores and their academic attainment, as well as their success in later life and their general well-being. One response is: big deal. Our culture, you might argue, has simply elected to reward those aspects of intelligence that IQ measures, so it’s a self-fulfilling prophecy.
IQ tests might tap a host of cognitive abilities, but not qualities such as empathy or loyalty that carry less guarantee of reward. Studies of genes and intelligence should not, then, be divorced from a much wider debate about what gets valued and nurtured in school and in life. The University of York psychologist Kathryn Asbury agrees with those criticisms, but she believes nevertheless that IQ is a worthwhile metric. “To my mind it is the jewel in psychology’s currently rather tarnished crown. It is reliable, robust, stable over decades and predictive of most of the things we care about.”
And it’s not just about measuring how good you are at spatial puzzles and mental arithmetic. “IQ correlates with other aspects of a person such as personality or motivation, and these factors are likely to make a difference to education and life outcomes, too.” The problem is not the use of IQ testing but how it is interpreted. IQ, Asbury and Robert Plomin say, is “just one predictor of achievement – albeit a strong one”.
Philip Ball’s most recent book is “Beyond Weird: Why Everything You Thought You Knew About Quantum Physics is Different” (Bodley Head)