
The Age of Confusion: Mass Manipulation & Propaganda - Part One




Ancient Origins is dedicated to breaking through the miasma of encrusted misconceptions that stand between any researcher and the truth concerning our real origins. Books such as 'Forbidden Archeology' and many others have recently emerged to clear the clutter of assumptions made by tenured establishment scholars with certain agendas.

The idea of manipulating public opinion, beliefs, and habitual consumption is very old. Using the fear of death, religions have been duping us for centuries. The Jesuits were experts. We should not be surprised that more recent and intrusive ways have been found to influence and herd the masses.

Controlling our minds with Propaganda

In 1928, Edward L. Bernays (a nephew of Sigmund Freud) wrote a small book entitled 'Propaganda', in which he delineates how to organize mass chaos and how propaganda can be used by an elite few to start wars and influence business, politics, and literally every aspect of our lives. The book is freely available online as a PDF. The final chapter is entitled 'The Mechanics of Propaganda'. Here is an example:

The conscious and intelligent manipulation of the organized habits and opinions of the masses is an important element in democratic society. Those who manipulate this unseen mechanism of society constitute an invisible government, which is the true ruling power of our country.
...We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of.
Chapter I: ORGANIZING CHAOS

And from an introduction to the published edition:

That propaganda easily seduces even those whom it most horrifies is a paradox that Bernays grasped completely; and it is one that we must try at last to understand, if we want to change the world that Edward Bernays, among others, made for us.
- Mark Crispin Miller, NYC 2004; Ig Publishing edition.

An important book on propaganda, Taking the Risk out of Democracy: Corporate Propaganda versus Freedom and Liberty by Alex Carey, was published in 1997 by the University of Illinois Press.

While this might not seem exactly metaphysical, my usual area of study, I believe it is very important for us all to understand the mechanics of propaganda and manipulation. Alex Carey's book is key to learning how Americans have been lied to and manipulated, and it thus fits into the predictions from the ancient Sanskrit text the Linga Purana concerning our current Age of Confusion, the Kali Yuga:

* People will prefer to choose false ideas.

* Base [low-minded] men who have gained a certain amount of learning (without having the virtues necessary for its use) will be esteemed as sages.

* Thieves will become kings, and kings will be the thieves.

* Rulers will confiscate property and use it badly. They will cease to protect the people.

'Taking the Risk out of Democracy: Corporate Propaganda versus Freedom and Liberty' by Alex Carey sheds some light on the demons in the closet of the United States of America.

As Mr. Carey informs us, "The common man…has never been so confused, mystified and baffled; his most intimate conceptions of himself, of his needs, and indeed the very nature of human nature, have been subject to skilled manipulation and construction in the interests of corporate efficiency and profit."

This book tells us that we the American people have been subjected to a 75-year-long, multi-billion-dollar intentional assault on our freedom to think and to choose.

... propaganda techniques have been developed and deployed (in the United States)... to control and deflect the purposes of the domestic electorate in a democratic country in the interests of the privileged segments of that society.

What is propaganda?

Propaganda is the management of collective attitudes by the manipulation of significant symbols... Collective attitudes are amenable to many modes of alteration… intimidation... economic coercion... drill. But their arrangement and rearrangement occurs principally under the impetus of significant symbols; and the technique of using significant symbols for this purpose is propaganda.
(Lasswell, Berelson, and Janowitz 1953: 776-80)

These significant symbols are the catch phrases by which we human beings can be aroused to anger, to go to war, or merely to consume. Phrases like the American Way, the Free Enterprise System, the American Dream, and the global economy are meant to empower our faith — as opposed to creeping socialism, the red menace, and a national threat.

Significant symbols are "symbols with real power over emotional reactions, ideally symbols of the Sacred and the Satanic."

People are polarized by these symbols. They see life in terms of good and bad, black and white, and thus are more easily manipulated. The 'enemy' out there may indeed seem evil. But in the solitude of our own hearts, we know that we are all a mix of both. None of us is so clearly saint or sinner. Instead of emotionally polarizing we could have a dialogue, a discussion; and yet, it seems we can be manipulated by propaganda into thinking almost anything.

Alex Carey suggests that we Americans might be the most brainwashed people on the planet! One professor in Carey's book, Professor Harwood Childs, states, "Americans are the most propagandized people of any nation."

I know you are thinking, "Hey! Only the bad guys use propaganda. Only our enemies use propaganda." And you are right, they do. But as we are now learning, the 'bad' guys are sometimes right in our own back yard and we don't even see them.

Who are these 'geniuses' that believe they have the right to manipulate our thinking?

In the early days of World War I, we the American people - like any intelligent group of human beings - didn't want to go to war. So we had to be convinced, or coerced, and this was done very effectively by a campaign launched by President Woodrow Wilson, Walter Lippmann (an eminent journalist), and Edward Bernays (who just happened to be the nephew of Sigmund Freud). I can always remember the Bernays name because it sounds like that really fattening butter sauce.

Lippmann and Bernays were brilliant at brainwashing.

Bernays is famous for saying, "If we understand the mechanisms and motives of the group mind, it is now possible to control and regiment the masses according to our will without their knowing it." [from 'Toxic Sludge Is Good for You!', Common Courage Press]. Bernays called this 'engineering consent.'

I don't know about you, but I sure like to spend my weekends thinking about controlling the group mind. What are these weirdos thinking? The reason none of us have known that these things are going on is because none of us is capable of thinking like this! We have better ways to occupy our time.

The World War I propaganda campaign of Mr. Lippmann and Mr. Bernays "produced within six months so intense an anti-German hysteria as to permanently impress American business (and Adolf Hitler, among others) with the potential of large scale propaganda to control public opinion."

Bernays found a very practical use for his Uncle Sigmund's science of psychology. "When the war ended, Bernays later wrote, business realized that the general public could now be harnessed to their cause as it had been harnessed to the war, to the national cause." (Alex Carey).

This has been going on in the USA since before most of us were born, and propaganda has been used since the beginning of written history. Before metaphysics, I studied art and art history and I have concluded that much of what we consider to be art is in fact propaganda, wonderful beauty designed to intimidate and instruct. Think about those big statues of gods, rulers and tyrants.

Propaganda has become a profession

Today we have far more efficient means of distributing propaganda than ever before. The television, the Internet, and the media in general have made it easy for the masters of ‘spin’ and the public relations firms with their armies of lawyers, lobbyists, and paid-for-scientists to tell us what to think so that we will all be good little consumers. The true religion of the West is consumerism, and we are working night and day to spread that religion to our brothers and sisters around the world.

Every man, woman and child on the planet now has a right to shop at a mall and eat burgers & fries, whether they want to or not. Was that in the Bill of Rights? Oh, yes, of course "the pursuit of happiness" is - having more things! Or as one journalist noted, "When the going gets tough, the tough go shopping."

Contempt for us...

In 1927, Harold Lasswell wrote "Propaganda Technique in the World War" and suggested that, "familiarity with the behavior of the ruling public (meaning those who had so easily succumbed to the propaganda) has bred contempt... as a consequence, despondent democrats turned elitist, no longer trusting intelligent public opinion, and therefore should themselves determine how to make up the public mind, how to bamboozle and seduce in the name of the public good..."

As Alex Carey points out, "propaganda has become a profession. The modern world is busy developing a corps of men & women who do nothing but study the ways and means of changing minds or binding minds to their convictions."


The Era of Fake Video Begins

The digital manipulation of video may make the current era of “fake news” seem quaint.

In a dank corner of the internet, it is possible to find actresses from Game of Thrones or Harry Potter engaged in all manner of sex acts. Or at least, to the world, the carnal figures look like those actresses, and the faces in the videos are indeed their own. Everything south of the neck, however, belongs to different women. An artificial intelligence has almost seamlessly stitched the familiar visages into pornographic scenes, one face swapped for another. The genre is one of the cruelest, most invasive forms of identity theft invented in the internet era. At the core of the cruelty is the acuity of the technology: A casual observer can't easily detect the hoax.

This development, which has been the subject of much hand-wringing in the tech press, is the work of a programmer who goes by the nom de hack “deepfakes.” And it is merely a beta version of a much more ambitious project. One of deepfakes’s compatriots told Vice’s Motherboard site in January that he intends to democratize this work. He wants to refine the process, further automating it, which would allow anyone to transpose the disembodied head of a crush or an ex or a co-worker into an extant pornographic clip with just a few simple steps. No technical knowledge would be required. And because academic and commercial labs are developing even more-sophisticated tools for non-pornographic purposes—algorithms that map facial expressions and mimic voices with precision—the sordid fakes will soon acquire even greater verisimilitude.


Propaganda Is Directing Us Leftward

American conservatives are by and large clueless about propaganda methods and tactics. And it shows. There are virtually no conservative social psychologists around. You'd think that once a liberal social psychologist hits the public over the head with this fact, some on the Right would take notice and at least try to get clued in.

Meanwhile, the Left has been employing social psychology and depth psychology on the masses for decades. President Obama’s campaign staff was filled with social psychologists. In this context, those who believe conservatives can subsist on reason and logic alone are kidding themselves. It’s no wonder GOP leaders are caving on so many principles, and being absorbed so easily into the Left’s machine.

A lot of people are scratching their heads today, wondering how life got to be so surreal, so fast in the United States of America. Based on the silencing tactics revealed by the LGBT lobby, many observers are likely now thinking: “Gee, I thought marriage equality was merely a gay rights movement. I didn’t realize that fascism was part of that package.” The Great Unraveling continues at a rapid clip when slipping on a pronoun in these days of transgender rule could cost you your career or earn you massive social media rallies chanting “hater” at you.

Even benign reminders of the First Amendment—embodied in Religious Freedom Restoration Acts—are quickly dispatched by mob hysteria. One day a supposedly principled leader like Indiana Gov. Mike Pence promotes the RFRA, and the next day he folds and essentially signs on with the mob.

There seem to be few independent thinkers left. But even they don’t seem to know what hit them. A woman gets banned by her gym and labelled a bigot because she told management that a man—who she only later learned “identified as female”—entered the locker room while she was getting undressed. Comedians who dare tread into trans territory are shut down. Never before have the media and pop culture dictated in such a draconian manner how each and every one of us is supposed to think about identity. Our own identity.

The list goes on. The unrest and rioting from Ferguson to Baltimore seem to be happening on cue also, with media propaganda that urges it on. There is no real debate on the merits of policies that depend on a blind faith in man-made global warming: those who disagree are labelled “deniers.”


The Future of Truth and Misinformation Online

In late 2016, Oxford Dictionaries selected “post-truth” as the word of the year, defining it as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

The 2016 Brexit vote in the United Kingdom and the tumultuous U.S. presidential election highlighted how the digital age has affected news and cultural narratives. New information platforms feed the ancient instinct people have to find information that syncs with their perspectives: A 2016 study that analyzed 376 million Facebook users’ interactions with over 900 news outlets found that people tend to seek information that aligns with their views.

This makes many vulnerable to accepting and acting on misinformation. For instance, after fake news stories in June 2017 reported that Ethereum's founder Vitalik Buterin had died in a car crash, the cryptocurrency's market value was reported to have dropped by $4 billion.

Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to.
Tom Rosenstiel

When BBC Future Now interviewed a panel of 50 experts in early 2017 about the “grand challenges we face in the 21st century,” many named the breakdown of trusted information sources. “The major new challenge in reporting news is the new shape of truth,” said Kevin Kelly, co-founder of Wired magazine. “Truth is no longer dictated by authorities, but is networked by peers. For every fact there is a counterfact and all these counterfacts and facts look identical online, which is confusing to most people.”

Americans worry about that: A Pew Research Center study conducted just after the 2016 election found 64% of adults believe fake news stories cause a great deal of confusion and 23% said they had shared fabricated political stories themselves – sometimes by mistake and sometimes intentionally.

The question arises, then: What will happen to the online information environment in the coming decade? In summer 2017, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large canvassing of technologists, scholars, practitioners, strategic thinkers and others, asking them to react to this framing of the issue:

The rise of “fake news” and the proliferation of doctored narratives that are spread by humans and bots online are challenging publishers and platforms. Those trying to stop the spread of false information are working to design technical and human systems that can weed it out and minimize the ways in which bots and other schemes spread lies and misinformation.

The question: In the next 10 years, will trusted methods emerge to block false narratives and allow the most accurate information to prevail in the overall information ecosystem? Or will the quality and veracity of information online deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas?

Respondents were then asked to choose one of the following answer options:

The information environment will improve – In the next 10 years, on balance, the information environment will be IMPROVED by changes that reduce the spread of lies and other misinformation online.

The information environment will NOT improve – In the next 10 years, on balance, the information environment will NOT BE improved by changes designed to reduce the spread of lies and other misinformation online.

Some 1,116 responded to this nonscientific canvassing: 51% chose the option that the information environment will not improve, and 49% said the information environment will improve. (See “About this canvassing of experts” for details about this sample.) Participants were next asked to explain their answers. This report concentrates on these follow-up responses.

Their reasoning revealed a wide range of opinions about the nature of these threats and the most likely solutions required to resolve them. But the overarching and competing themes were clear: Those who do not think things will improve felt that humans mostly shape technology advances to their own, not-fully-noble purposes and that bad actors with bad motives will thwart the best efforts of technology innovators to remedy today’s problems.

And those who are most hopeful believed that technological fixes can be implemented to bring out the better angels guiding human nature.

More specifically, the 51% of these experts who expect things will not improve generally cited two reasons:

The fake news ecosystem preys on some of our deepest human instincts: Respondents said humans’ primal quest for success and power – their “survival” instinct – will continue to degrade the online information environment in the next decade. They predicted that manipulative actors will use new digital tools to take advantage of humans’ inbred preference for comfort and convenience and their craving for the answers they find in reinforcing echo chambers.

Our brains are not wired to contend with the pace of technological change: These respondents said the rising speed, reach and efficiencies of the internet and emerging online applications will magnify these human tendencies and that technology-based solutions will not be able to overcome them. They predicted a future information landscape in which fake information crowds out reliable information. Some even foresaw a world in which widespread information scams and mass manipulation cause broad swaths of the public to simply give up on being informed participants in civic life.

The 49% of these experts who expect things to improve generally inverted that reasoning:

Technology can help fix these problems: These more hopeful experts said the rising speed, reach and efficiencies of the internet, apps and platforms can be harnessed to rein in fake news and misinformation campaigns. Some predicted better methods will arise to create and promote trusted, fact-based news sources.

It is also human nature to come together and fix problems: The hopeful experts in this canvassing took the view that people have always adapted to change and that this current wave of challenges will also be overcome. They noted that misinformation and bad actors have always existed but have eventually been marginalized by smart people and processes. They expect well-meaning actors will work together to find ways to enhance the information environment. They also believe better information literacy among citizens will enable people to judge the veracity of material content and eventually raise the tone of discourse.

The majority of participants in this canvassing wrote detailed elaborations on their views. Some chose to have their names connected to their answers; others opted to respond anonymously. These findings do not represent all possible points of view, but they do reveal a wide range of striking observations.

Respondents collectively articulated several major themes tied to those insights, which are explained in the sections below. Several longer additional sets of responses tied to these themes follow that summary.

The following section presents an overview of the themes found among the written responses, including a small selection of representative quotes supporting each point. Some comments are lightly edited for style or length.

Theme 1: The information environment will not improve: The problem is human nature

Most respondents who expect the environment to worsen said human nature is at fault. For instance, Christian H. Huitema, former president of the Internet Architecture Board, commented, “The quality of information will not improve in the coming years, because technology can’t improve human nature all that much.”

These experts predicted that the problem of misinformation will be amplified because the worst side of human nature is magnified by bad actors using advanced online tools at internet speed on a vast scale.

The quality of information will not improve in the coming years, because technology can’t improve human nature all that much.
Christian H. Huitema

Tom Rosenstiel, author, director of the American Press Institute and senior fellow at the Brookings Institution, commented, “Whatever changes platform companies make, and whatever innovations fact checkers and other journalists put in place, those who want to deceive will adapt to them. Misinformation is not like a plumbing problem you fix. It is a social condition, like crime, that you must constantly monitor and adjust to. Since as far back as the era of radio and before, as Winston Churchill said, ‘A lie can go around the world before the truth gets its pants on.’”

Michael J. Oghia, an author, editor and journalist based in Europe, said he expects a worsening of the information environment due to five things: “1) The spread of misinformation and hate 2) Inflammation, sociocultural conflict and violence 3) The breakdown of socially accepted/agreed-upon knowledge and what constitutes ‘fact.’ 4) A new digital divide of those subscribed (and ultimately controlled) by misinformation and those who are ‘enlightened’ by information based on reason, logic, scientific inquiry and critical thinking. 5) Further divides between communities, so that as we are more connected we are farther apart. And many others.”

Leah Lievrouw, professor in the department of information studies at the University of California, Los Angeles, observed, “So many players and interests see online information as a uniquely powerful shaper of individual action and public opinion in ways that serve their economic or political interests (marketing, politics, education, scientific controversies, community identity and solidarity, behavioral ‘nudging,’ etc.). These very diverse players would likely oppose (or try to subvert) technological or policy interventions or other attempts to ensure the quality, and especially the disinterestedness, of information.”

Subtheme: More people = more problems. The internet’s continuous growth and accelerating innovation allow more people and artificial intelligence (AI) to create and instantly spread manipulative narratives

While propaganda and the manipulation of the public via falsehoods is a tactic as old as the human race, many of these experts predicted that the speed, reach and low cost of online communication plus continuously emerging innovations will magnify the threat level significantly. A professor at a Washington, D.C.-area university said, “It is nearly impossible to implement solutions at scale – the attack surface is too large to be defended successfully.”

Jerry Michalski, futurist and founder of REX, replied, “The trustworthiness of our information environment will decrease over the next decade because: 1) It is inexpensive and easy for bad actors to act badly 2) Potential technical solutions based on strong ID and public voting (for example) won’t quite solve the problem and 3) real solutions based on actual trusted relationships will take time to evolve – likely more than a decade.”

It is nearly impossible to implement solutions at scale – the attack surface is too large to be defended successfully.
Anonymous professor

An institute director and university professor said, “The internet is the 21st century’s threat of a ‘nuclear winter,’ and there’s no equivalent international framework for nonproliferation or disarmament. The public can grasp the destructive power of nuclear weapons in a way they will never understand the utterly corrosive power of the internet to civilized society, when there is no reliable mechanism for sorting out what people can believe to be true or false.”

Bob Frankston, internet pioneer and software innovator, said, “I always thought that ‘Mein Kampf’ could be countered with enough information. Now I feel that people will tend to look for confirmation of their biases and the radical transparency will not shine a cleansing light.”

David Harries, associate executive director for Foresight Canada, replied, “More and more, history is being written, rewritten and corrected, because more and more people have the ways and means to do so. Therefore there is ever more information that competes for attention, for credibility and for influence. The competition will complicate and intensify the search for veracity. Of course, many are less interested in veracity than in winning the competition.”

Glenn Edens, CTO for technology reserve at PARC, a Xerox company, commented, “Misinformation is a two-way street. Producers have an easy publishing platform to reach wide audiences and those audiences are flocking to the sources. The audiences typically are looking for information that fits their belief systems, so it is a really tough problem.”

Subtheme: Humans are by nature selfish, tribal, gullible convenience seekers who put the most trust in that which seems familiar

The respondents who supported this view noted that people’s actions – from consciously malevolent and power-seeking behaviors to seemingly more benign acts undertaken for comfort or convenience – will work to undermine a healthy information environment.

People on systems like Facebook are increasingly forming into ‘echo chambers’ of those who think alike. They will keep unfriending those who don’t, and passing on rumors and fake news that agrees with their point of view.
Starr Roxanne Hiltz

An executive consultant based in North America wrote, “It comes down to motivation: There is no market for the truth. The public isn’t motivated to seek out verified, vetted information. They are happy hearing what confirms their views. And people can gain more creating fake information (both monetary and in notoriety) than they can keeping it from occurring.”

Serge Marelli, an IT professional who works on and with the Net, wrote, “As a group, humans are ‘stupid.’ It is ‘group mind’ or a ‘group phenomenon’ or, as George Carlin said, ‘Never underestimate the power of stupid people in large groups.’ Then, you have Kierkegaard, who said, ‘People demand freedom of speech as a compensation for the freedom of thought which they seldom use.’ And finally, Euripides said, ‘Talk sense to a fool and he calls you foolish.’”

Starr Roxanne Hiltz, distinguished professor of information systems and co-author of the visionary 1970s book “The Network Nation,” replied, “People on systems like Facebook are increasingly forming into ‘echo chambers’ of those who think alike. They will keep unfriending those who don’t, and passing on rumors and fake news that agrees with their point of view. When the president of the U.S. frequently attacks the traditional media and anybody who does not agree with his ‘alternative facts,’ it is not good news for an uptick in reliable and trustworthy facts circulating in social media.”

Nigel Cameron, a technology and futures editor and president of the Center for Policy on Emerging Technologies, said, “Human nature is not EVER going to change (though it may, of course, be manipulated). And the political environment is bad.”

Ian O’Byrne, assistant professor at the College of Charleston, replied, “Human nature will take over as the salacious is often sexier than facts. There are multiple information streams, public and private, that spread this information online. We can also not trust the businesses and industries that develop and facilitate these digital texts and tools to make changes that will significantly improve the situation.”

Greg Swanson, media consultant with ITZonTarget, noted, “The sorting of reliable versus fake news requires a trusted referee. It seems unlikely that government can play a meaningful role as this referee. We are too polarized. And we have come to see the television news teams as representing divergent points of view, and, depending on your politics, the network that does not represent your views is guilty of ‘fake news.’ It is hard to imagine a fair referee that would be universally trusted.”

Richard Lachmann, professor of sociology at the State University of New York at Albany, replied, “Even though systems [that] flag unreliable information can and will be developed, internet users have to be willing to take advantage of those warnings. Too many Americans will live in political and social subcultures that will champion false information and encourage use of sites that present such false information.”

There were also those among these expert respondents who said inequities, perceived and real, are at the root of much of the misinformation being produced.

A professor at MIT observed, “I see this as a problem with a socioeconomic cure: Greater equity and justice will achieve much more than a bot war over facts. Controlling ‘noise’ is less a technological problem than a human problem, a problem of belief, of ideology. Profound levels of ungrounded beliefs about things both sacred and profane existed before the branding of ‘fake news.’ Belief systems – not ‘truths’ – help to cement identities, forge relationships, explain the unexplainable.”

Julian Sefton-Green, professor of new media education at Deakin University in Australia, said, “The information environment is an extension of social and political tensions. It is impossible to make the information environment a rational, disinterested space; it will always be susceptible to pressure.”

A respondent affiliated with Harvard University’s Berkman Klein Center for Internet & Society wrote, “The democratization of publication and consumption that the networked sphere represents is too expansive for there to be any meaningful improvement possible in terms of controlling or labeling information. People will continue to cosset their own cognitive biases.”

Subtheme: In existing economic, political and social systems, the powerful corporate and government leaders most able to improve the information environment profit most when it is in turmoil

A large number of respondents said the most highly motivated actors, including those in the worlds of business and politics, generally have little interest in “fixing” the proliferation of misinformation. Those players will be a key driver in the worsening of the information environment in the coming years and/or the lack of any serious attempts to effectively mitigate the problem.

Scott Shamp, a dean at Florida State University, commented, “Too many groups gain power through the proliferation of inaccurate or misleading information. When there is value in misinformation, it will rule.”

Big political players have just learned how to play this game. I don’t think they will put much effort into eliminating it.
Zbigniew Łukasiak

Alex “Sandy” Pentland, member of the U.S. National Academy of Engineering and the World Economic Forum, commented, “We know how to dramatically improve the situation, based on studies of political and similar predictions. What we don’t know is how to make it a thriving business. The current [information] models are driven by clickbait, and that is not the foundation of a sustainable economic model.”

Stephen Downes, researcher with the National Research Council of Canada, wrote, “Things will not improve. There is too much incentive to spread disinformation, fake news, malware and the rest. Governments and organizations are major actors in this space.”

An anonymous respondent said, “Actors can benefit socially, economically, politically by manipulating the information environment. As long as these incentives exist, actors will find a way to exploit them. These benefits are not amenable to technological resolution as they are social, political and cultural in nature. Solving this problem will require larger changes in society.”

A number of respondents mentioned market capitalism as a primary obstacle to improving the information environment. A professor based in North America said, “[This] is a capitalist system. The information that will be disseminated will be biased, based on monetary interests.”

Seth Finkelstein, consulting programmer and winner of the Electronic Frontier Foundation’s Pioneer Award, commented, “Virtually all the structural incentives to spread misinformation seem to be getting worse.”

A data scientist based in Europe wrote, “The information environment is built on the top of telecommunication infrastructures and services developed following the free-market ideology, where ‘truth’ or ‘fact’ are only useful as long as they can be commodified as market products.”

Zbigniew Łukasiak, a business leader based in Europe, wrote, “Big political players have just learned how to play this game. I don’t think they will put much effort into eliminating it.”

A vice president for public policy at one of the world’s foremost entertainment and media companies commented, “The small number of dominant online platforms do not have the skills or ethical center in place to build responsible systems, technical or procedural. They eschew accountability for the impact of their inventions on society and have not developed any of the principles or practices that can deal with the complex issues. They are like biomedical or nuclear technology firms absent any ethics rules or ethics training or philosophy. Worse, their active philosophy is that assessing and responding to likely or potential negative impacts of their inventions is both not theirs to do and even shouldn’t be done.”

Patricia Aufderheide, professor of communications and founder of the Center for Media and Social Impact at American University, said, “Major interests are not invested enough in reliability to create new business models and political and regulatory standards needed for the shift. … Overall there are powerful forces, including corporate investment in surveillance-based business models, that create many incentives for unreliability, ‘invisible handshake’ agreements with governments that militate against changing surveillance models, international espionage at a governmental and corporate level in conjunction with mediocre cryptography and poor use of white hat hackers, poor educational standards in major industrial countries such as the U.S., and fundamental weaknesses in the U.S. political/electoral system that encourage exploitation of unreliability. It would be wonderful to believe otherwise, and I hope that other commentators will be able to convince me otherwise.”

James Schlaffer, an assistant professor of economics, commented, “Information is curated by people who have taken a step away from the objectivity that was the watchword of journalism. Conflict sells, especially to the opposition party, therefore the opposition news agency will be incentivized to push a narrative and agenda. Any safeguards will appear as a way to further control narrative and propagandize the population.”

Subtheme: Human tendencies and infoglut drive people apart and make it harder for them to agree on “common knowledge.” That makes healthy debate difficult and destabilizes trust. The fading of news media contributes to the problem

Many respondents expressed concerns about how people’s struggles to find and apply accurate information contribute to a larger social and political problem: There is a growing deficit in commonly accepted facts or some sort of cultural “common ground.” Why has this happened? They cited several reasons:

  • Online echo chambers or silos divide people into separate camps, at times even inciting them to express anger and hatred at a volume not seen in previous communications forms.
  • Information overload crushes people’s attention spans. Their coping mechanism is to turn to entertainment or other lighter fare.
  • High-quality journalism has been decimated due to changes in the attention economy.

They said these factors and others make it difficult for many people in the digital age to create and come to share the type of “common knowledge” that undergirds better and more-responsive public policy. A share of respondents said a lack of commonly shared knowledge leads many in society to doubt the reliability of everything, causing them to simply drop out of civic participation, depleting the number of active and informed citizens.

Jamais Cascio, distinguished fellow at the Institute for the Future, noted, “The power and diversity of very low-cost technologies allowing unsophisticated users to create believable ‘alternative facts’ is increasing rapidly. It’s important to note that the goal of these tools is not necessarily to create consistent and believable alternative facts, but to create plausible levels of doubt in actual facts. The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the *wrong* thing than on the ability to get people to *doubt* the right thing. The success of Donald Trump will be a flaming signal that this strategy works, alongside the variety of technologies now in development (and early deployment) that can exacerbate this problem. In short, it’s a successful strategy, made simpler by more powerful information technologies.”

Philip J. Nickel, lecturer at Eindhoven University of Technology in the Netherlands, said, “The decline of traditional news media and the persistence of closed social networks will not change in the next 10 years. These are the main causes of the deterioration of a public domain of shared facts as the basis for discourse and political debate.”

Kenneth Sherrill, professor emeritus of political science at Hunter College, City University of New York, predicted, “Disseminating false rumors and reports will become easier. The proliferation of sources will increase the number of people who don’t know who or what they trust. These people will drop out of the normal flow of information. Participation will decline as more and more citizens become unwilling/unable to figure out which information sources are reliable.”

The crisis we face about ‘truth’ and reliable facts is predicated less on the ability to get people to believe the *wrong* thing than on the ability to get people to *doubt* the right thing.
Jamais Cascio

What is truth? What is a fact? Who gets to decide? And can most people agree to trust anything as “common knowledge”? A number of respondents challenged the idea that any individuals, groups or technology systems could or should “rate” information as credible, factual, true or not.

An anonymous respondent observed, “Whatever is devised will not be seen as impartial; some things are not black and white. For other situations, facts brought up to come to a conclusion are different than other facts used by others in a situation. Each can have real facts, but it is the facts that are gathered that matter in coming to a conclusion; who will determine what facts will be considered or what is even considered a fact?”

A research assistant at MIT noted, “‘Fake’ and ‘true’ are not as binary as we would like, and – combined with an increasingly connected and complex digital society – it’s a challenge to manage the complexity of social media without prescribing a narrative as ‘truth.’”

An internet pioneer and longtime leader at ICANN said, “There is little prospect of a forcing factor that will emerge that will improve the ‘truthfulness’ of information in the internet.”

A vice president for stakeholder engagement said, “Trust networks are best established with physical and unstructured interaction, discussion and observation. Technology is reducing opportunities for such interactions and disrupting human discourse, while giving the ‘feeling’ that we are communicating more than ever.”

Subtheme: A small segment of society will find, use and perhaps pay a premium for information from reliable sources. Outside of this group “chaos will reign” and a worsening digital divide will develop

Some respondents predicted that a larger digital divide will form. Those who pursue more-accurate information and rely on better-informed sources will separate from those who are not selective enough or who do not invest either the time or the money in doing so.

There will be a sort of ‘gold standard’ set of sources, and there will be the fringe.
Anonymous respondent

Alejandro Pisanty, a professor at UNAM, the National Autonomous University of Mexico, and longtime internet policy leader, observed, “Overall, at least a part of society will value trusted information and find ways to keep a set of curated, quality information resources. This will use a combination of organizational and technological tools but above all, will require a sharpened sense of good judgment and access to diverse, including rivalrous, sources. Outside this, chaos will reign.”

Alexander Halavais, associate professor of social technologies at Arizona State University, said, “As there is value in accurate information, the availability of such information will continue to grow. However, when consumers are not directly paying for such accuracy, it will certainly mean a greater degree of misinformation in the public sphere. That means the continuing bifurcation of haves and have-nots, when it comes to trusted news and information.”

An anonymous editor and publisher commented, “Sadly, many Americans will not pay attention to ANY content from existing or evolving sources. It’ll be the continuing dumbing down of the masses, although the ‘upper’ cadres (educated/thoughtful) will read/see/know, and continue to battle.”

An anonymous respondent said, “There will be a sort of ‘gold standard’ set of sources, and there will be the fringe.”

Theme 2: The information environment will not improve because technology will create new challenges that can’t or won’t be countered effectively and at scale

Many who see little hope for improvement of the information environment said technology will not save society from distortions, half-truths, lies and weaponized narratives. An anonymous business leader argued, “It is too easy to create fake facts, too labor-intensive to check and too easy to fool checking algorithms.” And this response of an anonymous research scientist based in North America echoed the view of many participants in this canvassing: “We will develop technologies to help identify false and distorted information, BUT they won’t be good enough.”

In the arms race between those who want to falsify information and those who want to produce accurate information, the former will always have an advantage.
David Conrad

Paul N. Edwards, Perry Fellow in International Security at Stanford University, commented, “Many excellent methods will be developed to improve the information environment, but the history of online systems shows that bad actors can and will always find ways around them.”

Vian Bakir, professor in political communication and journalism at Bangor University in Wales, commented, “It won’t improve because of 1) the evolving nature of technology – emergent media always catches out those who wish to control it, at least in the initial phase of emergence; 2) online social media and search engine business models favour misinformation spreading; 3) well-resourced propagandists exploit this mix.”

Many who expect things will not improve in the next decade said that “white hat” efforts will never keep up with “black hat” advances in information wars. A user-experience and interaction designer said, “As existing channels become more regulated, new unregulated channels will continue to emerge.”

Subtheme: Those generally acting for themselves and not the public good have the advantage, and they are likely to stay ahead in the information wars

Many of those who expect no improvement of the information environment said those who wish to spread misinformation are highly motivated to use innovative tricks to stay ahead of the methods meant to stop them. They said certain actors in government, business and other individuals with propaganda agendas are highly driven to make technology work in their favor in the spread of misinformation, and there will continue to be more of them.

There are a lot of rich and unethical people, politicians, non-state actors and state actors who are strongly incentivized to get fake information out there to serve their selfish purposes.
Jason Hong

A number of respondents referred to this as an “arms race.” David Sarokin of Sarokin Consulting and author of “Missed Information,” said, “There will be an arms race between reliable and unreliable information.” And David Conrad, a chief technology officer, replied, “In the arms race between those who want to falsify information and those who want to produce accurate information, the former will always have an advantage.”

Jim Hendler, professor of computing sciences at Rensselaer Polytechnic Institute, commented, “The information environment will continue to change but the pressures of politics, advertising and stock-return-based capitalism rewards those who find ways to manipulate the system, so it will be a constant battle between those aiming for ‘objectiveness’ and those trying to manipulate the system.”

John Markoff, retired journalist and former technology reporter for The New York Times, said, “I am extremely skeptical about improvements related to verification without a solution to the challenge of anonymity on the internet. I also don’t believe there will be a solution to the anonymity problem in the near future.”

Scott Spangler, principal data scientist at IBM Watson Health, said technologies now exist that make fake information almost impossible to discern and flag, filter or block. He wrote, “Machine learning and sophisticated statistical techniques will be used to accurately simulate real information content and make fake information almost indistinguishable from the real thing.”

Jason Hong, associate professor at the School of Computer Science at Carnegie Mellon University, said, “Some fake information will be detectable and blockable, but the vast majority won’t. The problem is that it’s *still* very hard for computer systems to analyze text, find assertions made in the text and crosscheck them. There’s also the issue of subtle nuances or differences of opinion or interpretation. Lastly, the incentives are all wrong. There are a lot of rich and unethical people, politicians, non-state actors and state actors who are strongly incentivized to get fake information out there to serve their selfish purposes.”
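
Hong’s point about crosschecking is easy to demonstrate. The toy sketch below (the claim and the “fact base” are invented for illustration) shows that naive exact matching collapses the moment a claim is paraphrased, which is why automated fact-checking demands much deeper language understanding:

```python
# Toy illustration of why crosschecking assertions is hard for computers.
# The claim and the "fact base" are invented for illustration only.

FACT_BASE = {"the unemployment rate fell in march"}

def naive_crosscheck(claim: str) -> bool:
    """Check a claim by exact (normalized) string match against known facts."""
    return claim.lower().rstrip(".") in FACT_BASE

print(naive_crosscheck("The unemployment rate fell in March."))  # exact wording: True
print(naive_crosscheck("Joblessness declined last March."))      # same claim, paraphrased: False
```

The second call asserts the same fact in different words, yet the checker misses it entirely; real systems must resolve synonyms, context and nuance before they can even begin to verify.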

A research professor of robotics at Carnegie Mellon University observed, “Defensive innovation is always behind offensive innovation. Those wanting to spread misinformation will always be able to find ways to circumvent whatever controls are put in place.”

A research scientist for the Computer Science and Artificial Intelligence Laboratory at MIT said, “Problems will get worse faster than solutions can address, but that only means solutions are more needed than ever.”

Subtheme: Weaponized narratives and other false content will be magnified by social media, online filter bubbles and AI

Some respondents expect a dramatic rise in the manipulation of the information environment by nation-states, by individual political actors and by groups wishing to spread propaganda. Their purpose is to raise fears that serve their agendas, create or deepen silos and echo chambers, divide people and set them upon each other, and paralyze or confuse public understanding of the political, social and economic landscape.

We live in an era where most people get their ‘news’ via social media and it is very easy to spread fake news. … Given that there is freedom of speech, I wonder how the situation can ever improve.
Anonymous project leader for a science institute

This has been referred to as the weaponization of public narratives. Social media platforms such as Facebook, Reddit and Twitter appear to be prime battlegrounds. Bots are often employed, and AI is expected to be implemented heavily in the information wars to magnify the speed and impact of messaging.

A leading internet pioneer who has worked with the FCC, the UN’s International Telecommunication Union (ITU), the General Electric Co. (GE) and other major technology organizations commented, “The ‘internet-as-weapon’ paradigm has emerged.”

Dean Willis, consultant for Softarmor Systems, commented, “Governments and political groups have now discovered the power of targeted misinformation coupled to personalized understanding of the targets. Messages can now be tailored with devastating accuracy. We’re doomed to living in targeted information bubbles.”

An anonymous survey participant noted, “Misinformation will play a major role in conflicts between nations and within competing parties within nation states.”

danah boyd, principal researcher at Microsoft Research and founder of Data & Society, wrote, “What’s at stake right now around information is epistemological in nature. Furthermore, information is a source of power and thus a source of contemporary warfare.”

Peter Lunenfeld, a professor at UCLA, commented, “For the foreseeable future, the economics of networks and the networks of economics are going to privilege the dissemination of unvetted, unverified and often weaponized information. Where there is a capitalistic incentive to provide content to consumers, and those networks of distribution originate in a huge variety of transnational and even extra-national economies and political systems, the ability to ‘control’ veracity will be far outstripped by the capability and willingness to supply any kind of content to any kind of user.”

These experts noted that the public has turned to social media – especially Facebook – to get its “news.” They said the public’s craving for quick reads and tabloid-style sensationalism is what makes social media the field of choice for manipulative narratives, which are often packaged to appear like news headlines. They note that the public’s move away from more-traditional mainstream news outlets, which had some ethical standards, to consumption of social newsfeeds has weakened mainstream media organizations, making them lower-budget operations that have been forced to compete for attention by offering up clickbait headlines of their own.

An emeritus professor of communication for a U.S. Ivy League university noted, “We have lost an important social function in the press. It is being replaced by social media, where there are few if any moral or ethical guidelines or constraints on the performance of informational roles.”

A project leader for a science institute commented, “We live in an era where most people get their ‘news’ via social media and it is very easy to spread fake news. The existence of clickbait sites makes it easy for conspiracy theories to be rapidly spread by people who do not bother to read entire articles, nor look for trusted sources. Given that there is freedom of speech, I wonder how the situation can ever improve. Most users just read the headline, comment and share without digesting the entire article or thinking critically about its content (if they read it at all).”

Subtheme: The most-effective tech solutions to misinformation will endanger people’s dwindling privacy options, and they are likely to limit free speech and remove the ability for people to be anonymous online

The rise of new and highly varied voices with differing agendas and motivations might generally be considered to be a good thing. But some of these experts said the recent major successes by misinformation manipulators have created a threatening environment in which many in the public are encouraging platform providers and governments to expand surveillance. Among the technological solutions for “cleaning up” the information environment are those that work to clearly identify entities operating online and employ algorithms to detect misinformation. Some of these experts expect that such systems will act to identify perceived misbehaviors and label, block, filter or remove some online content and even ban some posters from further posting.

Increased censorship and mass surveillance will tend to create official ‘truths’ in various parts of the world.
Retired professor

An educator commented, “Creating ‘a reliable, trusted, unhackable verification system’ would produce a system for filtering and hence structuring of content. This will end up being a censored information reality.”

An eLearning specialist observed, “Any system deeming itself to have the ability to ‘judge’ information as valid or invalid is inherently biased.” And a professor and researcher noted, “In an open society, there is no prior determination of what information is genuine or fake.”

In fact, a share of the respondents predicted that the online information environment will not improve in the next decade because any requirement for authenticated identities would take away the public’s highly valued free-speech rights and allow major powers to control the information environment.

A distinguished professor emeritus of political science at a U.S. university wrote, “Misinformation will continue to thrive because of the long (and valuable) tradition of freedom of expression. Censorship will be rejected.” An anonymous respondent wrote, “There is always a fight between ‘truth’ and free speech. But because the internet cannot be regulated free speech will continue to dominate, meaning the information environment will not improve.”

But another share of respondents said that is precisely why authenticated identities – which are already operating in some places, including China – will become a larger part of information systems. A professor at a major U.S. university replied, “Surveillance technologies and financial incentives will generate greater surveillance.” A retired university professor predicted, “Increased censorship and mass surveillance will tend to create official ‘truths’ in various parts of the world. In the United States, corporate filtering of information will impose the views of the economic elite.”

The executive director of a major global privacy advocacy organization argued removing civil liberties in order to stop misinformation will not be effective, saying, “‘Problematic’ actors will be able to game the devised systems while others will be over-regulated.”

Several other respondents also cited this as a major flaw of this potential remedy. They argued against it for several reasons, including the fact that it enables even broader government and corporate surveillance and control over more of the public.

Emmanuel Edet, head of legal services at the National Information Technology Development Agency of Nigeria, observed, “The information environment will improve but at a cost to privacy.”

Bill Woodcock, executive director of the Packet Clearing House, wrote, “There’s a fundamental conflict between anonymity and control of public speech, and the countries that don’t value anonymous speech domestically are still free to weaponize it internationally, whereas the countries that do value anonymous speech must make it available to all, [or] else fail to uphold their own principle.”

James LaRue, director of the Office for Intellectual Freedom of the American Library Association, commented, “Information systems incentivize getting attention. Lying is a powerful way to do that. To stop that requires high surveillance – which means government oversight which has its own incentives not to tell the truth.”

Tom Valovic, contributor to The Technoskeptic magazine and author of “Digital Mythologies,” said encouraging platforms to exercise algorithmic controls is not optimal. He wrote: “Artificial intelligence that will supplant human judgment is being pursued aggressively by entities in the Silicon Valley and elsewhere. Algorithmic solutions to replacing human judgment are subject to hidden bias and will ultimately fail to accomplish this goal. They will only continue the centralization of power in a small number of companies that control the flow of information.”

Theme 3: The information environment will improve because technology will help label, filter or ban misinformation and thus upgrade the public’s ability to judge the quality and veracity of content

Most of the respondents who gave hopeful answers about the future of truth online said they believe technology will be implemented to improve the information environment. They noted their faith was grounded in history, arguing that humans have always found ways to innovate to overcome problems. Most of these experts do not expect there will be a perfect system – but they expect advances. A number said information platform corporations such as Google and Facebook will begin to efficiently police the environment to embed moral and ethical thinking in the structure of their platforms. They hope this will simultaneously enable the screening of content while still protecting rights such as free speech.

If there is a great amount of pressure from the industry to solve this problem (which there is), then methodologies will be developed and progress will be made … In other words, if there’s a will, there’s a way.
Adam Lella

Larry Diamond, senior fellow at the Hoover Institution and the Freeman Spogli Institute (FSI) at Stanford University, said, “I am hopeful that the principal digital information platforms will take creative initiatives to privilege more authoritative and credible sources and to call out and demote information sources that appear to be propaganda and manipulation engines, whether human or robotic. In fact, the companies are already beginning to take steps in this direction.”

An associate professor at a U.S. university wrote, “I do not see us giving up on seeking truth.” And a researcher based in Europe said, “Technologies will appear that solve the trust issues and reward logic.”

Adam Lella, senior analyst for marketing insights at comScore Inc., replied, “There have been numerous other industry-related issues in the past (e.g., viewability, invalid traffic detection, cross-platform measurement) that were seemingly impossible to solve, and yet major progress was made in the past few years. If there is a great amount of pressure from the industry to solve this problem (which there is), then methodologies will be developed and progress will be made to help mitigate this issue in the long run. In other words, if there’s a will, there’s a way.”

Subtheme: Likely tech-based solutions include adjustments to algorithmic filters, browsers, apps and plug-ins and the implementation of “trust ratings”

Many respondents who hope for improvement in the information environment mentioned ways in which new technological solutions might be implemented.

Bart Knijnenburg, researcher on decision-making and recommender systems and assistant professor of computer science at Clemson University, said, “Two developments will help improve the information environment: 1) News will move to a subscription model (like music, movies, etc.) and subscription providers will have a vested interest in culling down false narratives; 2) Algorithms that filter news will learn to discern the quality of a news item and not just tailor to ‘virality’ or political leaning.”

In order to reduce the spread of fake news, we must deincentivize it financially.
Amber Case

Laurel Felt, lecturer at the University of Southern California, said, “There will be mechanisms for flagging suspicious content and providers and then apps and plugins for people to see the ‘trust rating’ for a piece of content, an outlet or even an IP address. Perhaps people can even install filters so that, when they’re doing searches, hits that don’t meet a certain trust threshold will not appear on the list.”
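
A filter of the kind Felt describes could be sketched as follows; the trust ratings, sources and search results here are invented for illustration, and no real rating service is assumed:

```python
# Hypothetical trust-threshold filter over search results.
# Ratings and results are illustrative data, not a real service.

TRUST_RATINGS = {
    "example-news.org": 0.9,
    "rumor-mill.net": 0.2,
    "blog.example.com": 0.6,
}

def filter_by_trust(results, threshold=0.5):
    """Drop hits whose source's trust rating falls below the threshold."""
    return [r for r in results if TRUST_RATINGS.get(r["source"], 0.0) >= threshold]

hits = [
    {"title": "Budget report released", "source": "example-news.org"},
    {"title": "Shocking secret revealed!", "source": "rumor-mill.net"},
    {"title": "Analysis of the report", "source": "blog.example.com"},
]

for hit in filter_by_trust(hits):
    print(hit["title"])
```

Note that unknown sources default to a rating of zero here; whether an unrated outlet should be hidden or shown is exactly the kind of policy choice such systems would have to make explicit.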

A longtime U.S. government researcher and administrator in communications and technology sciences said, “The intelligence, defense and related U.S. agencies are very actively working on this problem and results are promising.”

Amber Case, research fellow at Harvard University’s Berkman Klein Center for Internet & Society, suggested withholding ad revenue until veracity has been established. She wrote, “Right now, there is an incentive to spread fake news. It is profitable to do so, profit made by creating an article that causes enough outrage that advertising money will follow. … In order to reduce the spread of fake news, we must deincentivize it financially. If an article bursts into collective consciousness and is later proven to be fake, the sites that control or host that content could refuse to distribute advertising revenue to the entity that created or published it. This would require a system of delayed advertising revenue distribution where ad funds are held until the article is proven as accurate or not. A lot of fake news is created by a few people, and removing their incentive could stop much of the news postings.”
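
Case’s delayed-distribution idea amounts to an escrow on ad funds. A minimal sketch, in which the class, the article IDs and the verification step are all hypothetical:

```python
# Sketch of delayed ad-revenue distribution: funds accrue in escrow per
# article and are released only after a verification outcome is known.
# Names and amounts are illustrative, not from any real ad platform.

class AdRevenueEscrow:
    def __init__(self):
        self.held = {}   # article_id -> funds awaiting verification
        self.paid = {}   # article_id -> funds released to the publisher

    def accrue(self, article_id, amount):
        """Ad revenue accumulates in escrow while the article is unverified."""
        self.held[article_id] = self.held.get(article_id, 0) + amount

    def resolve(self, article_id, verified):
        """Release funds if the article checks out; forfeit them if it is fake."""
        amount = self.held.pop(article_id, 0)
        if verified:
            self.paid[article_id] = self.paid.get(article_id, 0) + amount
            return amount
        return 0

escrow = AdRevenueEscrow()
escrow.accrue("article-123", 50)
escrow.accrue("article-123", 25)
print(escrow.resolve("article-123", verified=True))  # releases the held 75
```

The open question, of course, is who performs the `verified` judgment and how long funds may be held – the hard parts of Case’s proposal are institutional, not computational.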

Andrea Matwyshyn, a professor of law at Northeastern University who researches innovation and law, particularly information security, observed, “Software liability law will finally begin to evolve. Market makers will increasingly incorporate security quality as a factor relevant to corporate valuation. The legal climate for security research will continue to improve, as its connection to national security becomes increasingly obvious. These changes will drive significant corporate and public sector improvements in security during the next decade.”

Larry Keeley, founder of innovation consultancy Doblin, predicted technology will be improved but people will remain the same, writing, “Capabilities adapted from both bibliometric analytics and good auditing practices will make this a solvable problem. However, non-certified, compelling-but-untrue information will also proliferate. So the new divide will be between the people who want their information to be real vs. those who simply want it to feel important. Remember that quote from Roger Ailes: ‘People don’t want to BE informed, they want to FEEL informed.’ Sigh.”

Anonymous survey participants also responded:

  • “Filters and algorithms will improve to both verify raw data, separate ‘overlays’ and to correct for a feedback loop.”
  • “Semantic technologies will be able to cross-verify statements, much like meta-analysis.”
  • “The credibility history of each individual will be used to filter incoming information.”
  • “The veracity of information will be linked to how much the source is perceived as trustworthy – we may, for instance, develop a trust index and trust will become more easily verified using artificial-intelligence-driven technologies.”
  • “The work being done on things like verifiable identity and information sharing through loose federation will improve things somewhat (but not completely). That is to say, things will become better but not necessarily good.”
  • “AI, blockchain, crowdsourcing and other technologies will further enhance our ability to filter and qualify the veracity of information.”
  • “There will be new visual cues developed to help news consumers distinguish between trusted news sources and others.”
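Several of the bullet points above converge on the same mechanism: a per-source "trust index" that is nudged by verification outcomes and then used to filter incoming items. A toy version, with an arbitrary update rate and threshold chosen purely for illustration, might look like this:

```python
# Toy illustration of the "trust index" idea from the responses above:
# each source carries a credibility score nudged by fact-check outcomes,
# and items from low-scoring sources are filtered out. The update rule
# and thresholds are arbitrary assumptions for this sketch.
class TrustIndex:
    def __init__(self, default: float = 0.5, threshold: float = 0.3, rate: float = 0.1):
        self.scores: dict = {}
        self.default, self.threshold, self.rate = default, threshold, rate

    def record(self, source: str, accurate: bool) -> None:
        s = self.scores.get(source, self.default)
        target = 1.0 if accurate else 0.0
        # Exponential moving average toward the latest fact-check verdict.
        self.scores[source] = s + self.rate * (target - s)

    def allow(self, source: str) -> bool:
        # Unknown sources get the neutral default score.
        return self.scores.get(source, self.default) >= self.threshold

index = TrustIndex()
for _ in range(20):
    index.record("junk.example", accurate=False)
for _ in range(5):
    index.record("wire.example", accurate=True)
```

A repeatedly debunked source drifts below the threshold and is filtered, while a consistently accurate one stays above it; unknown sources start at a neutral default rather than being blocked outright.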

Subtheme: Regulatory remedies could include software liability law, required identities, unbundling of social networks like Facebook

A number of respondents believe there will be policy remedies that move beyond whatever technical innovations emerge in the next decade. They offered a range of suggestions, from regulatory reforms applied to the platforms that aid misinformation merchants to legal penalties applied to wrongdoers. Some think the threat of regulatory reform via government agencies may force the issue of required identities and the abolition of anonymity protections for platform users.

Sonia Livingstone, professor of social psychology at the London School of Economics and Political Science, replied, “The ‘wild west’ state of the internet will not be permitted to continue by those with power, as we are already seeing with increased national pressure on providers/companies by a range of means from law and regulation to moral and consumer pressures.”

Willie Currie, a longtime expert in global communications diffusion, wrote, “The apparent success of fake news on platforms like Facebook will have to be dealt with on a regulatory basis as it is clear that technically minded people will only look for technical fixes and may have incentives not to look very hard, so self-regulation is unlikely to succeed. The excuse that the scale of posts on social media platforms makes human intervention impossible will not be a defense. Regulatory options may include unbundling social networks like Facebook into smaller entities. Legal options include reversing the notion that providers of content services over the internet are mere conduits without responsibility for the content. These regulatory and legal options may not be politically possible to effect within the U.S., but they are certainly possible in Europe and elsewhere, especially if fake news is shown to have an impact on European elections.”

Sally Wentworth, vice president of global policy development at the Internet Society, warned against too much dependence upon information platform providers in shaping solutions to improve the information environment. She wrote: “It’s encouraging to see some of the big platforms beginning to deploy internet solutions to some of the issues around online extremism, violence and fake news. And yet, it feels like as a society, we are outsourcing this function to private entities that exist, ultimately, to make a profit and not necessarily for a social good. How much power are we turning over to them to govern our social discourse? Do we know where that might eventually lead? On the one hand, it’s good that the big players are finally stepping up and taking responsibility. But governments, users and society are being too quick to turn all of the responsibility over to internet platforms. Who holds them accountable for the decisions they make on behalf of all of us? Do we even know what those decisions are?”

A professor and chair in a department of educational theory, policy and administration commented, “Some of this work can be done in private markets. Being banned from social media is one obvious one. In terms of criminal law, I think the important thing is to have penalties/regulations be domain-specific. Speech can be regulated in certain venues, but obviously not in all. Federal (and perhaps even international) guidelines would be useful. Without a framework for regulation, I can’t imagine penalties.”

Theme 4: The information environment will improve, because people will adjust and make things better

Many of those who expect the information environment to improve anticipate that information literacy training and other forms of assistance will help people become more sophisticated consumers. They expect that users will gravitate toward more reliable information – and that knowledge providers will respond in kind.

When the television became popular, people also believed everything on TV was true. It’s how people choose to react and access to information and news that’s important, not the mechanisms that distribute them.
Irene Wu

Frank Kaufmann, founder and director of several international projects for peace activism and media and information, commented, “The quality of news will improve, because things always improve.” And Barry Wellman, virtual communities expert and co-director of the NetLab Network, said, “Software and people are becoming more sophisticated.”

One hopeful respondent said a change in economic incentives can bring about desired change. Tom Wolzien, chairman of The Video Call Center and Wolzien LLC, said, “The market will not clean up the bad material, but will shift focus and economic rewards toward the reliable. Information consumers, fed up with false narratives, will increasingly shift toward more-trusted sources, resulting in revenue flowing toward those more trusted sources and away from the junk. This does not mean that all people will subscribe to either scientific or journalistic method (or both), but they will gravitate toward material from the sources and institutions they find trustworthy, and those institutions will, themselves, demand methods of verification beyond those they use today.”

A retired public official and internet pioneer predicted, “1) Education for veracity will become an indispensable element of secondary school. 2) Information providers will become legally responsible for their content. 3) A few trusted sources will continue to dominate the internet.”

Irene Wu, adjunct professor of communications, culture and technology at Georgetown University, said, “Information will improve because people will learn better how to deal with masses of digital information. Right now, many people naively believe what they read on social media. When the television became popular, people also believed everything on TV was true. It’s how people choose to react and access to information and news that’s important, not the mechanisms that distribute them.”

Charlie Firestone, executive director at the Aspen Institute Communications and Society Program, commented, “In the future, tagging, labeling, peer recommendations, new literacies (media, digital) and similar methods will enable people to sift through information better to find and rely on factual information. In addition, there will be a reaction to the prevalence of false information so that people are more willing to act to assure their information will be accurate.”

Howard Rheingold, pioneer researcher of virtual communities, longtime professor and author of “Net Smart: How to Thrive Online,” noted, “As I wrote in ‘Net Smart’ in 2012, some combination of education, algorithmic and social systems can help improve the signal-to-noise ratio online – with the caveat that misinformation/disinformation versus verified information is likely to be a continuing arms race. In 2012, Facebook, Google and others had no incentive to pay attention to the problem. After the 2016 election, the issue of fake information has been spotlighted.”

Subtheme: Misinformation has always been with us and people have found ways to lessen its impact. The problems will become more manageable as people become more adept at sorting through material

Many respondents agree that misinformation will persist as the online realm expands and more people are connected in more ways. Still, the more hopeful among these experts argue that progress is inevitable as people and organizations find coping mechanisms. They say history validates this. Furthermore, they said technologists will play an important role in helping filter out misinformation and modeling new digital literacy practices for users.

We were in this position before, when printing presses broke the existing system of information management. A new system emerged and I believe we have the motivation and capability to do it again.
Jonathan Grudin

Mark Bunting, visiting academic at Oxford Internet Institute, a senior digital strategy and public policy advisor with 16 years of experience at the BBC and as a digital consultant, wrote, “Our information environment has been immeasurably improved by the democratisation of the means of publication since the creation of the web nearly 25 years ago. We are now seeing the downsides of that transformation, with bad actors manipulating the new freedoms for antisocial purposes, but techniques for managing and mitigating those harms will improve, creating potential for freer, but well-governed, information environments in the 2020s.”

Jonathan Grudin, principal design researcher at Microsoft, said, “We were in this position before, when printing presses broke the existing system of information management. A new system emerged and I believe we have the motivation and capability to do it again. It will again involve information channeling more than misinformation suppression: contradictory claims have always existed in print, but have been manageable and often healthy.”

Judith Donath, fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab, wrote, “‘Fake news’ is not new. The Weekly World News had a circulation of over a million for its mostly fictional news stories, which were printed and sold in a format closely resembling a newspaper. Many readers recognized it as entertainment, but not all. More subtly, its presence on the newsstand reminded everyone that anything can be printed.”

Joshua Hatch, president of the Online News Association, noted, “I’m slightly optimistic because there are more people who care about doing the right thing than there are people who are trying to ruin the system. Things will improve because people – individually and collectively – will make it so.”

Many of these respondents said the leaders and engineers of the major information platform companies will play a significant role. Some said they expect some other systematic and social changes will alter things.

John Wilbanks, chief commons officer at Sage Bionetworks, replied, “I’m an optimist, so take this with a grain of salt, but I think as people born into the internet age move into positions of authority they’ll be better able to distill and discern fake news than those of us who remember an age of trusted gatekeepers. They’ll be part of the immune system. It’s not that the environment will get better, it’s that those younger will be better fitted to survive it.”

Danny Rogers, founder and CEO of Terbium Labs, replied, “Things always improve. Not monotonically, and not without effort, but fundamentally, I still believe that the efforts to improve the information environment will ultimately outweigh efforts to devolve it.”

Bryan Alexander, futurist and president of Bryan Alexander Consulting, replied, “Growing digital literacy and the use of automated systems will tip the balance towards a better information environment.”

A number of these respondents said information platform corporations such as Google and Facebook will begin to efficiently police the environment through various technological enhancements. They expressed faith in the inventiveness of these organizations and suggested the people of these companies will implement technology to embed moral and ethical thinking in the structure and business practices of their platforms, enabling the screening of content while still protecting rights such as free speech.

Patrick Lambe, principal consultant at Straits Knowledge, commented, “All largescale human systems are adaptive. When faced with novel predatory phenomena, counter-forces emerge to balance or defeat them. We are at the beginning of a largescale negative impact from the undermining of a social sense of reliable fact. Counter-forces are already emerging. The presence of largescale ‘landlords’ controlling significant sections of the ecosystem (e.g., Google, Facebook) aids in this counter-response.”

A professor in technology law at a West-Coast-based U.S. university said, “Intermediaries such as Facebook and Google will develop more-robust systems to reward legitimate producers and punish purveyors of fake news.”

A longtime director for Google commented, “Companies like Google and Facebook are investing heavily in coming up with usable solutions. Like email spam, this problem can never entirely be eliminated, but it can be managed.”

Sandro Hawke, technical staff at the World Wide Web Consortium, predicted, “Things are going to get worse before they get better, but humans have the basic tools to solve this problem, so chances are good that we will. The biggest risk, as with many things, is that narrow self-interest stops people from effectively collaborating.”

Anonymous respondents shared these remarks:

  • “Accurate facts are essential, particularly within a democracy, so this will be a high, shared value worthy of investment and government support, as well as private-sector initiatives.”
  • “We are only at the beginning of drastic technological and societal changes. We will learn and develop strategies to deal with problems like fake news.”
  • “There is a long record of innovation taking place to solve problems. Yes, sometimes innovation leads to abuses, but further innovation tends to solve those problems.”
  • “Consumers have risen up in the past to block the bullshit, fake ads, fake investment scams, etc., and they will again with regard to fake news.”
  • “As we understand more about digital misinformation we will design better tools, policies and opportunities for collective action.”
  • “Now that it is on the agenda, smart researchers and technologists will develop solutions.”
  • “The increased awareness of the issue will lead to/force new solutions and regulation that will improve the situation in the long-term even if there are bound to be missteps such as flawed regulation and solutions along the way.”

Subtheme: Crowdsourcing will work to highlight verified facts and block those who propagate lies and propaganda. Some also have hopes for distributed ledgers (blockchain)

A number of these experts said solutions such as tagging, flagging or other labeling of questionable content will continue to expand and be of further use in the future in tackling the propagation of misinformation.

The future will attach credibility to the source of any information. The more a given source is attributed to ‘fake news,’ the lower it will sit in the credibility tree.
Anonymous engineer

J. Nathan Matias, a postdoctoral researcher at Princeton University and previously a visiting scholar at MIT’s Center for Civic Media, wrote, “Through ethnography and largescale social experiments, I have been encouraged to see volunteer communities with tens of millions of people work together to successfully manage the risks from inaccurate news.”

A researcher of online harassment working for a major internet information platform commented, “If there are nonprofits keeping technology in line, such as an ACLU-esque initiative, to monitor misinformation and then partner with spaces like Facebook to deal with this kind of news spam, then yes, the information environment will improve. We also need to move away from clickbaity-like articles, and not algorithmically rely on popularity but on information.”

An engineer based in North America replied, “The future will attach credibility to the source of any information. The more a given source is attributed to ‘fake news,’ the lower it will sit in the credibility tree.”

Micah Altman, director of research for the Program on Information Science at MIT, commented, “Technological advances are creating forces pulling in two directions: It is increasingly easy to create real-looking fake information and it is increasingly easy to crowdsource the collection and verification of information. In the longer term, I’m optimistic that the second force will dominate – as transaction cost-reduction appears to be relatively in favor of crowds versus concentrated institutions.”
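The crowdsourced-verification force Altman describes can be illustrated with the simplest possible aggregator: many independent raters label a claim, and a supermajority is required before it counts as verified. The two-thirds quorum here is an arbitrary illustrative choice, not a figure from the report.

```python
from collections import Counter

# Sketch of crowdsourced verification in the spirit of Altman's comment:
# independent raters label a claim and a supermajority decides. The 2/3
# quorum is an illustrative assumption.
def crowd_verdict(votes, quorum: float = 0.66) -> str:
    """votes: iterable of 'true'/'false' labels from independent raters."""
    counts = Counter(votes)
    total = sum(counts.values())
    if total == 0:
        return "unreviewed"
    label, n = counts.most_common(1)[0]
    # A split crowd yields no verdict rather than a weak one.
    return label if n / total >= quorum else "contested"

print(crowd_verdict(["true"] * 8 + ["false"] * 2))   # 'true'
print(crowd_verdict(["true"] * 5 + ["false"] * 5))   # 'contested'
```

The design choice worth noting is the "contested" outcome: when raters split, the system abstains instead of amplifying a noisy majority, which is part of why crowd verification scales more gracefully than it might first appear.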

A past chairman of a major U.S. scientific think tank and former CEO replied, “[The information environment] should improve because there are many techniques that can be brought to bear both human-mediated – such as collective intelligence via user voting and rating – and technological responses that are either very early in their evolution or not deployed at all. See spam as an analog.”

Some predicted that digital distributed ledger technologies, known as blockchain, may provide some answers. A longtime technology editor and columnist based in Europe, commented, “The blockchain approach used for Bitcoin, etc., could be used to distribute content. DECENT is an early example.” And an anonymous respondent from Harvard University’s Berkman Klein Center for Internet & Society said, “They will be cryptographically verified, with concepts.”

But others were less confident that blockchain will work. A leading researcher studying the spread of misinformation observed, “I know systems like blockchain are a start, but in some ways analog systems (e.g., scanned voting ballots) can be more resilient to outside influence than digital solutions such as increased encryption. There are always potential compromises when our communication networks are based on human-coded technology and hardware; this is less the case with analog-first, digital-second systems.”

A professor of media and communication based in Europe said, “Right now, reliable and trusted verification systems are not yet available; they may become technically available in the future, but the arms race between corporations and hackers is never-ending. Blockchain technology may be an option, but every technological system needs to be built on trust, and as long as there is no globally governed trust system that is open and transparent, there will be no reliable verification systems.”
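The property these respondents are debating is tamper-evidence: once content is committed to a ledger, later silent edits become detectable. A minimal hash-chain sketch shows the core mechanism; a real distributed ledger adds consensus and replication that are omitted here.

```python
import hashlib
import json

# Minimal hash-chain sketch of the tamper-evidence property blockchain
# could lend to published content. Consensus, replication and identity
# are deliberately omitted; this only shows the integrity check.
def make_block(content: str, prev_hash: str) -> dict:
    record = {"content": content, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return {**record, "hash": digest}

def chain_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        record = {"content": block["content"], "prev": block["prev"]}
        expected = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False  # block body was altered after hashing
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True

chain = [make_block("story v1", "0" * 64)]
chain.append(make_block("correction v2", chain[-1]["hash"]))
assert chain_valid(chain)
chain[0]["content"] = "silently edited"  # any retroactive edit is detectable
assert not chain_valid(chain)
```

Note that this guarantees only that content has not been altered since publication, not that it was accurate in the first place, which is precisely the gap the skeptical respondents point to.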

Theme 5: Tech can’t win the battle. The public must fund and support the production of objective, accurate information. It must also elevate information literacy to be a primary goal of education

There was common agreement among many respondents – whether they said they expect to see improvements in the information environment in the next decade or not – that the problem of misinformation requires significant attention. A share of these respondents urged action in two areas: A bolstering of the public-serving press and an expansive, comprehensive, ongoing information literacy education effort for people of all ages.

We can’t machine-learn our way out of this disaster, which is actually a perfect storm of poor civics knowledge and poor information literacy.
Mike DeVito

A sociologist doing research on technology and civic engagement at MIT said, “Though likely to get worse before it gets better, the 2016-2017 information ecosystem problems represent a watershed moment and call to action for citizens, policymakers, journalists, designers and philanthropists who must work together to address the issues at the heart of misinformation.”

Michael Zimmer, associate professor and privacy and information ethics scholar at the University of Wisconsin-Milwaukee, commented, “This is a social problem that cannot be solved via technology.”

Subtheme: Funding and support must be directed to the restoration of a well-fortified, ethical and trusted public press

Many respondents noted that while the digital age has amplified countless information sources it has hurt the reach and influence of the traditional news organizations. These are the bedrock institutions much of the public has relied upon for objective, verified, reliable information – information undergirded by ethical standards and a general goal of serving the common good. These respondents said the information environment can’t be improved without more well-staffed, financially stable, independent news organizations. They believe that material can rise above misinformation and create a base of “common knowledge” the public can share and act on.

This is a wake-up call to the news industry, policy makers and journalists to refine the system of news production.
Rich Ling

Susan Hares, a pioneer with the National Science Foundation Network (NSFNET) and longtime internet engineering strategist, now a consultant, said, “Society simply needs to decide that the ‘press’ no longer provides unbiased information, and it must pay for unbiased and verified information.”

Christopher Jencks, a professor emeritus at Harvard University, said, “Reducing ‘fake news’ requires a profession whose members share a commitment to getting it right. That, in turn, requires a source of money to pay such professional journalists. Advertising used to provide newspapers with money to pay such people. That money is drying up, and it seems unlikely to be replaced within the next decade.”

Rich Ling, professor of media technology at the School of Communication and Information at Nanyang Technological University, said, “We have seen the consequences of fake news in the U.S. presidential election and Brexit. This is a wake-up call to the news industry, policy makers and journalists to refine the system of news production.”

Maja Vujovic, senior copywriter for the Comtrade Group, predicted, “The information environment will be increasingly perceived as a public good, making its reliability a universal need. Technological advancements and civil-awareness efforts will yield varied ways to continuously purge misinformation from it, to keep it reasonably reliable.”

An author and journalist based in North America said, “I believe this era could spawn a new one – a flight to quality in which time-starved citizens place high value on verified news sources.”

A professor of law at a major U.S. state university commented, “Things won’t get better until we realize that accurate news and information are a public good that require not-for-profit leadership and public subsidy.”

Marc Rotenberg, president of the Electronic Privacy Information Center, wrote, “The problem with online news is structural: There are too few gatekeepers, and the internet business model does not sustain quality journalism. The reason is simply that advertising revenue has been untethered from news production.”

With precarious funding and shrinking audiences, healthy journalism that serves the common good is losing its voice. Siva Vaidhyanathan, professor of media studies and director of the Center for Media and Citizenship at the University of Virginia, wrote, “There are no technological solutions that correct for the dominance of Facebook and Google in our lives. These incumbents are locked into monopoly power over our information ecosystem and as they drain advertising money from all other low-cost commercial media they impoverish the public sphere.”

Subtheme: Elevate information literacy: It must become a primary goal at all levels of education

Many of these experts said the flaws in human nature and still-undeveloped norms in the digital age are the key problems that make users susceptible to false, misleading and manipulative online narratives. One potential remedy these respondents suggested is a massive compulsory crusade to educate all in digital-age information literacy. Such an effort, some said, might prepare more people to be wise in what they view/read/believe and possibly even serve to upgrade the overall social norms of information sharing.

Information is only as reliable as the people who are receiving it.
Julia Koller

Karen Mossberger, professor and director of the School of Public Affairs at Arizona State University, wrote, “The spread of fake news is not merely a problem of bots, but part of a larger problem of whether or not people exercise critical thinking and information-literacy skills. Perhaps the surge of fake news in the recent past will serve as a wake-up call to address these aspects of online skills in the media and to address these as fundamental educational competencies in our education system. Online information more generally has an almost limitless diversity of sources, with varied credibility. Technology is driving this issue, but the fix isn’t a technical one alone.”

Mike DeVito, graduate researcher at Northwestern University, wrote, “These are not technical problems; they are human problems that technology has simply helped scale, yet we keep attempting purely technological solutions. We can’t machine-learn our way out of this disaster, which is actually a perfect storm of poor civics knowledge and poor information literacy.”

Miguel Alcaine, International Telecommunication Union area representative for Central America, commented, “The boundaries between online and offline will continue to blur. We understand online and offline are different modalities of real life. There is and will be a market (public and private providers) for trusted information. There is and will be space for misinformation. The most important action societies can take to protect people is education, information and training.”

An early internet developer and security consultant commented, “Fake news is not a product of a flaw in the communications channel and cannot be fixed by a fix to the channel. It is due to a flaw in the human consumers of information and can be repaired only by education of those consumers.”

An anonymous respondent from Harvard University’s Berkman Klein Center for Internet & Society noted, “False information – intentionally or inadvertently so – is neither new nor the result of new technologies. It may now be easier to spread to more people more quickly, but the responsibility for sifting facts from fiction has always sat with the person receiving that information and always will.”

An internet pioneer and rights activist based in the Asia/Pacific region said, “We as a society are not investing enough in education worldwide. The environment will only improve if both sides of the communication channel are responsible. The reader and the producer of content, both have responsibilities.”

Deirdre Williams, retired internet activist, replied, “Human beings are losing their capability to question and to refuse. Young people are growing into a world where those skills are not being taught.”

Julia Koller, a learning solutions lead developer, replied, “Information is only as reliable as the people who are receiving it. If readers do not change or improve their ability to seek out and identify reliable information sources, the information environment will not improve.”

Ella Taylor-Smith, senior research fellow at the School of Computing at Edinburgh Napier University, noted, “As more people become more educated, especially as digital literacy becomes a popular and respected skill, people will favour (and even produce) better quality information.”

Constance Kampf, a researcher in computer science and mathematics, said, “The answer depends on socio-technical design – these trends of misinformation versus verifiable information were already present before the internet, and they are currently being amplified. The state and trends in education and place of critical thinking in curricula across the world will be the place to look to see whether or not the information environment will improve – cyberliteracy relies on basic information literacy, social literacy and technological literacy. For the environment to improve, we need substantial improvements in education systems across the world in relation to critical thinking, social literacy, information literacy, and cyberliteracy (see Laura Gurak’s book ‘Cyberliteracy’).”

Su Sonia Herring, an editor and translator, commented, “Misinformation and fake news will exist as long as humans do; they have existed ever since language was invented. Relying on algorithms and automated measures will result in various unwanted consequences. Unless we equip people with media literacy and critical-thinking skills, the spread of misinformation will prevail.”

Responses from additional key experts regarding the future of the information environment

This section features responses by several of the top analysts who participated in this canvassing. Following this wide-ranging set of comments is a much more expansive set of quotations directly tied to the five primary themes identified in this report.

Ignorance breeds frustration and ‘a growing fraction of the population has neither the skills nor the native intelligence to master growing complexity’

Mike Roberts, pioneer leader at ICANN and Internet Hall of Fame member, replied, “There are complex forces working both to improve the quality of information on the net, and to corrupt it. I believe the outrage resulting from recent events will, on balance, lead to a net improvement, but viewed with hindsight, the improvement may be viewed as inadequate. The other side of the complexity coin is ignorance. The average man or woman in America today has less knowledge of the underpinnings of his or her daily life than they did 50 or a hundred years ago. There has been a tremendous insertion of complex systems into many aspects of how we live in the decades since World War II, fueled by a tremendous growth in knowledge in general. Even among highly intelligent people, there is a significant growth in personal specialization in order to trim the boundaries of expected expertise to manageable levels. Among educated people, we have learned mechanisms for coping with complexity. We use what we know of statistics and probability to compartment uncertainty. We adopt ‘most likely’ scenarios for events of which we do not have detailed knowledge, and so on. A growing fraction of the population has neither the skills nor the native intelligence to master growing complexity, and in a competitive social environment, obligations to help our fellow humans go unmet. Educated or not, no one wants to be a dummy – all the wrong connotations. So ignorance breeds frustration, which breeds acting out, which breeds antisocial and pathological behavior, such as the disinformation, which was the subject of the survey, and many other undesirable second order effects. Issues of trustable information are certainly important, especially since the technological intelligentsia command a number of tools to combat untrustable info. But the underlying pathology won’t be tamed through technology alone. 
We need to replace ignorance and frustration with better life opportunities that restore confidence – a tall order and a tough agenda. Is there an immediate nexus between widespread ignorance and corrupted information sources? Yes, of course. In fact, there is a virtuous circle where acquisition of trustable information reduces ignorance, which leads to better use of better information, etc.”

The truth of news is murky and multifaceted

Judith Donath, fellow at Harvard University’s Berkman Klein Center for Internet & Society and founder of the Sociable Media Group at the MIT Media Lab, wrote, “Yes, trusted methods will emerge to block false narratives and allow accurate information to prevail, and, yes, the quality and veracity of information online will deteriorate due to the spread of unreliable, sometimes even dangerous, socially destabilizing ideas. Of course, the definition of ‘true’ is sometimes murky. Experimental scientists have many careful protocols in place to assure the veracity of their work, and the questions they ask have well-defined answers – and still there can be controversy about what is true, what work was free from outside influence. The truth of news stories is far murkier and multi-faceted. A story can be distorted, disproportional, meant to mislead – and still, strictly speaking, factually accurate. … But a pernicious harm of fake news is the doubt it sows about the reliability of all news. Donald Trump’s repeated ‘fake news’ smears of The New York Times, Washington Post, etc., are among his most destructive non-truths.”

“Algorithms weaponize rhetoric,” influencing on a mass scale

Susan Etlinger, industry analyst at Altimeter Research, said, “There are two main dynamics at play: One is the increasing sophistication and availability of machine learning algorithms and the other is human nature. We’ve known since the ancient Greeks and Romans that people are easily persuaded by rhetoric that hasn’t changed much in two thousand years. Algorithms weaponize rhetoric, making it easier and faster to influence people on a mass scale. There are many people working on ways to protect the integrity and reliability of information, just as there are cybersecurity experts who are in a constant arms race with cybercriminals, but to put as much emphasis on ‘information’ (a public good) as ‘data’ (a personal asset) will require a pretty big cultural shift. I suspect this will play out differently in different parts of the world.”

There’s no technical solution for the fact that ‘news’ is a social bargain

Clay Shirky, vice provost for educational technology at New York University, replied, “‘News’ is not a stable category – it is a social bargain. There’s no technical solution for designing a system that prevents people from asserting that Obama is a Muslim but allows them to assert that Jesus loves you.”

‘Strong economic forces are incentivizing the creation and spread of fake news’

Amy Webb, author and founder of the Future Today Institute, wrote, “In an era of social, democratized media, we’ve adopted a strange attitude. We’re simultaneously skeptics and true believers. If a news story reaffirms what we already believe, it’s credible – but if it rails against our beliefs, it’s fake. We apply that same logic to experts and sources quoted in stories. With our limbic systems continuously engaged, we’re more likely to pay attention to stories that make us want to fight, take flight or fill our social media accounts with links. As a result, there are strong economic forces incentivizing the creation and spread of fake news. In the digital realm, attention is currency. It’s good for democracy to stop the spread of misinformation, but it’s bad for business. Unless significant measures are taken in the present – and unless all the companies in our digital information ecosystem use strategic foresight to map out the future – I don’t see how fake news could possibly be reduced by 2027.”

Propagandists exploit whatever communications channels are available

Ian Peter, internet pioneer, historian and activist, observed, “It is not in the interests of either the media or the internet giants who propagate information, nor of governments, to create a climate in which information cannot be manipulated for political, social or economic gain. Propaganda and the desire to distort truth for political and other ends have always been with us and will adapt to any form of new media which allows open communication and information flows.”

Expanding information outlets erode opportunities for a ‘common narrative’

Kenneth R. Fleischmann, associate professor at the School of Information at the University of Texas, Austin, wrote, “Over time, the general trend is that a proliferation of information and communications technologies (ICTs) has led to a proliferation of opportunities for different viewpoints and perspectives, which has eroded the degree to which there is a common narrative – indeed, in some ways, this parallels a trend away from monarchy toward more democratic societies that welcome a diversity of perspectives – so I anticipate the range of perspectives to increase, rather than decrease, and for these perspectives to include not only opinions but also facts, which are inherently reductionist and can easily be manipulated to suit the perspective of the author, following the old aphorism about statistics Mark Twain attributed to Benjamin Disraeli [‘There are three kinds of lies: lies, damned lies and statistics.’], which originally referred to experts more generally.”

‘Broken as it might be, the internet is still capable of routing around damage’

Paul Saffo, longtime Silicon-Valley-based technology forecaster, commented, “The information crisis happened in the shadows. Now that the issue is visible as a clear and urgent danger, activists and people who see a business opportunity will begin to focus on it. Broken as it might be, the internet is still capable of routing around damage.”

It will be impossible to distinguish between fake and real video, audio, photos

Marina Gorbis, executive director of the Institute for the Future, predicted, “It’s not going to be better or worse but very different. Already we are developing technologies that make it impossible to distinguish between fake and real video, fake and real photographs, etc. We will have to evolve new tools for authentication and verification. We will probably have to evolve both new social norms as well as regulatory mechanisms if we want to maintain the online environment as a source of information that many people can rely on.”

A ‘Cambrian explosion’ of techniques will arise to monitor the web and non-web sources

Stowe Boyd, futurist, publisher and editor-in-chief of Work Futures, said, “The rapid rise of AI will lead to a Cambrian explosion of techniques to monitor the web and non-web media sources and social networks and rapidly identify and tag fake and misleading content.”

Well, there’s good news and bad news about the information future …

Jeff Jarvis, professor at the City University of New York’s Graduate School of Journalism, commented, “Reasons for hope: Much attention is being directed at manipulation and disinformation; the platforms may begin to recognize and favor quality; and we are still at the early stage of negotiating norms and mores around responsible civil conversation. Reasons for pessimism: Imploding trust in institutions; institutions that do not recognize the need to radically change to regain trust; and business models that favor volume over value.”

A fear of the imposition of pervasive censorship

Jim Warren, an internet pioneer and open-government/open-records/open-meetings advocate, said, “False and misleading information has always been part of all cultures (gossip, tabloids, etc.). Teaching judgment has always been the solution, and it always will be. I (still) trust the longstanding principle of free speech: The best cure for ‘offensive’ speech is MORE speech. The only major fear I have is of massive communications conglomerates imposing pervasive censorship.”

People have to take responsibility for finding reliable sources

Steven Miller, vice provost for research at Singapore Management University, wrote, “Even now, if one wants to find reliable sources, one has no problem doing that, so we do not lack reliable sources of news today. It is that there are all these other options, and people can choose to live in worlds where they ignore so-called reliable sources, or ignore a multiplicity of sources that can be compared, and focus on what they want to believe. That type of situation will continue. Five or 10 years from now, I expect there to continue to be many reliable sources of news, and a multiplicity of sources. Those who want to seek out reliable sources will have no problems doing so. Those who want to make sure they are getting a multiplicity of sources to see the range of inputs, and to sort through various types of inputs, will be able to do so, but I also expect that those who want to be in the game of influencing perceptions of reality and changing the perceptions of reality will also have ample means to do so. So the responsibility is with the person who is seeking the news and trying to get information on what is going on. We need more individuals who take responsibility for getting reliable sources.”


Implications of the Trans Cult Agenda

Control of thought is more important for governments that are free and popular than for despotic and military states. The logic is straightforward: a despotic state can control its domestic enemies by force, but as the state loses this weapon, other devices are required to prevent the ignorant masses from interfering with public affairs, which are none of their business.

From Noam Chomsky, Deterring Democracy, 1992.

The reader needs to understand that the question raised in this essay is not so much whether the trans agenda activists are deliberately using cult techniques or whether the young people caught up in its spell are actual card-carrying members of a trans-cult. The characteristics, tactics, and methodologies of cultism are present nonetheless. Although I do believe there is conscious “gaming” going on, it does not have to be conscious to be a factor. Similarly, I am not suggesting the much-discussed and much-abused “dysphoria” is not involved, only that it is increasingly induced and almost always exaggerated.

More disturbingly, what we see in extreme transgender activism is what previous generations would have considered unthinkable: modern social outcasts, instead of becoming the government- and pop-culture-abhorring Goths or punk rockers of the past, are now being repurposed to support official government lines and agendas regarding gender, the breakdown of families, and the erasure of sexual distinctions and associated protections. The aggression of the outcasts, which used to be turned against the state, is now being turned against enemies of the state and anybody who opposes its new reality-warping transgender theories and laws. Trans-cultists are quickly beginning to resemble the brainwashed children in George Orwell’s 1984.

Even the right to sexual orientation is now under attack, from inside an organization that used to fight and argue for just that right. Many radical trans activists suggest that sexual preference/orientation is in fact a form of bigotry, that people dating trans persons have no right to know whether they are really male or female, and that to deny them intimacy if they have the wrong genitals is akin to racism. Thus we saw Caitlyn (formerly Bruce) Jenner on national television suggesting that asking about one’s sex status “was not an appropriate question to ask.” Anybody suggesting transgender persons born male are not in fact “real women” or even “female” is called transphobic (including lesbians). That such an attack would come from within a community whose entire existence was based upon fighting for the right to sexual orientation is not only bizarre, it almost seems like a deliberate attempt to fracture the community from within.

As mentioned earlier, it would be easy to laugh this stuff off if it were not for the fact that government, Hollywood, mainstream media, and almost every institution of power is promoting this insanity. This fact should be a red flag for everybody. Why, when trans persons comprise such a tiny minority, is there such a sudden and urgent rush across most of the Western world to push the trans agenda?

As Stella Morabito has written, the trans agenda “seems far more organized, focused, faster-and-more-furious than any propaganda campaign in history. (Which means it can’t withstand much scrutiny.)” Why? What is this all about? The people at the top of our society, particularly the advisers behind the scenes, are not stupid. So why are they allowing the distortion of reality and the brainwashing of children? Why are they promoting the absurd notion that men can be real women? Why do we see Fallon Fox, who was born a male, in the UFC ring beating a real female to a pulp while everybody pretends like this is okay and normal? The people at the top know this is an outrage.

While those in the seats of power would tell you to just trust them and not speculate on the reasons why the trans agenda is so important, I would remind the reader of the words of Jacques Ellul:

The propagandist naturally cannot reveal the true intentions of the principal for whom he acts… That would be to submit the projects to public discussion, to the scrutiny of public opinion, and thus to prevent their success… Propaganda must serve instead as a veil for such projects, masking true intention.

It is my contention that young people today are being brainwashed and turned into Orwellian servants of the state for a much larger agenda. It has been suggested by others that the assault on sexual distinctions and on the perception of basic biological reality is part of a game to give absolute power to the state in the interpretation of reality itself. If the state can force people to admit that men are women, it can force them to believe anything. War is peace. Slavery is freedom. Men are women. All of these are equally contradictory. In this sense it almost appears as though we have a pre-fascist attempt to turn the whole of society itself into a brainwashed cult, much as we saw in Nazi Germany. Such statements might seem fantastic, but again we need only look to the history of the 20th century for lessons on why distrust of government and elite power is not only a good thing, but absolutely critical in protecting society against the rise of totalitarianism. Erich Fromm observed the rise of fascism firsthand and noted:

When Fascism came into power, most people were unprepared, both theoretically and practically. They were unable to believe that man could exhibit such propensities for evil, such lust for power…or such a yearning for [the] submission [of the population].

At this point in human history there should be no question, it should not even be controversial, that political leaders and their supporters can become power-mad and capable of the most insidious, carefully planned and executed, far-reaching evils imaginable. To ignore this capacity is irresponsible. One only needs, for instance, to watch the documentary Trudeau: Justin & Pierre to see that the seemingly saintly Canadian Prime Minister Justin Trudeau is actually a megalomaniac who spews a kind of “divine right of kings” philosophy, in the sense that he believes he was born to lead the nation (one of his supporters had to implore him to be a little more humble). Justin’s father, Pierre, was the first prime minister in Canadian history to suspend civil rights and send troops into the streets. His son, Justin, has been at the forefront of pushing new reality-denying laws. When asked to allow amendments to Bill C-16 (the transgender rights bill) that would prevent males from entering women’s change rooms at pools and in schools, Trudeau specifically rejected the amendments. This shows he is in fact expecting males to enter women’s change rooms. Trudeau is thus the world’s leading champion of the trans agenda.

At a macro-level, if we assume the “masters of mankind” (as Noam Chomsky has called elite political figures) are trying to create a brainwashed totalitarian state, they would first need to sow general chaos. “We need chaos before things can get better,” said Joseph Goebbels about the confusion reigning in Germany following World War I. “The dollar is climbing like an acrobat. I’m secretly delighted.” What Goebbels was hoping for, the betterment he expected, was the creation of a murderous control state the likes of which the world had never seen. Fascists have always sought to control the interpretation of reality, and the breakdown of social structures facilitates that. Thus the anti-authoritarian scholar Hannah Arendt, commenting on the seizure of power by megalomaniacal leaders, wrote:

Before mass leaders seize the power to fit reality to their lies, their propaganda is marked by its extreme contempt for facts as such, for in their opinion fact depends entirely on the power of the man who can fabricate it.

The breakdown of society is necessary for the complete restructuring of society. Similarly, the breakdown of the individual ego is necessary for cults to restructure or redefine the individual. As Margaret Singer pointed out, happy people do not join cults unless they want to be part of the controlling hierarchy: “if the social structure has not broken down, very few people will follow.” This is true at both the macro and the micro level, hence the Nazi cult rose in Germany only after the breakdown of society, but with its leaders waiting and applauding the chaos behind the scenes. The chaos facilitated mass conversion of an otherwise indifferent population.

If we follow through the logical conclusions and outcomes of all these transgender laws being passed everywhere, it would appear to be about breaking down traditional morals and redefining humanity itself. As Morabito wrote, “The scope of the endgame is enormous: to legally and universally impose upon every human being a new definition — or rather, a non-definition — of what it means to be human.” Human casualties in the form of brainwashed, sterilized, mutilated children, and an assault on women’s right to privacy and their own sports and programs, are all incidentals or collateral damage in the mass social engineering and brainwashing “Game” that is the modern transgender agenda.

(I would like to end this essay by reminding everybody that this essay is not really about transgender people per se – I myself am transgender. The transgender youths who fall under the spell of the new transgender cult ideology are really just tools being used in a much larger social engineering agenda. Do not focus on the tool being used; focus on the hands that are wielding it.)

Please note: My policy is not to screen republication requests. Thus I do not vouch for, or necessarily agree or disagree with, the views of anyone who may publish my essays online. If people want to know what I believe, they need to look at my words or ask me; do not assume that because somebody has published my essay somewhere, I agree with them. Thank you.


Qin Shi Huang: The ruthless emperor who burned books

There are two Chinese leaders whose final resting place is thronged by tourists - Mao Zedong and Qin Shi Huang, the emperor of terracotta soldier fame. But they also have another thing in common - Qin taught Mao a lesson in how to persecute intellectuals.

Chairman Mao Zedong has been dead for nearly 40 years but his body is still preserved in a mausoleum in Tiananmen Square.

The square is the symbolic heart of Chinese politics - red flags and lanterns flank the portrait of Mao on Tiananmen Gate where he proclaimed the People's Republic in 1949.

But the red emperor owed the idea of this vast country to an empire builder who lived 2,000 years earlier.

"We wouldn't have a China without Qin Shi Huang," says Harvard University's Peter Bol. "I think it's that simple."

China at the time was a land of many states.

In many ways - climate, lifestyle, diet - someone from northern Scotland and someone from southern Spain have about as much in common as people from China's frozen north and its tropical south.

Before Qin, China's multiple states were diverging, rather than converging, says Bol.

"They have different calendars, their writing was starting to vary… the road widths were different, so the axle width is different in different places."

Qin Shi Huang was king of the small state of Qin by the age of 13, and he started as he meant to go on - removing one possible threat to his throne by having his mother's lover executed, along with his entire clan.

A hundred years later the famous historian Sima Qian said of the young king:

"With his puffed-out chest like a hawk and voice of a jackal, Qin is a man of scant mercy who has the heart of a wolf. When he is in difficulty he readily humbles himself before others, but when he has got his way, then he thinks nothing of eating others alive.

"If the Qin should ever get his way with the world, then the whole world will end up his prisoner."

Qin Shi Huang built a formidable fighting machine. His army is easy to imagine because he left us the famous terracotta warriors in Xian.

"The Qin was really the first state to really go into total mobilisation for war," says Peter Bol.

"It really saw the work of its population being fighting and soldiering to win wars and expand."

One by one, Qin Shi Huang defeated neighbouring states, swallowed their territory into his growing empire and enslaved and castrated their citizens.

"Every time he captured people from another country, he castrated them in order to mark them and made them into slaves," says Hong Kong University's Xun Zhou.

"There were lots and lots of eunuchs in his court. He was a ruthless tyrant."

But still, no Qin, no China.

"From Mongolia down to Hong Kong, and from the sea right the way across to Sichuan - it's an enormous territory," says Frances Wood, curator of the Chinese collection at the British Library.

"It's the equivalent of the whole Roman Empire added together, if you like. And you've got one man ruling all of it."

Peter Bol credits Qin Shi Huang not only with creating China, but with establishing the world's first truly centralised bureaucratic empire.

"He set out to unify the procedures and customs and policies of all the states," says Bol.

"Writing is reunified. And the fact that Chinese writing remains unified after this point has everything to do with Qin Shi Huang. The axle widths are now all the same, so all the roads may now be passable.

"He also goes around to famous mountains, where they erect steles, stone monuments, which say that the Emperor's realm is now totally unified.

"His idea was that every area should have an able administrator, who was armed with rule books and who would look after the people. The people all knew what the rules were," says Wood.

"He collected taxes, he administered justice and he had trained bureaucrats all over China. I think that's an extraordinary achievement."

Despite this, it is the stories of his bloodletting that historian Xun Zhou grew up with.

"He got rid of anybody who showed opposition or didn't agree with him. He was paranoid. He was constantly in fear of how he could control this vast new territory with so many cultures and so many different groups of people," she says.

And he feared the inkbrush as much as the sword.

"The scholars were talking behind his back," says Xun Zhou. "And of course being a paranoid person, he didn't like that. So he ordered the arrest of over 400 scholars and buried them."

Qin Shi Huang had no truck with China's traditions of Confucian scholarship - his fear of the intellectual was deep-rooted.

"Ideologically speaking, the Qin make the argument, 'We don't want to hear people criticise the present by referring to the past,'" says Peter Bol.

"The past is irrelevant. History is irrelevant. And so you have the burning of books, you have the burying of scholars, of scholarly critics."

Bol sees parallels with today's China. Like Qin Shi Huang, the Communist Party tolerates debate about tactics - but not about the general direction of travel, he says.

"They argue that it is the only possible approach to governing China."

Historian Xun Zhou agrees. "In Communist China, we adopted the imperial model. The emperor is absolute. And the only way to rule such a vast empire is ruthlessness," she says.

In fact, in 1958 Mao himself drew the comparison with Qin Shi Huang.

"He buried 460 scholars alive - we have buried 46,000 scholars alive," he said in a speech to party cadres. "You [intellectuals] revile us for being Qin Shi Huangs. You are wrong. We have surpassed Qin Shi Huang a hundredfold."

Every night, Mao's body inside its crystal coffin reportedly goes down into its earthquake-proof vault in an elevator, and every morning it is brought back up again.

It is probably something Qin Shi Huang would have appreciated. But I am not sure he would have been impressed with Mao's mausoleum.

Qin Shi Huang's own mausoleum includes a life-size terracotta army, a full orchestra with instruments and a river landscape with cranes, swans and geese - and archaeologists have barely begun the excavation.

"In a sense the man has disappeared behind the tomb," says Frances Wood.

"And of course the size of the buried army, the size of the tomb enclosure - which seems to expand daily - does rather overcome anything that one knows about him in reality. You've got this great physical presence now."

Both Qin Shi Huang and Mao live on powerfully in China's imagination, but China is bigger than its emperors.

When Qin Shi Huang died, his dynasty lasted only months. It was the idea of China which survived. And when Mao died, his successors said the radiance of his thought would live forever.

But the Mao suits are gone and despite the crowds at his mausoleum, Maoism is barely mentioned today.

Translation of Records of the Grand Historian by Burton Watson.



The crowd manipulator engages, controls, or influences crowds without the use of physical force, although his goal may be to instigate the use of force by the crowd or by local authorities. Prior to the American War of Independence, Samuel Adams provided Bostonians with "elaborate costumes, props, and musical instruments to lead protest songs in harborside demonstrations and parades through Boston's streets." If such crowds provoked British authorities to violence, as they did during the Boston Massacre on March 5, 1770, Adams would write, produce, and disperse sensationalized accounts of the incidents to stir discontent and create unity among the American colonies. [6] The American way of manipulation may be classified as a tool of soft power, which is "the ability to get what you want through attraction rather than coercion or payments". [7] Harvard professor Joseph Nye coined the term in the 1980s, although he did not create the concept. The techniques used to win the minds of crowds were examined and developed notably by Quintilian in his training book, Institutio oratoria, and by Aristotle in Rhetoric. Known origins of crowd manipulation go as far back as the 5th century BC, when litigants in Syracuse sought to improve their persuasiveness in court. [8] [9]

The verb "manipulate" can convey negativity, but it does not have to do so. According to Merriam-Webster's Dictionary, for example, to "manipulate" means "to control or play upon by artful, unfair, or insidious means especially to one's own advantage." [10] This definition allows, then, for the artful and honest use of control for one's advantage. Moreover, the actions of a crowd need not be criminal in nature. Nineteenth-century social scientist Gustave Le Bon wrote:

It is crowds rather than isolated individuals that may be induced to run the risk of death to secure the triumph of a creed or an idea, that may be fired with enthusiasm for glory and honour, that are led on--almost without bread and without arms, as in the age of the Crusades—to deliver the tomb of Christ from the infidel, or, as in [1793], to defend the fatherland. Such heroism is without doubt somewhat unconscious, but it is of such heroism that history is made. Were peoples only to be credited with the great actions performed in cold blood, the annals of the world would register but few of them. [11]

Edward Bernays, the so-called "Father of Public Relations", believed that public manipulation was not only moral, but a necessity. He argued that "a small, invisible government who understands the mental processes and social patterns of the masses, rules public opinion by consent." This is necessary for the division of labor and to prevent chaos and confusion. "The voice of the people expresses the mind of the people, and that mind is made up for it by the group leaders in whom it believes and by those persons who understand the manipulation of public opinion", wrote Bernays. [12] He also wrote, "We are governed, our minds are molded, our tastes formed, our ideas suggested, largely by men we have never heard of. This is a logical result of the way in which our democratic society is organized."

Others argue that some techniques are not inherently evil, but instead are philosophically neutral vehicles. Lifelong political activist and former Ronald Reagan White House staffer Morton C. Blackwell explained in a speech titled, "People, Parties, and Power":

Being right in the sense of being correct is not sufficient to win. Political technology determines political success. Learn how to organize and how to communicate. Most political technology is philosophically neutral. You owe it to your philosophy to study how to win. [13]

In brief, manipulators with different ideologies can successfully employ the same techniques to achieve ends that may be good or bad. Crowd manipulation techniques offer individuals and groups a philosophically neutral means to maximize the effect of their messages.

In order to manipulate a crowd, one should first understand what is meant by a crowd, as well as the principles that govern its behavior.

The word "crowd", according to Merriam-Webster's Dictionary, refers to both "a large number of persons especially when collected together" (as in a crowded shopping mall) and "a group of people having something in common [as in a habit, interest, or occupation]." [14] Philosopher G.A. Tawney defined a crowd as "a numerous collection of people who face a concrete situation together and are more or less aware of their bodily existence as a group. Their facing the situation together is due to common interests and the existence of common circumstances which give a single direction to their thoughts and actions." Tawney discussed in his work "The Nature of Crowds" two main types of crowds:

Crowds may be classified according to the degree of definiteness and constancy of this consciousness. When it is very definite and constant the crowd may be called homogeneous, and when not so definite and constant, heterogeneous. All mobs belong to the homogeneous class, but not all homogeneous crowds are mobs. … Whether a given crowd belong to the one group or the other may be a debatable question, and the same crowd may imperceptibly pass from one to the other. [15]

In a 2001 study, the Institute for Non-Lethal Defense Studies at Pennsylvania State University defined a crowd more specifically as "a gathering of a multitude of individuals and small groups that have temporarily assembled. These small groups are usually comprised of friends, family members, or acquaintances."

A crowd may display behavior that differs from that of the individuals who compose it. Several theories emerged in the 19th and early 20th centuries to explain this phenomenon. These collective works contribute to the "classic theory" of crowd psychology. In 1968, however, social scientist Dr. Carl Couch of the University of Liverpool refuted many of the stereotypes associated with crowd behavior as described by classic theory. His criticisms are widely supported in the psychology community but are still being incorporated as a "modern theory" into psychological texts. [16] A modern model, based on the "individualistic" concept of crowd behavior developed by Floyd Allport in 1924, is the Elaborated Social Identity Model (ESIM). [17]

Classic theory

French philosopher and historian Hippolyte Taine provided, in the wake of the Franco-Prussian War of 1870–71, the first modern account of crowd psychology. Gustave Le Bon developed this framework in his 1895 book, Psychologie des Foules. Le Bon proposed that French crowds during the 19th century were essentially excitable, irrational mobs easily influenced by wrongdoers. [18] He postulated that the heterogeneous elements which make up this type of crowd essentially form a new being, a chemical reaction of sorts in which the crowd's properties change. He wrote:

Under certain given circumstances, and only under those circumstances, an agglomeration of men presents new characteristics very different from those of the individuals composing it. The sentiments and ideas of all the persons in the gathering take one and the same direction, and their conscious personality vanishes. A collective mind is formed, doubtless transitory, but presenting very clearly defined characteristics.

Le Bon observed several characteristics of what he called the "organized" or "psychological" crowd, including:

  1. submergence or the disappearance of a conscious personality and the appearance of an unconscious personality (aka "mental unity"). This process is aided by sentiments of invincible power and anonymity which allow one to yield to instincts which he would have kept under restraint (i.e. Individuality is weakened and the unconscious "gains the upper hand")
  2. contagion ("In a crowd every sentiment and act is contagious, and contagious to such a degree that an individual readily sacrifices his personal interest to the collective interest.") and
  3. suggestibility as the result of a hypnotic state. "All feelings and thoughts are bent in the direction determined by the hypnotizer" and the crowd tends to turn these thoughts into acts. [11]

In sum, the classic theory contends that:

  • "[Crowds] are unified masses whose behaviors can be categorized as active, expressive, acquisitive or hostile."
  • "[Crowd] participants [are] given to spontaneity, irrationality, loss of self-control, and a sense of anonymity." [19]

Modern theory

Critics of the classic theory contend that it is seriously flawed in that it decontextualises crowd behavior, lacks sustainable empirical support, is biased, and ignores the influence of policing measures on the behavior of the crowd. [20]

In 1968, Dr. Carl J. Couch examined and refuted many classic-theory stereotypes in his article, "Collective Behavior: An Examination of Some Stereotypes." Since then, other social scientists have validated much of his critique. Knowledge from these studies of crowd psychology indicates that:

  • "Crowds are not homogeneous entities" but are composed "of a minority of individuals and a majority of small groups of people who are acquainted with one another."
  • "Crowd participants are [neither] unanimous in their motivation" nor anonymous to one another. Participants "seldom act in unison, and if they do, that action does not last long."
  • "Crowds do not cripple individual cognition" and "are not uniquely distinguished by violence or disorderly actions."
  • "Individual attitudes and personality characteristics", as well as "socioeconomic, demographic and political variables are poor predictors of riot intensity and individual participation."

According to the aforementioned 2001 study conducted by Penn State University's Institute for Non-Lethal Defense Technologies, crowds undergo a process that has a "beginning, middle, and ending phase." Specifically:

  • The assembling process: the temporary assembly of individuals for a specific amount of time. Evidence suggests that assembly occurs most frequently by means of an "organized mobilization method" but can also occur by "impromptu process" such as word of mouth by non-official organizers.
  • The temporary gathering: individuals assemble and participate in both individual and "collective actions." Rarely do all individuals in a crowd participate, and those who do participate do so by choice. Participation furthermore appears to vary based on the type and purpose of the gathering, with religious services experiencing "greater participation" (i.e. 80-90%).
  • The dispersal process: the crowd's participants disperse from a "common location" to "one or more alternate locations."

A "riot" occurs when "one or more individuals within a gathering engage in violence against person or property." According to U.S. and European research data from 1830 to 1930 and from the 1960s to the present, "less than 10 percent of protest demonstrations have involved violence against person or property", with the "celebration riot" as the most frequent type of riot in the United States. [21]

Elaborated social identity model (ESIM)

A modern model has also been developed by Steve Reicher, John Drury, and Clifford Stott [22] which contrasts significantly with the "classic theory" of crowd behavior. According to Clifford Stott of the University of Leeds:

    The ESIM has at its basis the proposition that a component part of the self concept determining human social behaviour derives from psychological membership of particular social categories (i.e., an identity of a unique individual), crowd participants also have a range of social identities which can become salient within the psychological system referred to as the 'self.' Collective action becomes possible when a particular social identity is simultaneously salient and therefore shared among crowd participants.

    Stott's final point differs from the "submergence" quality of crowds proposed by Le Bon, in which the individual's consciousness gives way to the unconsciousness of the crowd. ESIM also considers the effect of policing on the behavior of the crowd. It warns that "the indiscriminate use of force would create a redefined sense of unity in the crowd in terms of the illegitimacy of and opposition to the actions of the police." This could essentially draw the crowd into conflict despite the initial hesitancy of the individuals in the crowd. [23]

Crowd manipulation involves several elements: context analysis, site selection, propaganda, authority, and delivery.

Context analysis

History suggests that socioeconomic and political context and location dramatically influence the potential for crowd manipulation. Such time periods in America included:

    • Prelude to the American Revolution (1763–1775), when Britain imposed heavy taxes and various restrictions upon its thirteen North American colonies [24]
• Roaring Twenties (1920–1929), when the advent of mass production made it possible for everyday citizens to purchase items previously considered luxuries at affordable prices. Businesses that utilized assembly-line manufacturing were challenged to sell large numbers of identical products [25]
• The Great Depression (1929–1939), when a devastating stock market crash disrupted the American economy and caused widespread unemployment, and
• The Cold War (1945–1989), when Americans faced the threat of nuclear war and lived through the Korean War, the greatly unpopular Vietnam War, the Civil Rights Movement, and the Cuban Missile Crisis.

Internationally, time periods conducive to crowd manipulation included the Interwar Period (i.e. following the collapse of the Austro-Hungarian, Russian, Ottoman, and German empires) and the post-World War II period (i.e. decolonization and the collapse of the British, German, French, and Japanese empires). [26] The prelude to the collapse of the Soviet Union provided ample opportunity for messages of encouragement. The Solidarity Movement began in the 1970s thanks in part to leaders like Lech Walesa and U.S. Information Agency programming. [27] In 1987, U.S. President Ronald Reagan capitalized on the sentiments of the West Berliners as well as the freedom-starved East Berliners to demand that General Secretary of the Communist Party of the Soviet Union Mikhail Gorbachev "tear down" the Berlin Wall. [28] During the 2008 presidential election, candidate Barack Obama capitalized on the sentiments of many American voters frustrated predominantly by the recent economic downturn and the continuing wars in Iraq and Afghanistan. His simple messages of "Hope", "Change", and "Yes We Can" were adopted quickly and chanted by his supporters during his political rallies. [29]

    Historical context and events may also encourage unruly behavior. Such examples include the:

    • 1968 Columbia, SC Civil Rights Protest
• 1990 London Poll Tax Protest and
    • 1992 L.A. Riots (sparked by the acquittal of police officers involved in the assault of Rodney King). [30]

    In order to capitalize fully upon historical context, it is essential to conduct a thorough audience analysis to understand the desires, fears, concerns, and biases of the target crowd. This may be done through scientific studies, focus groups, and polls. [25]

Site selection

    Where a crowd assembles also provides opportunities to manipulate thoughts, feelings, and emotions. Location, weather, lighting, sound, and even the shape of an arena all influence a crowd's willingness to participate.

    Symbolic and tangible backdrops like the Brandenburg Gate, used by Presidents John F. Kennedy, Ronald Reagan, and Bill Clinton in 1963, 1987, and 1994, respectively, can evoke emotions before the crowd manipulator opens his or her mouth to speak. [31] [32] George W. Bush's "Bullhorn Address" at Ground Zero following the 2001 terrorist attack on the World Trade Center is another example of how venue can amplify a message. In response to a rescue worker's shout, "I can't hear you", President Bush shouted back, "I can hear you! I can hear you! The rest of the world hears you! And the people – and the people who knocked these buildings down will hear all of us soon!" The crowd erupted in cheers and patriotic chants. [33]

Propaganda

    The crowd manipulator and the propagandist may work together to achieve greater results than they would individually. According to Edward Bernays, the propagandist must prepare his target group to think about and anticipate a message before it is delivered. Messages themselves must be tested in advance since a message that is ineffective is worse than no message at all. [34] Social scientist Jacques Ellul called this sort of activity "pre-propaganda", and it is essential if the main message is to be effective. Ellul wrote in Propaganda: The Formation of Men's Attitudes:

    Direct propaganda, aimed at modifying opinions and attitudes, must be preceded by propaganda that is sociological in character, slow, general, seeking to create a climate, an atmosphere of favorable preliminary attitudes. No direct propaganda can be effective without pre-propaganda, which, without direct or noticeable aggression, is limited to creating ambiguities, reducing prejudices, and spreading images, apparently without purpose. …

In Propaganda: The Formation of Men's Attitudes, Ellul states that sociological propaganda can be compared to plowing and direct propaganda to sowing; you cannot do the one without doing the other first. [35] Sociological propaganda is a phenomenon in which a society seeks to integrate the maximum number of individuals into itself by unifying its members' behavior according to a pattern, spreading its style of life abroad, and thus imposing itself on other groups. Essentially, sociological propaganda aims to increase conformity with a collective environment by developing compliance with, or defense of, the established order through long-term penetration and progressive adaptation, using all social currents. The propaganda element is the way of life with which the individual is permeated; the individual then begins to express it in film, writing, or art without realizing it. This involuntary behavior creates an expansion of society through advertising, the movies, education, and magazines. "The entire group, consciously or not, expresses itself in this fashion", and its influence aims at an entire style of life. [36] This type of propaganda is not deliberate but springs up spontaneously or unwittingly within a culture or nation, reinforcing the individual's way of life and representing that way of life as best. Sociological propaganda creates an indisputable criterion by which the individual judges good and evil according to the order of his own way of life. Sociological propaganda does not result in action; however, it can prepare the ground for direct propaganda. From then on, the individual in the clutches of such sociological propaganda believes that those who live this way are on the side of the angels, and those who don't are bad. [37]

    Bernays expedited this process by identifying and contracting those who most influence public opinion (key experts, celebrities, existing supporters, interlacing groups, etc.).

    After the mind of the crowd is plowed and the seeds of propaganda are sown, a crowd manipulator may prepare to harvest his crop. [34]

Authority

    The manipulator may be an orator, a group, a musician, an athlete, or some other person who moves a crowd to the point of agreement before he makes a specific call to action. Aristotle believed that the ethos, or credibility, of the manipulator contributes to his persuasiveness.

Prestige is a form of "domination exercised on our mind by an individual, a work, or an idea." The manipulator with great prestige paralyses the critical faculty of his crowd and commands respect and awe. Authority flows from prestige, which can be generated by "acquired prestige" (e.g. job title, uniform, judge's robe) and "personal prestige" (i.e. inner strength). Personal prestige is like that of the "tamer of a wild beast" – a beast that could easily devour him. Success is the most important factor affecting personal prestige. Le Bon wrote, "From the minute prestige is called into question, it ceases to be prestige." Thus, it would behoove the manipulator to prevent this discussion and to maintain a distance from the crowd lest his faults undermine his prestige. [38]

Delivery

    The manipulator's ability to sway a crowd depends especially on his or her visual, vocal, and verbal delivery. Winston Churchill and Adolf Hitler made personal commitments to become master rhetoricians.

Churchill

At 22, Winston Churchill documented his conclusions about speaking to crowds in an essay he titled "The Scaffolding of Rhetoric", which outlined what he believed to be the essentials of any effective speech. Among these essentials are:

    • "Correctness of diction", or proper word choice to convey the exact meaning of the orator
    • "Rhythm", or a speech's sound appeal through "long, rolling and sonorous" sentences
    • "Accumulation of argument", or the orator's "rapid succession of waves of sound and vivid pictures" to bring the crowd to a thundering ascent
    • "Analogy", or the linking of the unknown to the familiar and
    • "Wild extravagance", or the use of expressions, however extreme, which embody the feelings of the orator and his audience. [39]

Hitler

Adolf Hitler believed he could apply the lessons of propaganda he had learned painfully from the Allies during World War I to benefit Germany thereafter. The following points offer helpful insight into the thinking behind his on-stage performances:

    • Appeal to the masses: "[Propaganda] must be addressed always and exclusively to the masses", rather than the "scientifically trained intelligentsia."
    • Target the emotions: "[Propaganda] must be aimed at the emotions and only to a very limited degree at the so-called intellect."
    • Keep your message simple: "It is a mistake to make propaganda many-sided…The receptivity of the great masses is very limited, their intelligence is small, but their power of forgetting is enormous."
• Prepare your audience for the worst-case scenario: "[Prepare] the individual soldier for the terrors of war, and thus [help] to preserve him from disappointments. After this, the most terrible weapon that was used against him seemed only to confirm what his propagandists had told him; it likewise reinforced his faith in the truth of his government's assertions, while on the other hand it increased his rage and hatred against the vile enemy."
• Make no half statements: "…emphasize the one right which it has set out to argue for. Its task is not to make an objective study of the truth, in so far as it favors the enemy, and then set it before the masses with academic fairness; its task is to serve our own right, always and unflinchingly."
• Repeat your message constantly: "[Propagandist technique] must confine itself to a few points and repeat them over and over. Here, as so often in this world, persistence is the first and most important requirement for success." [40][41] (Gustave Le Bon believed that messages that are affirmed and repeated are often perceived as truth and spread by means of contagion. "Man, like animals, has a natural tendency to imitation. Imitation is a necessity for him, provided always that the imitation is quite easy", wrote Le Bon. [42] In his 1881 essay "L'Homme et Societes", he wrote, "It is by examples not by arguments that crowds are guided." He stressed that in order to influence, one must not be too far removed from his audience nor his example unattainable by them; if it is, his influence will be nil. [43])

The Nazi Party in Germany used propaganda to develop a cult of personality around Hitler. Historians such as Ian Kershaw emphasise the psychological impact of Hitler's skill as an orator. [44] Neil Kressel reports, "Overwhelmingly … Germans speak with mystification of Hitler's 'hypnotic' appeal". [45] Roger Gill states: "His moving speeches captured the minds and hearts of a vast number of the German people: he virtually hypnotized his audiences". [46] Hitler was especially effective when he could absorb the feedback from a live audience, and listeners would also be caught up in the mounting enthusiasm. [47] He looked for signs of fanatic devotion, stating that his ideas would then remain "like words received under an hypnotic influence." [48] [49]


America in the 1920s

The powerful economic might of America from 1920 to October 1929 is frequently overlooked, overshadowed by more exciting topics such as Prohibition and the gangsters, the Jazz Age with its crazes, and the Ku Klux Klan. However, the strength of America was generated and driven by its vast economic power.

In this decade, America became the wealthiest country in the world, with no obvious rival. Yet by 1930 she had hit a depression that was to have worldwide consequences. In the good times, though, almost everybody seemed to have a reasonably well-paid job and plenty of spare cash to spend.

One of the reasons for this was the introduction of hire-purchase, whereby you put a deposit on an item you wanted and paid installments on it, with interest, so that you paid back more than the item's price but did not have to pay all at once. Hire-purchase was easy to get, and people got into debt without any real planning for the future. In the 1920s it simply seemed that if you wanted something, you got it.
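As a rough illustration, the arithmetic of hire-purchase can be sketched in a few lines of Python. The price, deposit, interest rate, and term below are invented for the example, and the flat-interest scheme is an assumption, not a description of any particular 1920s contract:

```python
# Hire-purchase sketch. All numbers are hypothetical; "flat" (simple)
# interest on the financed balance is assumed for illustration.

def hire_purchase_total(price, deposit, annual_rate, months):
    """Return (total paid, monthly installment) under flat interest."""
    financed = price - deposit                        # balance after the deposit
    interest = financed * annual_rate * (months / 12) # simple interest on that balance
    installment = (financed + interest) / months      # equal monthly payments
    return deposit + financed + interest, installment

# Example: a $295 item, $45 down, 10% flat interest over 12 months.
total, monthly = hire_purchase_total(295, 45, 0.10, 12)
print(round(total, 2), round(monthly, 2))
```

The point is simply that the convenience of installments came at a premium: the buyer here hands over $320 in total for a $295 item, at roughly $23 a month.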

But simply buying something had a major economic impact: somebody had to make what was bought. This was the era before robot technology, and most work was labour-intensive, i.e. people did the work. The person who made that product would get paid, and he (as it usually was in the 1920s) would not save all that money. He, too, would spend some of it, and someone somewhere else would have to make what he bought, and so that person would get paid. And so the cycle continued. This was the money-flow belief of John Maynard Keynes: if people were spending, then people had to be employed to make things. They got paid, spent their money, and so the cycle continued.
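The spending cycle described above is, in effect, a geometric series: if each earner re-spends a fixed fraction of what he receives, one purchase sets off wave after wave of smaller purchases. A minimal sketch, with an invented "propensity to consume":

```python
# Money-flow sketch of the spending cycle described above.
# Assumption (not from the article): each recipient re-spends a fixed
# fraction of what they receive and saves the rest.

def total_activity(initial_spend, propensity_to_consume, rounds=1000):
    """Sum the successive waves of spending set off by one purchase."""
    total, wave = 0.0, initial_spend
    for _ in range(rounds):
        total += wave
        wave *= propensity_to_consume   # the next earner re-spends this share
    return total

# $100 of spending, with 80 cents of every dollar re-spent.
print(round(total_activity(100, 0.8), 2))
```

With 80 cents of every dollar re-spent, $100 of initial spending generates about $500 of total activity, the closed-form multiplier 100 / (1 − 0.8).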

A good example was the motor car industry. The three big producers were Ford, Chrysler, and General Motors.

A boom in the car industry came from Ford, with the legendary Model T.

This was a car for the people. It was cheap: mass production had dropped its price to just $295 in 1928. The same car had cost $1,200 in 1909. By 1928, roughly 20% of all Americans had cars. The impact of Ford meant that others had to produce their own cheap cars to compete, and the benefits went to the consumer. Hire-purchase made cars such as these very affordable. There were also major spin-offs from this one industry: 20% of all American steel went to the car industry, along with 80% of all rubber, 75% of all plate glass, and 65% of all leather. 7 billion gallons of petrol were used each year and, of course, motels, garages, restaurants etc. all sprung up, and all these outlets employed people who got paid.

    To cope with the new cars new roads were built which employed a lot of people. But not everybody was happy with cars. Critics referred to cars as “prostitution on wheels” as young couples courted in them and gangsters started to use the more powerful models as getaway cars after robberies. But cars were definitely here to stay.

Not only were cars popular: radios (10 million sold by 1929), Hoovers, fridges, and telephones sold in huge numbers.

By 1928 even Herbert Hoover, soon to be president, was claiming that America had all but rid itself of poverty. The nation was fulfilling Calvin Coolidge's pronouncement: "The chief business of the American people is business."

But two groups did not prosper at all:

1) African Americans were forced to do menial labour for very poor wages in the southern states, living lives of misery in total poverty – misery the KKK made worse. In the northern states, decent jobs went to the white population, and discrimination was just as common in the north as in the South (though the Klan barely existed in the north, and neither did the violence seen in the South); many black families lived in ghettoes in the cities, in very poor conditions. In the 1920s the black population did not share in the economic boom. Their only real outlets were jazz and dancing – though these were done to entertain the richer white population – and sport, especially boxing.

2) The sharecroppers of the South and mid-America. These people rented land from landlords or took out mortgages to buy land to farm; when they could not afford the rent or mortgage payments, they were evicted. There was such a massive boost in food production that prices tumbled as farmers desperately, and unsuccessfully, tried to sell their produce. The European market was out of the question: Europe had retaliated against American tariffs by putting tariffs on American goods destined for the European market, making them far more expensive – and this included grain. Many farmers in the mid-west lost their homes. Unmarried male farmers became the legendary hobos – men who roamed the mid-American states on trains looking for part-time work.

These two groups were frequently forgotten in the "Jazz Age". To many people, they were "out of sight and out of mind". It appeared that everybody had money – even factory workers and shoe-shine boys on city streets. In fact, people had spare money and nothing to do with it. They invested whatever they could in the stock market on Wall Street, New York. There were huge fortunes to be made, and many invested money they could ill afford to lose. However, the lure was too great, and everybody knew there was money to be made.

Stockbrokers were at fault, as they were happy to accept a 'margin' to buy shares for a person – that is, accepting just 10% of the cost of the shares to be purchased. The rest was to be collected when the price of the shares went up – as it would, of course… By 1929, over 1 million people owned shares in America.
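The danger of those 10% margins is easy to show with invented numbers: the buyer controls ten times his cash, so a modest fall in prices wipes him out as fast as a rise enriches him. A hypothetical sketch:

```python
# Margin-buying sketch with hypothetical numbers.
# At 10% margin the buyer controls shares worth ten times the cash
# put down, so gains -- and losses -- are amplified tenfold.

def margin_outcome(cash_down, margin_fraction, price_change_pct):
    """Return the buyer's profit or loss on a margined position."""
    position = cash_down / margin_fraction   # total value of shares controlled
    return position * price_change_pct / 100

# $1,000 down at 10% margin controls $10,000 of stock.
print(margin_outcome(1000, 0.10, 20))    # a 20% rise in prices
print(margin_outcome(1000, 0.10, -20))   # a 20% fall in prices
```

A 20% rise turns $1,000 into a $2,000 profit; a 20% fall produces a $2,000 loss, twice the buyer's entire stake.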

    In October 1929, the Wall Street Crash occurred. Its impact was felt worldwide.


    Propaganda war: Weaponizing the internet

    MANILA, Philippines – On Saturday, September 3, 2016, the day after the Davao bombing, at least one anonymous Facebook account began to share a March 26, 2016 Rappler story, "Man with bomb nabbed at Davao checkpoint."

It was quickly picked up and shared by Facebook political advocacy pages for President Rodrigo Duterte. Other websites took the entire dated story and reposted it on their sites, like newstrendph.com, which is linked to Duterte News Global (the post has since been taken down). Other Facebook pages, such as Digong Duterte and Duterte Warrior, became active participants in this disinformation campaign. Soon after, these pages manually altered the times of their postings.

    This is disinformation because it led readers to think the man with the bomb was captured that day, September 3, when President Duterte declared a state of lawlessness in the aftermath of the bombing. Readers were duped into sharing a lie because the context changed the old headline.

That lie served a dual purpose: it led you to believe the government's draconian measure was justified and that it had acted just in the nick of time, but it also hit the credibility of a trusted news source – which was how these pages represented the story once Rappler alerted our community about it.

It was such an effective campaign that despite the developing news about the Davao bombing, this old story trended at number 1 and stayed among the top 10 stories on Rappler for more than 48 hours.

    Take another example: a post by Peter Tiu Lavina, Duterte's campaign spokesman, who attacked critics of the government's "war on drugs" with his statement about a 9-year-old girl who was raped and murdered.

    These are only some of the many disinformation campaigns we’ve seen since the election period: social media campaigns meant to shape public opinion, tear down reputations, and cripple traditional media institutions.

    This strategy of "death by a thousand cuts" uses the strength of the internet and exploits the algorithms that power social media to sow confusion and doubt.

    This series takes apart this new phenomenon triggered by technology and information’s exponential growth:

    Part 1 looks at the paid propaganda taking over social media

    Part 2 takes apart the new information ecosystem, its impact on human behavior, and how its weaknesses could be exploited and,

    Part 3 focuses on 26 fake accounts on Facebook, which together extend to a network that influences at least 3 million other accounts.

    Weaponizing the internet

    It’s a strategy of "death by a thousand cuts" – a chipping away at facts, using half-truths that fabricate an alternative reality by merging the power of bots and fake accounts on social media to manipulate real people.

    A bot is a program written to give an automated response to posts on social media, creating the perception that there’s a tidal wave of public opinion. Since this is machine-driven, it can manufacture thousands of posts per minute.
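Because no human sustains output at machine rates, posting frequency is one crude way to spot such accounts. The sketch below is illustrative only: the threshold and the data are invented, and real bot detection is far more involved than counting posts.

```python
# Crude bot heuristic suggested by the rates described above: flag any
# account whose posting rate in a window is implausibly high for a human.
# The 5-posts-per-minute threshold is an invented assumption.

from collections import Counter

def flag_suspected_bots(posts, window_minutes, max_human_rate=5.0):
    """posts: list of (account, post) pairs observed in the window.
    Returns the set of accounts posting faster than max_human_rate/min."""
    counts = Counter(account for account, _ in posts)
    return {acct for acct, n in counts.items()
            if n / window_minutes > max_human_rate}

# Toy data: 'bot_a' fires 600 posts in 10 minutes; two humans post a few times.
posts = [("bot_a", i) for i in range(600)] + [("ana", 1), ("ben", 1), ("ana", 2)]
print(flag_suspected_bots(posts, window_minutes=10))
```

The same idea scales up: an account manufacturing "thousands of posts per minute" stands out by rate alone, even before looking at content.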

    A fake account is a manufactured online identity, sometimes known as a troll depending on the account’s behavior. Not all trolls are part of a paid propaganda campaign, but for now let’s focus on the paid initiatives, which can pay a troll up to P100,000/month.

    Often, dozens of these fake accounts work together along with anonymous pages, strengthening each other’s reach for Facebook’s algorithms. These networks can work with or without bots.

    A small group of 3 operators, a source tells Rappler, can earn as much as P5 million a month.

    Because they often disregard truth and manipulate emotions, these networks easily game Facebook’s algorithm.

    In the Philippines and around the world, political advocacy pages, made specifically for Facebook, are cleverly positioned and engineered to take over your news feed.

    That allows these propaganda accounts to create a social movement that is widening the cracks in Philippine society by exploiting economic, regional, and political divides.

    It unleashed a flood of anger against Duterte critics that has created a chilling effect.

    “It was specifically brought into sharp relief during these past elections, where the amount of hatred and vitriol on the internet was just intolerable,” Vince Lazatin, Executive Director of Transparency & Accountability Network, said during a recent panel on Technology and the Public Debate. “It silenced people into submission. The trolls have found a way to weaponize the internet.”

    It’s not clear whether these accounts used for the campaign are working with official government channels today.

    What is clear is they share the same key message: a fanatic defense of Duterte, who’s portrayed as the father of the nation deserving the support of all Filipinos.

    This possible consolidation of the Duterte campaign machinery with state communications channels is dangerous.

    We only need to look to China, which fakes nearly 450 million social media comments a year, according to the Washington Post.

    This is the first time this sophisticated political propaganda machinery has been used in the Philippines.

    FUD - Fear, uncertainty, doubt

    Yet, this isn’t the first time social media has been used to manipulate public opinion here.

    The first groups to actively use the power of social media, including its dark side tactics, were corporations and their allies. They used a strategy popularized in the computer industry in the US known as FUD – which stands for Fear, Uncertainty and Doubt – a disinformation strategy that spreads negative or false information to fuel fear.

    FUD is commonly used in sales, marketing, public relations – and now – politics and propaganda.

As early as October 5, 2014, Rappler alerted the public about how interest groups were mobilizing fictitious social media resources at scale to disrupt online conversations.

The campaign used a combination of bots and fake accounts to essentially take over and shut down the telco promotional campaign #SmartFreeInternet.

In a nutshell, if you used the hashtag, it signaled a bot to message your account – sowing fear and doubt to trigger anger – a classic FUD campaign. That was coupled with fake accounts which continued the campaign. (The greenish-blue lines are bots, which attacked at such a high frequency that they effectively shut down the red Smart campaign.)

Here’s a map of the conversation, laying bare a familiar communist strategy: “surround the city from the countryside” – effectively shutting out the Smart Twitter account from its targeted millennials.

    First social media elections

    Social media came of age for politics during the election campaign for the May 2016 elections.

Long before Duterte decided to run, we had noticed that Davao City had one of the most engaged social media communities in the Philippines.

    Now we would see humans augmented by machines in both engagement and online polls.

    The first time we saw Twitter bots in politics seemed to happen by accident.

Four days after he declared his candidacy, from midnight to 2 am on November 25, 2015, more than 30,000 tweets mentioning Rodrigo Duterte were posted, at times reaching more than 700 tweets per minute. That’s more than the number of tweets posted when he declared he would run, and more than all the tweets about any presidential candidate over the previous 29 days.

    Thinking Machines did an analysis of the campaign using bots and discovered that politics had intersected with entertainment. An examination of the bot-like Twitter accounts showed their timelines full of KathNiel. (Read: KathNiel, Twitter bots, polls: Quality, not just buzz)

    What about online surveys which are used to gauge public opinion? Machines can influence that as well.

In December 2015, Rappler investigated technical manipulation of our online survey and discovered that 99% of votes from Russia, Korea, and China were for Mar Roxas (although there was a small number of these manufactured votes for Duterte as well). Deleting the fake votes changed the winner from Roxas to Duterte. (Read: Who gamed the Rappler election poll?)
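The clean-up described above – discard votes from the countries where the manufactured votes originated, then re-tally – can be sketched as follows. The candidate names and vote counts are invented for illustration; Rappler's actual data and method are not public here:

```python
# Poll clean-up sketch: drop votes from suspect countries and re-tally.
# Candidate names, country codes, and counts are invented.

from collections import Counter

def retally(votes, excluded_countries):
    """votes: list of (candidate, country_code) pairs.
    Returns tallies with the excluded countries' votes removed."""
    return Counter(c for c, country in votes if country not in excluded_countries)

votes = ([("Candidate A", "RU")] * 500 + [("Candidate A", "PH")] * 40
         + [("Candidate B", "PH")] * 120 + [("Candidate B", "KR")] * 5)

raw = Counter(c for c, _ in votes)          # tally including manufactured votes
clean = retally(votes, {"RU", "KR", "CN"})  # tally after dropping them
print(raw.most_common(1)[0][0], clean.most_common(1)[0][0])
```

With the invented data, the raw tally and the cleaned tally produce different winners, mirroring the flip from Roxas to Duterte that the investigation found.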

    Duterte social media campaign

    Social media was a crucial factor in electing this president.

    Former activist and ex-ABS-CBN sales chief Nic Gabunada headed Duterte’s social media efforts. He told Rappler in a May 31 interview that he built the network with P10 million and up to 500 volunteers, who tapped their own networks.

They were organized into 4 main groups: OFWs or overseas Filipino workers, Luzon, Visayas, and Mindanao. He said each volunteer handled between 300 and 6,000 members, but that the largest group had 800,000 members. (Read: Duterte’s P10M social media campaign: Organic, volunteer-driven)

    It was a decentralized campaign: each group created its own content, but the campaign narrative and key daily messages were centrally determined and cascaded for execution. Gabunada emphasized these posts were done by real people, not bots.

    Analysts agreed that the 2016 elections were the most engaged in Philippine history, but they also pointed out that the period highlighted some of the angriest and most vicious political discourse transforming our democracy.

    By March, two students at UP Los Baños had been threatened by an online mob.

    In a scene reminiscent of the Boston bombing witch hunt, Duterte supporters tracked cell phone numbers and harassed and threatened the students they perceived to be disrespectful of Duterte.

    At one point, they created a Facebook page demanding death for the student it named. (READ: #AnimatED: Online mob creates social media wasteland)

    Within 48 hours, Duterte’s camp asked his supporters to “take the moral high ground” online. (READ: Duterte to supporters: Be civil, intelligent, decent, compassionate)

    In April, a young woman who posted she was campaigning against Duterte was deluged with threats and harassment. (READ: 'Sana ma-rape ka’: Netizens bully anti-Duterte voter)

    Shortly before election day, she tested laws governing cyberbullying by filing 34 complaints in court.

    Boycott & attack media

    The day after he won, Duterte called for healing and his campaign team supported and trended his message using the hashtag #HealingStartsNow.

    The soon-to-be president made numerous controversial statements in late-night press conferences, including what could be seen as a justification for journalist killings and his wolf-whistling at a GMA7 reporter.

    By the beginning of June, Duterte announced he would boycott private media, channeling all statements and press conferences through the state television network PTV and RTVM.

    He didn’t break that boycott of private media until August 1.

    In those two months, the campaign machinery pivoted to propaganda and threats, first attacking ABS-CBN, then the Inquirer (largely because of its Kill List, which tracked extrajudicial killings).

    GMA7 and Rappler took the hot seat after Duterte wolf-whistled at a GMA7 reporter, Mariz Umali, at a press conference, and Rappler reporter Pia Ranada-Robles questioned him on it.

    DEATH THREAT. Rappler reporter Pia Ranada was attacked for asking the president questions on catcalling.

    DEATH THREAT. A Facebook user tells Rappler that he would not be surprised if one of our administrators or managers were murdered.

    The social media attacks were vicious and personal. They built on their campaign messages, continuing to rail against the Liberal Party and stoking fear of a “yellow army.”

    Anonymous and fake accounts rallied real people to create and spread memes with simple messages containing a grain of truth – the most efficient vehicle for FUD (fear, uncertainty, and doubt).

    When the leader of a nation refuses direct access to journalists, controls the narrative top-down through established state groups, and is echoed bottom-up by social media initiatives, it creates a chilling effect on two fronts.

    On September 19, the National Union of Journalists of the Philippines called on the government to investigate social media attacks against journalists Gretchen Malalad and Jamela Alindogan-Caudron.

    On September 22, President Duterte asked his supporters to stop threatening journalists.

    But his statement has done little to stem the propaganda attacks.

    Over the weekend, Reuters reporters Manny Mogato and Karen Lema were targeted after reporting President Duterte's remarks about Hitler.

    Shape perception, rewrite history

    These all impact public perception. Fallacious reasoning, leaps in logic, poisoning the well – these are only some of the propaganda techniques that have helped shift public opinion on key issues.

    Take for example what was once a prevailing acceptance of human rights and the idea of “innocent until proven guilty.” Today there seems to be a wide acceptance of murder, especially of drug pushers, and any attempt to question that is portrayed as part of a conspiracy theory.

    It’s part of the reason many silently accept that in just 11 weeks, 3,546 people have died in the government’s “war on drugs.” (The PNP later revised this figure to 3,145 on September 14, 2016.)

    After all, when someone criticizes the police or government on Facebook, immediate attacks are posted, including “someone should rape your daughter,” “how many people were raped by pushers,” “why not talk about those killed by drugs,” “mayaman kasi kayo,” and many more.

    Could it also be used to rewrite history?

    That was the charge against the Official Gazette on the 99th birthday of Ferdinand Marcos, who held power for nearly 21 years.

    Using a quote that subtly links Marcos to Duterte’s campaign for change, the caption’s revisions sparked outrage, especially after a conflict of interest surfaced: it was posted by a former Marcos staff member.

    Understanding what’s happening is a first step.

    Working together to separate fact from fiction is another step.

    Regardless of your political leaning, social media is a powerful tool, and if abused, the first casualty is the truth – which will have a direct impact on the quality of Philippine democracy. – Rappler.com

    Maria A. Ressa

    Maria Ressa has been a journalist in Asia for nearly 35 years. As Rappler's co-founder, executive editor and CEO, she has endured constant political harassment and arrests by the Duterte government. For her courage and work on disinformation and 'fake news,' Maria was named Time Magazine’s 2018 Person of the Year, was among its 100 Most Influential People of 2019, and has also been named one of Time's Most Influential Women of the Century. She was also part of BBC's 100 most inspiring and influential women of 2019 and Prospect magazine's world's top 50 thinkers, and has won many awards for her contributions to journalism and human rights. Before founding Rappler, Maria focused on investigating terrorism in Southeast Asia. She opened and ran CNN's Manila Bureau for nearly a decade before opening the network's Jakarta Bureau, which she ran from 1995 to 2005. She wrote Seeds of Terror: An Eyewitness Account of al-Qaeda’s Newest Center of Operations in Southeast Asia and From Bin Laden to Facebook: 10 Days of Abduction, 10 Years of Terrorism.
