Cars now run on data. We hacked one to find out what it knows about you.
Washington Post tech columnist Geoffrey A. Fowler cracked open a Chevrolet to find an always-on Internet connection and data from his smartphone. (Jonathan Baran/The Washington Post)
Behind the wheel, it’s nothing but you, the open road — and your car quietly recording your every move.
On a recent drive, a 2017 Chevrolet collected my precise location. It stored my phone’s ID and the people I called. It judged my acceleration and braking style, beaming back reports to its maker General Motors over an always-on Internet connection.
Cars have become the most sophisticated computers many of us own, filled with hundreds of sensors. Even older models know an awful lot about you. Many copy over personal data as soon as you plug in a smartphone.
But for the thousands you spend to buy a car, the data it produces doesn’t belong to you. My Chevy’s dashboard didn’t say what the car was recording. It wasn’t in the owner’s manual. There was no way to download it.
To glimpse my car data, I had to hack my way in.
We’re at a turning point for driving surveillance: In the 2020 model year, most new cars sold in the United States will come with built-in Internet connections, including 100 percent of Fords, GMs and BMWs, and all but one model each from Toyota and Volkswagen. (This independent cellular service is often included free or sold as an add-on.) Cars are becoming smartphones on wheels, sending and receiving data from apps, insurance firms and pretty much wherever their makers want. Some brands even reserve the right to use the data to track you down if you don’t pay your bills.
When I buy a car, I assume the data I produce is owned by me — or at least is controlled by me. Many automakers do not. They act like how and where we drive, also known as telematics, isn’t personal information.
Cars now run on the new oil: your data. It is fundamental to a future of transportation where vehicles drive themselves and we hop into whatever one is going our way. Data isn’t the enemy. Connected cars already do good things like improve safety and send you service alerts that are much more helpful than a check-engine light in the dash.
But we’ve been down this fraught road before with smart speakers, smart TVs, smartphones and all the other smart things we now realize are playing fast and loose with our personal lives. Once information about our lives gets shared, sold or stolen, we lose control.
There are no federal laws regulating what carmakers can collect or do with our driving data. And carmakers lag in taking steps to protect us and draw lines in the sand. Most hide what they’re collecting and sharing behind privacy policies written in the kind of language only a lawyer’s mother could love.
Car data has a secret life. To find out what a car knows about me, I borrowed some techniques from crime scene investigators.
What your car knows
Jim Mason hacks into cars for a living, but usually just to better understand crashes and thefts. The Caltech-trained engineer works in Oakland, Calif., for a firm called ARCCA that helps reconstruct accidents. He agreed to help conduct a forensic analysis of my privacy.
I chose a Chevrolet as our test subject because its maker, GM, has had longer than any other automaker to figure out data transparency. It began connecting cars with its OnStar service in 1996, initially to summon emergency assistance. Today GM has more than 11 million 4G LTE data-equipped vehicles on the road, including free basic service and extras you pay for. I found a volunteer, Doug, who let us peer inside his two-year-old Chevy Volt.
I met Mason at an empty warehouse, where he began by explaining one important bit of car anatomy. Modern vehicles don’t just have one computer. There are multiple, interconnected brains that can generate up to 25 gigabytes of data per hour from sensors all over the car. Even with Mason’s gear, we could only access some of these systems.
This kind of hacking isn’t a security risk for most of us — it requires hours of physical access to a vehicle. Mason brought a laptop, special software, a box of circuit boards, and dozens of sockets and screwdrivers.
We focused on the computer with the most accessible data: the infotainment system. You might think of it as the car’s touch-screen audio controls, yet many systems interact with it, from navigation to a synced-up smartphone. The only problem? This computer is buried beneath the dashboard.
After an hour of prying and unscrewing, our Chevy’s interior looked like it had been lobotomized. But Mason had extracted the infotainment computer, about the size of a small lunchbox. He clipped it into a circuit board, which fed into his laptop. The data didn’t copy over in our first few attempts. “There is a lot of trial and error,” said Mason.
(Don’t try this at home. Seriously — we had to take the car into a repair shop to get the infotainment computer reset.)
It was worth the trouble when Mason showed me my data. There on a map was the precise location where I’d driven to take apart the Chevy. There were my other destinations, like the hardware store I’d stopped at to buy some tape.
Among the trove of data points were unique identifiers for my and Doug’s phones, and a detailed log of phone calls from the previous week. There was a long list of contacts, right down to people’s addresses, emails and even photos.
For a broader view, Mason also extracted the data from a Chevrolet infotainment computer that I bought used on eBay for $375. It contained enough data to reconstruct the Upstate New York travels and relationships of a total stranger. We know he or she frequently called someone listed as “Sweetie,” whose photo we also have. We could see the exact Gulf station where they bought gas, the restaurant where they ate (called Taste China) and the unique identifiers for their Samsung Galaxy Note phones.
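Forensic extraction of this kind usually means dumping the unit’s storage and then querying whatever databases turn up inside; many infotainment systems keep their logs in ordinary SQLite files. As a purely illustrative sketch — the table and column names here are hypothetical, not GM’s actual schema — reading a recovered location log might look like this:

```python
import sqlite3

# Build a toy copy of the kind of SQLite log an infotainment dump might
# contain. Table and column names are invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE location_log (ts TEXT, lat REAL, lon REAL, source TEXT)"
)
conn.executemany(
    "INSERT INTO location_log VALUES (?, ?, ?, ?)",
    [
        ("2019-12-01 09:14:02", 37.8044, -122.2712, "gps"),
        ("2019-12-01 09:31:47", 37.8116, -122.2648, "gps"),
    ],
)

# The forensic pass is mostly just querying: where was this car, and when?
rows = conn.execute(
    "SELECT ts, lat, lon FROM location_log ORDER BY ts"
).fetchall()
for ts, lat, lon in rows:
    print(f"{ts}: ({lat:.4f}, {lon:.4f})")
```

The point is how little reconstruction is needed once the storage is in hand: a timestamped table of coordinates is already a travel diary.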
Infotainment systems can collect even more. Mason has hacked into Fords that record locations once every few minutes, even when you don’t use the navigation system. He’s seen German cars with 300-gigabyte hard drives — five times as much as a basic iPhone 11. The Tesla Model 3 can collect video snippets from the car’s many cameras. Coming next: face data, used to personalize the vehicle and track driver attention.
In our Chevy, we probably glimpsed just a fraction of what GM knows. We didn’t see what was uploaded to GM’s computers, because we couldn’t access the live OnStar cellular connection. (Researchers have done those kinds of hacks before to prove connected vehicles can be remotely controlled.)
GM spokesman David Caldwell declined to offer specifics on Doug’s Chevy but said the data GM collects generally falls into three categories: vehicle location, vehicle performance and driver behavior. “Much of this data is highly technical, not linkable to individuals and doesn’t leave the vehicle itself,” he said.
The company, he said, collects real-time data to monitor vehicle performance to improve safety and to help design future products and services.
But there were clues to what more GM knows on its website and app. It offers a Smart Driver score — a measure of good driving — based on how hard you brake and turn and how often you drive late at night. GM will share that score with insurance companies, if you want. With paid OnStar service, I could see the car’s exact location on demand. It also offers in-vehicle WiFi and remote key access for Amazon package deliveries. An OnStar Marketplace connects the vehicle directly with third-party apps for Domino’s, IHOP, Shell and others.
It’s likely GM and other automakers keep just a slice of the data cars generate. But think of that as a temporary phenomenon. Coming 5G cellular networks promise to link cars to the Internet with ultra-fast, ultra-high-capacity connections. As wireless connections get cheaper and data becomes more valuable, anything the car knows about you is fair game.
GM’s view, echoed by many other automakers, is that we gave them permission for all of this. “Nothing happens without customer consent,” said GM’s Caldwell.
When my volunteer Doug bought his Chevy, he didn’t even realize OnStar basic service came standard. (I don’t blame him — who really knows what all they’re initialing on a car purchase contract?) There is no button or menu inside the Chevy to shut off OnStar or other data collection, though GM says it has added one to newer vehicles. Customers can press the console OnStar button and ask a representative to remotely disconnect.
What’s the worry? From conversations with industry insiders, I know many automakers haven’t totally figured out what to do with the growing amounts of driving data we generate. But that’s hardly stopping them from collecting it.
Five years ago, 20 automakers signed on to voluntary privacy standards, pledging to “provide customers with clear, meaningful information about the types of information collected and how it is used,” as well as “ways for customers to manage their data.” But when I called eight of the largest automakers, not even one offered a dashboard for customers to look at, download and control their data.
Automakers haven’t had a data reckoning yet, but they’re due for one. GM ran an experiment in which it tracked the radio music tastes of 90,000 volunteer drivers to look for patterns with where they traveled. According to the Detroit Free Press, GM told marketers that the data might help them persuade a country music fan who normally stopped at Tim Horton’s to go to McDonald’s instead.
GM would not tell me exactly what data it collected for that program but said “personal information was not involved” because it was anonymized data. (Privacy advocates have warned that location data is personal: it can be re-identified with specific individuals because we follow such unique patterns.)
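The re-identification worry is easy to demonstrate. Even a pair of frequently visited points — say, where a car parks overnight and where it parks on weekdays — is often unique to one person. A minimal sketch, with made-up names and coordinates rather than any real dataset:

```python
# Toy demonstration: "anonymized" home/work location pairs are still
# unique enough to re-identify a driver. All names and points are invented.
known_residents = {
    "driver_a": ((37.80, -122.27), (37.79, -122.39)),  # (home, work)
    "driver_b": ((37.80, -122.27), (37.33, -121.89)),
    "driver_c": ((38.58, -121.49), (38.56, -121.47)),
}

# An "anonymous" telematics trace: just two frequently visited points,
# with no name attached.
anonymous_trace = ((37.80, -122.27), (37.33, -121.89))

# Matching the pair against other records puts the name right back.
matches = [
    name for name, pair in known_residents.items() if pair == anonymous_trace
]
print(matches)  # → ['driver_b']
```

With real GPS traces the matching uses proximity rather than exact equality, but the principle is the same: stripping the name from a location history does not make it anonymous.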
Automakers say they put data security first. But I suspect they’re just not used to customers demanding transparency. They also probably want to have sole control over the data, given that the industry’s existential threats — self-driving and ride-hailing technologies — are built on it.
But not opening up brings problems, too. Automakers are battling with repair shops in Massachusetts about a proposal that would require car companies to grant owners — and mechanics — access to telematics data. The Auto Care Association says locking out independent shops could give consumers fewer choices and make us end up paying more for service. The automakers say it’s a security and privacy risk.
In 2020, the California Consumer Privacy Act will require any company that collects personal data about the state’s residents to provide access to the data and give people the ability to opt out of its sharing. GM said it would comply with the law but didn’t say how.
Are any carmakers better? Among the privacy policies I read, Toyota’s stood out for drawing a few clear lines in the sand about data sharing. It says it won’t share “personal information” with data resellers, social networks or ad networks — but still carves out the right to share what it calls “vehicle data” with business partners.
Until automakers put even a fraction of the effort they put into TV commercials into giving us control over our data, I’d be wary about using in-vehicle apps or signing up for additional data services. At least smartphone apps like Google Maps let you turn off and delete location history.
And Mason’s hack brought home a scary reality: Simply plugging a smartphone into a car could put your data at risk. If you’re selling your car or returning a lease or rental, take the time to delete the data saved on its infotainment system. An app called Privacy4Cars offers model-by-model directions. Mason gives out gifts of car-lighter USB plugs, which let you charge a phone without connecting it to the car computer. (You can buy inexpensive ones online.)
If you’re buying a new vehicle, tell the dealer you want to know about connected services — and how to turn them off. Few offer an Internet “kill switch,” but they may at least allow you to turn off location tracking.
Or, for now at least, you can just buy an old car. Mason, for one, drives a conspicuously non-connected 1992 Toyota.
The perfect tool for the perfect murder
This being said, we’re dealing here with the perfect tool for the perfect murder. Speaking of which, we will soon be commemorating 10 years since the death of Michael Hastings in 2013. #NeverForget
Here’s DARPA talking about hacking cars just months before Michael Hastings’s suspicious death:
Nowadays, with the Pentagon, the WEF and the Bilderbergers freaking out about the demise of their low-IQ fake-news media and the advent of independent journalism, this report alone is enough to get us targeted by a bunch of agencies that commonly use Pegasus and likely more advanced technology we haven’t even found out about.
You can’t hope for much from a truther who drives computerized cars. Not since 2013.
To be continued? Our work and existence, as media and people, are funded solely by our most generous readers, and we want to keep it this way. Help SILVIEW.media survive and grow, please donate here, anything helps. Thank you!
! Articles can always be subject to later editing as a way of perfecting them
For years, the Pentagon tried to convince the public that it was working on your dream secretary. Can you believe that? Funny how much those plans looked just like today’s Google and Facebook. But it’s not just the looks, it’s also the money, the timeline and the personal connections. Funnier still how the funding scheme was often similar to the one used for Wuhan, with proxy organizations used as middlemen.
It’s a memory aid! A robotic assistant! An epidemic detector! An all-seeing, ultra-intrusive spying program!
The Pentagon is about to embark on a stunningly ambitious research project designed to gather every conceivable bit of information about a person’s life, index all the information and make it searchable.
What national security experts and civil libertarians want to know is, why would the Defense Department want to do such a thing?
The embryonic LifeLog program would dump everything an individual does into a giant database: every e-mail sent or received, every picture taken, every Web page surfed, every phone call made, every TV show watched, every magazine read.
All of this — and more — would combine with information gleaned from a variety of sources: a GPS transmitter to keep tabs on where that person went, audio-visual sensors to capture what he or she sees or says, and biomedical monitors to keep track of the individual’s health.
This gigantic amalgamation of personal information could then be used to “trace the ‘threads’ of an individual’s life,” to see exactly how a relationship or events developed, according to a briefing from the Defense Advanced Research Projects Agency, LifeLog’s sponsor.
Someone with access to the database could “retrieve a specific thread of past transactions, or recall an experience from a few seconds ago or from many years earlier … by using a search-engine interface.”
On the surface, the project seems like the latest in a long line of DARPA’s “blue sky” research efforts, most of which never make it out of the lab. But DARPA is currently asking businesses and universities for research proposals to begin moving LifeLog forward. And some people, such as Steven Aftergood, a defense analyst with the Federation of American Scientists, are worried.
With its controversial Total Information Awareness database project, DARPA already is planning to track all of an individual’s “transactional data” — like what we buy and who gets our e-mail.
While the parameters of the project have not yet been determined, Aftergood said he believes LifeLog could go far beyond TIA’s scope, adding physical information (like how we feel) and media data (like what we read) to this transactional data.
“LifeLog has the potential to become something like ‘TIA cubed,’” he said.
In the private sector, a number of LifeLog-like efforts already are underway to digitally archive one’s life — to create a “surrogate memory,” as minicomputer pioneer Gordon Bell calls it.
Bell, now with Microsoft, scans all his letters and memos, records his conversations, saves all the Web pages he’s visited and e-mails he’s received and puts them into an electronic storehouse dubbed MyLifeBits.
DARPA’s LifeLog would take this concept several steps further by tracking where people go and what they see.
That makes the project similar to the work of University of Toronto professor Steve Mann. Since his teen years in the 1970s, Mann, a self-styled “cyborg,” has worn a camera and an array of sensors to record his existence. He claims he’s convinced 20 to 30 of his current and former students to do the same. It’s all part of an experiment into “existential technology” and “the metaphysics of free will.”
DARPA isn’t quite so philosophical about LifeLog. But the agency does see some potential battlefield uses for the program.
“The technology could allow the military to develop computerized assistants for war fighters and commanders that can be more effective because they can easily access the user’s past experiences,” DARPA spokeswoman Jan Walker speculated in an e-mail.
It also could allow the military to develop more efficient computerized training systems, she said: Computers could remember how each student learns and interacts with the training system, then tailor the lessons accordingly.
John Pike, director of defense think tank GlobalSecurity.org, said he finds the explanations “hard to believe.”
“It looks like an outgrowth of Total Information Awareness and other DARPA homeland security surveillance programs,” he added in an e-mail.
Sure, LifeLog could be used to train robotic assistants. But it also could become a way to profile suspected terrorists, said Cory Doctorow, with the Electronic Frontier Foundation. In other words, Osama bin Laden’s agent takes a walk around the block at 10 each morning, buys a bagel and a newspaper at the corner store and then calls his mother. You do the same things — so maybe you’re an al Qaeda member, too!
“The more that an individual’s characteristic behavior patterns — ‘routines, relationships and habits’ — can be represented in digital form, the easier it would become to distinguish among different individuals, or to monitor one,” Aftergood, the Federation of American Scientists analyst, wrote in an e-mail.
In its LifeLog report, DARPA makes some nods to privacy protection, like when it suggests that “properly anonymized access to LifeLog data might support medical research and the early detection of an emerging epidemic.”
But before these grand plans get underway, LifeLog will start small. Right now, DARPA is asking industry and academics to submit proposals for 18-month research efforts, with a possible 24-month extension. (DARPA is not sure yet how much money it will sink into the program.)
The researchers will be the centerpiece of their own study.
Like a game show, winning this DARPA prize eventually will earn the lucky scientists a trip for three to Washington, D.C. Except on this excursion, every participating scientist’s e-mail to the travel agent, every padded bar bill and every mad lunge for a cab will be monitored, categorized and later dissected.
Bending a bit to privacy concerns, the Pentagon changes some of the experiments to be conducted for LifeLog, its effort to record every tidbit of information and encounter in daily life. No video recording of unsuspecting people, for example.
Monday is the deadline for researchers to submit bids to build the Pentagon’s so-called LifeLog project, an experiment to create an all-encompassing über-diary.
But while teams of academics and entrepreneurs are jostling for the 18- to 24-month grants to work on the program, the Defense Department has changed the parameters of the project to respond to a tide of privacy concerns.
Lifelog is the Defense Advanced Research Projects Agency’s effort to gather every conceivable element of a person’s life, dump it all into a database, and spin the information into narrative threads that trace relationships, events and experiences.
It’s an attempt, some say, to make a kind of surrogate, digitized memory.
“My father was a stroke victim, and he lost the ability to record short-term memories,” said Howard Shrobe, an MIT computer scientist who’s leading a team of professors and researchers in a LifeLog bid. “If you ever saw the movie Memento, he had that. So I’m interested in seeing how memory works after seeing a broken one. LifeLog is a chance to do that.”
Researchers who receive LifeLog grants will be required to test the system on themselves. Cameras will record everything they do during a trip to Washington, D.C., and global-positioning satellite locators will track where they go. Biomedical sensors will monitor their health. All the e-mail they send, all the magazines they read, all the credit card payments they make will be indexed and made searchable.
By capturing experiences, Darpa claims that LifeLog could help develop more realistic computerized training programs and robotic assistants for battlefield commanders.
Defense analysts and civil libertarians, on the other hand, worry that the program is another piece in an ongoing Pentagon effort to keep tabs on American citizens. LifeLog could become the ultimate profiling tool, they fear.
A firestorm of criticism ignited after LifeLog first became public in May. Some potential bidders for the LifeLog contract dropped out as a result.
“I’m interested in LifeLog, but I’m going to shy away from it,” said Les Vogel, a computer science researcher in Maui, Hawaii. “Who wants to get in the middle of something that gets that much bad press?”
New York Times columnist William Safire noted that while LifeLog researchers might be comfortable recording their lives, the people that the LifeLoggers are “looking at, listening to, sniffing or conspiring with to blow up the world” might not be so thrilled about turning over some of their private interchanges to the Pentagon.
In response, Darpa changed the LifeLog proposal request. Now: “LifeLog researchers shall not capture imagery or audio of any person without that person’s a priori express permission. In fact, it is desired that capture of imagery or audio of any person other than the user be avoided even if a priori permission is granted.”
While the Pentagon’s project to record and catalog a person’s life scares privacy advocates, researchers see it as a step in the process of getting computers to think like humans.
To Pentagon researchers, capturing and categorizing every aspect of a person’s life is only the beginning.
LifeLog — the controversial Defense Department initiative to track everything about an individual — is just one step in a larger effort, according to a top Pentagon research director. Personalized digital assistants that can guess our desires should come first. And then, just maybe, we’ll see computers that can think for themselves.
Computer scientists have dreamed for decades of building machines with minds of their own. But these hopes have been overwhelmed again and again by the messy, dizzying complexities of the real world.
In recent months, the Defense Advanced Research Projects Agency has launched a series of seemingly disparate programs — all designed, the agency says, to help computers deal with the complexities of life, so they finally can begin to think.
“Our ultimate goal is to build a new generation of computer systems that are substantially more robust, secure, helpful, long-lasting and adaptive to their users and tasks. These systems will need to reason, learn and respond intelligently to things they’ve never encountered before,” said Ron Brachman, the recently installed chief of Darpa’s Information Processing Technology Office, or IPTO. A former senior executive at AT&T Labs, Brachman was elected president of the American Association for Artificial Intelligence last year.
LifeLog is the best-known of these projects. The controversial program intends to record everything about a person — what he sees, where he goes, how he feels — and dump it into a database. Once captured, the information is supposed to be spun into narrative threads that trace relationships, events and experiences.
For years, researchers have been able to get programs to make sense of limited, tightly proscribed situations. Navigating outside of the lab has been much more difficult. Until recently, even getting a robot to walk across the room on its own was a tricky task.
“LifeLog is about forcing computers into the real world,” said leading artificial intelligence researcher Doug Lenat, who’s bidding on the project.
What LifeLog is not, Brachman asserts, is a program to track terrorists. By capturing so much information about an individual, and by combing relationships and traits out of that data, LifeLog appears to some civil libertarians to be an almost limitless tool for profiling potential enemies of the state. Concerns over the Terrorism Information Awareness database effort have only heightened sensitivities.
“These technologies developed by the military have obvious, easy paths to Homeland Security deployments,” said Lee Tien, with the Electronic Frontier Foundation.
Brachman said it is “up to military leaders to decide how to use our technology in support of their mission,” but he repeatedly insisted that IPTO has “absolutely no interest or intention of using any of our technology for profiling.”
What Brachman does want to do is create a computerized assistant that can learn about the habits and wishes of its human boss. And the first step toward this goal is for machines to start seeing, and remembering, life like people do.
Human beings don’t dump their experiences into some formless database or tag them with a couple of keywords. They divide their lives into discrete installments — “college,” “my first date,” “last Thursday.” Researchers call this “episodic memory.”
LifeLog is about trying to install episodic memory into computers, Brachman said. It’s about getting machines to start “remembering experiences in the commonsensical way we do — a vacation in Bermuda, a taxi ride to the airport.”
IPTO recently handed out $29 million in research grants to create a Perceptive Assistant that Learns, or PAL, that can draw on these episodes and improve itself in the process. If people keep missing conferences during rush hour, PAL should learn to schedule meetings when traffic isn’t as thick. If PAL’s boss keeps sending angry notes to spammers, the software secretary eventually should just start flaming on its own.
In the 1980s, artificial intelligence researchers promised to create programs that could do just that. Darpa even promoted a thinking “pilot’s associate — a kind of R2D2,” said Alex Roland, author of The Race for Machine Intelligence: Darpa, DoD, and the Strategic Computing Initiative.
But the field “fell on its face,” according to University of Washington computer scientist Henry Kautz. Instead of trying to teach computers how to reason on their own, “we said, ‘Well, if we just keep adding more rules, we could cover every case imaginable.'”
It’s an impossible task, of course. Every circumstance is different, and there will never be enough stipulations to cover them all.
A few computer programs, with enough training from their human masters, can make some assumptions about new situations on their own, however. Amazon.com’s system for recommending books and music is one of these.
But these efforts are limited, too. Everyone’s received downright kooky suggestions from that Amazon program.
Overcoming these limitations requires a combination of logical approaches. That’s a goal behind IPTO’s new call for research into computers that can handle real-world reasoning.
It’s one of several problems Brachman said are “absolutely imperative” to solve as quickly as possible.
Although computer systems are getting more complicated every day, this complexity “may be actually reversing the information revolution,” he noted in a recent presentation. “Systems have grown more rigid, more fragile and increasingly open to attack.”
What’s needed, he asserts, is a computer network that can teach itself new capabilities, without having to be reprogrammed every time. Computers should be able to adapt to how their users like to work, spot when they’re being attacked and develop responses to these assaults. Think of it like the body’s immune system — or like a battlefield general.
But to act more like a person, a computer has to soak up its own experiences, like a human being does. It has to create a catalog of its existence. A LifeLog, if you will.
The Pentagon canceled its so-called LifeLog project, an ambitious effort to build a database tracking a person’s entire existence.
Run by Darpa, the Defense Department’s research arm, LifeLog aimed to gather in a single place just about everything an individual says, sees or does: the phone calls made, the TV shows watched, the magazines read, the plane tickets bought, the e-mail sent and received. Out of this seemingly endless ocean of information, computer scientists would plot distinctive routes in the data, mapping relationships, memories, events and experiences.
LifeLog’s backers said the all-encompassing diary could have turned into a near-perfect digital memory, giving its users computerized assistants with an almost flawless recall of what they had done in the past. But civil libertarians immediately pounced on the project when it debuted last spring, arguing that LifeLog could become the ultimate tool for profiling potential enemies of the state.
Researchers close to the project say they’re not sure why it was dropped late last month. Darpa hasn’t provided an explanation for LifeLog’s quiet cancellation. “A change in priorities” is the only rationale agency spokeswoman Jan Walker gave to Wired News.
However, related Darpa efforts concerning software secretaries and mechanical brains are still moving ahead as planned.
LifeLog is the latest in a series of controversial programs that have been canceled by Darpa in recent months. The Terrorism Information Awareness, or TIA, data-mining initiative was eliminated by Congress — although many analysts believe its research continues on the classified side of the Pentagon’s ledger. The Policy Analysis Market (or FutureMap), which provided a stock market of sorts for people to bet on terror strikes, was almost immediately withdrawn after its details came to light in July.
“I’ve always thought (LifeLog) would be the third program (after TIA and FutureMap) that could raise eyebrows if they didn’t make it clear how privacy concerns would be met,” said Peter Harsha, director of government affairs for the Computing Research Association.
“Darpa’s pretty gun-shy now,” added Lee Tien, with the Electronic Frontier Foundation, which has been critical of many agency efforts. “After TIA, they discovered they weren’t ready to deal with the firestorm of criticism.”
That’s too bad, artificial-intelligence researchers say. LifeLog would have addressed one of the key issues in developing computers that can think: how to take the unstructured mess of life, and recall it as discrete episodes — a trip to Washington, a sushi dinner, construction of a house.
“Obviously we’re quite disappointed,” said Howard Shrobe, who led a team from the Massachusetts Institute of Technology Artificial Intelligence Laboratory which spent weeks preparing a bid for a LifeLog contract. “We were very interested in the research focus of the program … how to help a person capture and organize his or her experience. This is a theme with great importance to both AI and cognitive science.”
To Tien, the project’s cancellation means “it’s just not tenable for Darpa to say anymore, ‘We’re just doing the technology, we have no responsibility for how it’s used.'”
Private-sector research in this area is proceeding. At Microsoft, for example, minicomputer pioneer Gordon Bell’s program, MyLifeBits, continues to develop ways to sort and store memories.
David Karger, Shrobe’s colleague at MIT, thinks such efforts will still go on at Darpa, too.
“I am sure that such research will continue to be funded under some other title,” wrote Karger in an e-mail. “I can’t imagine Darpa ‘dropping out’ of such a key research area.”
Google: seeded by the Pentagon
By Dr. Nafeez Ahmed
In 1994 — the same year the Highlands Forum was founded under the stewardship of the Office of the Secretary of Defense, the ONA, and DARPA — two young PhD students at Stanford University, Sergey Brin and Larry Page, made their breakthrough on the first automated web crawling and page ranking application. That application remains the core component of what eventually became Google’s search service. Brin and Page had performed their work with funding from the Digital Library Initiative (DLI), a multi-agency programme of the National Science Foundation (NSF), NASA and DARPA.
Throughout the development of the search engine, Sergey Brin reported regularly and directly to two people who were not Stanford faculty at all: Dr. Bhavani Thuraisingham and Dr. Rick Steinheiser. Both were representatives of a sensitive US intelligence community research programme on information security and data-mining.
Thuraisingham is currently the Louis A. Beecherl distinguished professor and executive director of the Cyber Security Research Institute at the University of Texas, Dallas, and a sought-after expert on data-mining, data management and information security issues. But in the 1990s, she worked for the MITRE Corp., a leading US defense contractor, where she managed the Massive Digital Data Systems initiative, a project sponsored by the NSA, CIA, and the Director of Central Intelligence, to foster innovative research in information technology.
“We funded Stanford University through the computer scientist Jeffrey Ullman, who had several promising graduate students working on many exciting areas,” Prof. Thuraisingham told me. “One of them was Sergey Brin, the founder of Google. The intelligence community’s MDDS program essentially provided Brin seed-funding, which was supplemented by many other sources, including the private sector.”
This sort of funding is certainly not unusual, and Sergey Brin’s being able to receive it by being a graduate student at Stanford appears to have been incidental. The Pentagon was all over computer science research at this time. But it illustrates how deeply entrenched the culture of Silicon Valley is in the values of the US intelligence community.
In an extraordinary document hosted by the website of the University of Texas, Thuraisingham recounts that from 1993 to 1999, “the Intelligence Community [IC] started a program called Massive Digital Data Systems (MDDS) that I was managing for the Intelligence Community when I was at the MITRE Corporation.” The program funded 15 research efforts at various universities, including Stanford. Its goal was developing “data management technologies to manage several terabytes to petabytes of data,” including for “query processing, transaction management, metadata management, storage management, and data integration.”
At the time, Thuraisingham was chief scientist for data and information management at MITRE, where she led team research and development efforts for the NSA, CIA, US Air Force Research Laboratory, as well as the US Navy’s Space and Naval Warfare Systems Command (SPAWAR) and Communications and Electronic Command (CECOM). She went on to teach courses for US government officials and defense contractors on data-mining in counter-terrorism.
In her University of Texas article, she attaches the copy of an abstract of the US intelligence community’s MDDS program that had been presented to the “Annual Intelligence Community Symposium” in 1995. The abstract reveals that the primary sponsors of the MDDS programme were three agencies: the NSA, the CIA’s Office of Research & Development, and the intelligence community’s Community Management Staff (CMS), which operates under the Director of Central Intelligence. Administrators of the program, which provided funding of around $3–4 million per year for 3–4 years, were identified as Hal Curran (NSA), Robert Kluttz (CMS), Dr. Claudia Pierce (NSA), Dr. Rick Steinheiser (ORD — standing for the CIA’s Office of Research and Development), and Dr. Thuraisingham herself.
Thuraisingham goes on in her article to reiterate that this joint CIA-NSA program partly funded Sergey Brin to develop the core of Google, through a grant to Stanford managed by Brin’s supervisor Prof. Jeffrey D. Ullman:
“In fact, the Google founder Mr. Sergey Brin was partly funded by this program while he was a PhD student at Stanford. He together with his advisor Prof. Jeffrey Ullman and my colleague at MITRE, Dr. Chris Clifton [Mitre’s chief scientist in IT], developed the Query Flocks System which produced solutions for mining large amounts of data stored in databases. I remember visiting Stanford with Dr. Rick Steinheiser from the Intelligence Community and Mr. Brin would rush in on roller blades, give his presentation and rush out. In fact the last time we met in September 1998, Mr. Brin demonstrated to us his search engine which became Google soon after.”
Brin and Page officially incorporated Google as a company in September 1998, the very month they last reported to Thuraisingham and Steinheiser. ‘Query Flocks’ was also part of Google’s patented ‘PageRank’ search system, which Brin developed at Stanford under the CIA-NSA-MDDS programme, as well as with funding from the NSF, IBM and Hitachi. That year, MITRE’s Dr. Chris Clifton, who worked under Thuraisingham to develop the ‘Query Flocks’ system, co-authored a paper with Brin’s supervisor, Prof. Ullman, and the CIA’s Rick Steinheiser. Titled ‘Knowledge Discovery in Text,’ the paper was presented at an academic conference.
“The MDDS funding that supported Brin was significant as far as seed-funding goes, but it was probably outweighed by the other funding streams,” said Thuraisingham. “The duration of Brin’s funding was around two years or so. In that period, I and my colleagues from the MDDS would visit Stanford to see Brin and monitor his progress every three months or so. We didn’t supervise exactly, but we did want to check progress, point out potential problems and suggest ideas. In those briefings, Brin did present to us on the query flocks research, and also demonstrated to us versions of the Google search engine.”
Brin thus reported to Thuraisingham and Steinheiser regularly about his work developing Google.
UPDATE 2.05PM GMT [2nd Feb 2015]:
Since publication of this article, Prof. Thuraisingham has amended her article referenced above. The amended version includes a new modified statement, followed by a copy of the original version of her account of the MDDS. In this amended version, Thuraisingham rejects the idea that CIA funded Google, and says instead:
“In fact Prof. Jeffrey Ullman (at Stanford) and my colleague at MITRE Dr. Chris Clifton together with some others developed the Query Flocks System, as part of MDDS, which produced solutions for mining large amounts of data stored in databases. Also, Mr. Sergey Brin, the cofounder of Google, was part of Prof. Ullman’s research group at that time. I remember visiting Stanford with Dr. Rick Steinheiser from the Intelligence Community periodically and Mr. Brin would rush in on roller blades, give his presentation and rush out. During our last visit to Stanford in September 1998, Mr. Brin demonstrated to us his search engine which I believe became Google soon after…
There are also several inaccuracies in Dr. Ahmed’s article (dated January 22, 2015). For example, the MDDS program was not a ‘sensitive’ program as stated by Dr. Ahmed; it was an Unclassified program that funded universities in the US. Furthermore, Sergey Brin never reported to me or to Dr. Rick Steinheiser; he only gave presentations to us during our visits to the Department of Computer Science at Stanford during the 1990s. Also, MDDS never funded Google; it funded Stanford University.”
Here, there is no substantive factual difference in Thuraisingham’s accounts, other than to assert that her statement associating Sergey Brin with the development of ‘query flocks’ is mistaken. Notably, this acknowledgement is derived not from her own knowledge, but from this very article quoting a comment from a Google spokesperson.
However, the bizarre attempt to disassociate Google from the MDDS program misses the mark. Firstly, the MDDS never funded Google, because during the development of the core components of the Google search engine, there was no company incorporated with that name. The grant was instead provided to Stanford University through Prof. Ullman, through whom some MDDS funding was used to support Brin, who was co-developing Google at the time. Secondly, Thuraisingham then adds that Brin never “reported” to her or the CIA’s Steinheiser, but admits he “gave presentations to us during our visits to the Department of Computer Science at Stanford during the 1990s.” It is unclear, though, what the distinction is here between reporting and delivering a detailed presentation — either way, Thuraisingham confirms that she and the CIA had taken a keen interest in Brin’s development of Google. Thirdly, Thuraisingham describes the MDDS program as “unclassified,” but this does not contradict its “sensitive” nature. As someone who has worked for decades as an intelligence contractor and advisor, Thuraisingham is surely aware that there are many ways of categorizing intelligence, including ‘sensitive but unclassified.’ A number of former US intelligence officials I spoke to said that the almost total lack of public information on the CIA and NSA’s MDDS initiative suggests that although the program was not classified, its contents were likely considered sensitive, which would explain efforts to minimise transparency about the program and the way it fed back into developing tools for the US intelligence community.
Fourthly, and finally, it is important to point out that the MDDS abstract which Thuraisingham includes in her University of Texas document states clearly not only that the Director of Central Intelligence’s CMS, CIA and NSA were the overseers of the MDDS initiative, but that the intended customers of the project were “DoD, IC, and other government organizations”: the Pentagon, the US intelligence community, and other relevant US government agencies.
In other words, the provision of MDDS funding to Brin through Ullman, under the oversight of Thuraisingham and Steinheiser, was fundamentally because they recognized the potential utility of Brin’s work developing Google to the Pentagon, intelligence community, and the federal government at large.
The MDDS programme is actually referenced in several papers co-authored by Brin and Page while at Stanford, specifically highlighting its role in financially sponsoring Brin in the development of Google. In their 1998 paper published in the Bulletin of the IEEE Computer Society Technical Committee on Data Engineering, they describe the automation of methods to extract information from the web via “Dual Iterative Pattern Relation Extraction,” the development of “a global ranking of Web pages called PageRank,” and the use of PageRank “to develop a novel search engine called Google.” Through an opening footnote, Sergey Brin confirms he was “Partially supported by the Community Management Staff’s Massive Digital Data Systems Program, NSF grant IRI-96-31952” — confirming that Brin’s work developing Google was indeed partly funded by the CIA-NSA-MDDS program.
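The core idea of the PageRank system named in that paper can be illustrated with a minimal power-iteration sketch: a page’s rank is the stationary probability that a “random surfer” lands on it, following links with probability d and jumping to a random page otherwise. The graph, damping factor and function below are illustrative assumptions for exposition, not Google’s actual implementation.

```python
def pagerank(links, d=0.85, iterations=50):
    """Sketch of the PageRank idea. `links` maps each page to the
    list of pages it links to; returns a rank for every page."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start uniform
    for _ in range(iterations):
        # Every page keeps the "random jump" share (1 - d) / n ...
        new = {p: (1.0 - d) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                # ... and passes d * rank equally along its outlinks.
                share = d * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
            else:
                # Dangling page: spread its rank across all pages.
                for p in pages:
                    new[p] += d * rank[page] / n
        rank = new
    return rank

# A tiny 3-page web: A and C both link to B; B links back to A.
ranks = pagerank({"A": ["B"], "B": ["A"], "C": ["B"]})
```

With this toy graph, B ends up ranked highest (two in-links), A next (linked from the highly ranked B), and C last (no in-links at all) — the “importance flows through links” intuition the paper describes.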
This NSF grant identified alongside the MDDS, whose project report lists Brin among the students supported (without mentioning the MDDS), was different to the NSF grant to Larry Page that included funding from DARPA and NASA. The project report, authored by Brin’s supervisor Prof. Ullman, goes on to say under the section ‘Indications of Success’ that “there are some new stories of startups based on NSF-supported research.” Under ‘Project Impact,’ the report remarks: “Finally, the google project has also gone commercial as Google.com.”
Thuraisingham’s account, including her new amended version, therefore demonstrates that the CIA-NSA-MDDS program was not only partly funding Brin throughout his work with Larry Page developing Google, but that senior US intelligence representatives including a CIA official oversaw the evolution of Google in this pre-launch phase, all the way until the company was ready to be officially founded. Google, then, had been enabled with a “significant” amount of seed-funding and oversight from the Pentagon: namely, the CIA, NSA, and DARPA.
The DoD could not be reached for comment.
When I asked Prof. Ullman to confirm whether or not Brin was partly funded under the intelligence community’s MDDS program, and whether Ullman was aware that Brin was regularly briefing the CIA’s Rick Steinheiser on his progress in developing the Google search engine, Ullman’s responses were evasive: “May I know whom you represent and why you are interested in these issues? Who are your ‘sources’?” He also denied that Brin played a significant role in developing the ‘query flocks’ system, although it is clear from Brin’s papers that he did draw on that work in co-developing the PageRank system with Page.
When I asked Ullman whether he was denying the US intelligence community’s role in supporting Brin during the development of Google, he said: “I am not going to dignify this nonsense with a denial. If you won’t explain what your theory is, and what point you are trying to make, I am not going to help you in the slightest.”
The MDDS abstract published online at the University of Texas confirms that the rationale for the CIA-NSA project was to “provide seed money to develop data management technologies which are of high-risk and high-pay-off,” including techniques for “querying, browsing, and filtering; transaction processing; accesses methods and indexing; metadata management and data modelling; and integrating heterogeneous databases; as well as developing appropriate architectures.” The ultimate vision of the program was to “provide for the seamless access and fusion of massive amounts of data, information and knowledge in a heterogeneous, real-time environment” for use by the Pentagon, intelligence community and potentially across government.
These revelations corroborate the claims of Robert Steele, former senior CIA officer and a founding civilian deputy director of the Marine Corps Intelligence Activity, whom I interviewed for The Guardian last year on open source intelligence. Citing sources at the CIA, Steele had said in 2006 that Steinheiser, an old colleague of his, was the CIA’s main liaison at Google and had arranged early funding for the pioneering IT firm. At the time, Wired founder John Battelle managed to get this official denial from a Google spokesperson in response to Steele’s assertions:
“The statements related to Google are completely untrue.”
This time round, despite multiple requests and conversations, a Google spokesperson declined to comment.
UPDATE: As of 5.41PM GMT [22nd Jan 2015], Google’s director of corporate communication got in touch and asked me to include the following statement:
“Sergey Brin was not part of the Query Flocks Program at Stanford, nor were any of his projects funded by US Intelligence bodies.”
This is what I wrote back:
My response to that statement would be as follows: Brin himself in his own paper acknowledges funding from the Community Management Staff of the Massive Digital Data Systems (MDDS) initiative, which was supplied through the NSF. The MDDS was an intelligence community program set up by the CIA and NSA. I also have it on record, as noted in the piece, from Prof. Thuraisingham of University of Texas that she managed the MDDS program on behalf of the US intelligence community, and that she and the CIA’s Rick Steinheiser met Brin every three months or so for two years to be briefed on his progress developing Google and PageRank. Whether Brin worked on query flocks or not is neither here nor there.
In that context, you might want to consider the following questions:
1) Does Google deny that Brin’s work was part-funded by the MDDS via an NSF grant?
2) Does Google deny that Brin reported regularly to Thuraisingham and Steinheiser from around 1996 until September 1998, when he presented the Google search engine to them?
Total Information Awareness
A call for papers for the MDDS was sent out via email list on November 3rd 1993 from senior US intelligence official David Charvonia, director of the research and development coordination office of the intelligence community’s CMS. The reaction from Tatu Ylonen (celebrated inventor of the widely used secure shell [SSH] data protection protocol) to his colleagues on the email list is telling: “Crypto relevance? Makes you think whether you should protect your data.” The email also confirms that defense contractor and Highlands Forum partner, SAIC, was managing the MDDS submission process, with abstracts to be sent to Jackie Booth of the CIA’s Office of Research and Development via a SAIC email address.
By 1997, Thuraisingham reveals, shortly before Google became incorporated and while she was still overseeing the development of its search engine software at Stanford, her thoughts turned to the national security applications of the MDDS program. In the acknowledgements to her book, Web Data Mining and Applications in Business Intelligence and Counter-Terrorism (2003), Thuraisingham writes that she and “Dr. Rick Steinheiser of the CIA, began discussions with Defense Advanced Research Projects Agency on applying data-mining for counter-terrorism,” an idea that resulted directly from the MDDS program which partly funded Google. “These discussions eventually developed into the current EELD (Evidence Extraction and Link Detection) program at DARPA.”
So the very same senior CIA official and CIA-NSA contractor involved in providing the seed-funding for Google were simultaneously contemplating the role of data-mining for counter-terrorism purposes, and were developing ideas for tools actually advanced by DARPA.
Today, as illustrated by her recent op-ed in the New York Times, Thuraisingham remains a staunch advocate of data-mining for counter-terrorism purposes, but also insists that these methods must be developed by government in cooperation with civil liberties lawyers and privacy advocates to ensure that robust procedures are in place to prevent potential abuse. She points out, damningly, that with the quantity of information being collected, there is a high risk of false positives.
In 1993, when the MDDS program was launched and managed by MITRE Corp. on behalf of the US intelligence community, University of Virginia computer scientist Dr. Anita K. Jones — a MITRE trustee — landed the job of DARPA director and head of research and engineering across the Pentagon. She had been on the board of MITRE since 1988. From 1987 to 1993, Jones simultaneously served on SAIC’s board of directors. As the new head of DARPA from 1993 to 1997, she also co-chaired the Pentagon’s Highlands Forum during the period of Google’s pre-launch development at Stanford under the MDDS.
Thus, when Thuraisingham and Steinheiser were talking to DARPA about the counter-terrorism applications of MDDS research, Jones was DARPA director and Highlands Forum co-chair. That year, Jones left DARPA to return to her post at the University of Virginia. The following year, she joined the board of the National Science Foundation, which of course had also just funded Brin and Page, and also returned to the board of SAIC. When she left DoD, Senator Chuck Robb paid Jones the following tribute: “She brought the technology and operational military communities together to design detailed plans to sustain US dominance on the battlefield into the next century.”
On the board of the National Science Foundation from 1992 to 1998 (including a stint as chairman from 1996) was Richard N. Zare. This was the period in which the NSF sponsored Sergey Brin and Larry Page in association with DARPA. In June 1994, Prof. Zare, a chemist at Stanford, participated with Prof. Jeffrey Ullman (who supervised Sergey Brin’s research), on a panel sponsored by Stanford and the National Research Council discussing the need for scientists to show how their work “ties to national needs.” The panel brought together scientists and policymakers, including “Washington insiders.”
DARPA’s EELD program, inspired by the work of Thuraisingham and Steinheiser under Jones’ watch, was rapidly adapted and integrated with a suite of tools to conduct comprehensive surveillance under the Bush administration.
According to DARPA official Ted Senator, who led the EELD program for the agency’s short-lived Information Awareness Office, EELD was among a range of “promising techniques” being prepared for integration “into the prototype TIA system.” TIA stood for Total Information Awareness, and was the main global electronic eavesdropping and data-mining program deployed by the Bush administration after 9/11. TIA had been set up by Iran-Contra conspirator Admiral John Poindexter, who was appointed in 2002 by Bush to lead DARPA’s new Information Awareness Office.
The Xerox Palo Alto Research Center (PARC) was another contractor among 26 companies (also including SAIC) that received million-dollar contracts from DARPA (the specific quantities remained classified) under Poindexter, to push forward the TIA surveillance program from 2002 onwards. The research included “behaviour-based profiling,” “automated detection, identification and tracking” of terrorist activity, among other data-analyzing projects. At this time, PARC’s director and chief scientist was John Seely Brown. Both Brown and Poindexter were Pentagon Highlands Forum participants — Brown on a regular basis until recently.
TIA was purportedly shut down in 2003 due to public opposition after the program was exposed in the media, but the following year Poindexter participated in a Pentagon Highlands Group session in Singapore, alongside defense and security officials from around the world. Meanwhile, Ted Senator continued to manage the EELD program among other data-mining and analysis projects at DARPA until 2006, when he left to become a vice president at SAIC. He is now a SAIC/Leidos technical fellow.
Google, DARPA and the money trail
Long before the appearance of Sergey Brin and Larry Page, Stanford University’s computer science department had a close working relationship with US military intelligence. A letter dated November 5th 1984 from the office of renowned artificial intelligence (AI) expert, Prof Edward Feigenbaum, addressed to Rick Steinheiser, gives the latter directions to Stanford’s Heuristic Programming Project, addressing Steinheiser as a member of the “AI Steering Committee.” A list of attendees at a contractor conference around that time, sponsored by the Pentagon’s Office of Naval Research (ONR), includes Steinheiser as a delegate under the designation “OPNAV Op-115” — which refers to the Office of the Chief of Naval Operations’ program on operational readiness, which played a major role in advancing digital systems for the military.
From the 1970s, Prof. Feigenbaum and his colleagues had been running Stanford’s Heuristic Programming Project under contract with DARPA, continuing through to the 1990s. Feigenbaum alone had received over $7 million in this period from DARPA for his work, along with other funding from the NSF, NASA, and ONR.
Brin’s supervisor at Stanford, Prof. Jeffrey Ullman, was in 1996 part of a joint funding project of DARPA’s Intelligent Integration of Information program. That year, Ullman co-chaired DARPA-sponsored meetings on data exchange between multiple systems.
In September 1998, the same month that Sergey Brin briefed US intelligence representatives Steinheiser and Thuraisingham, tech entrepreneurs Andreas Bechtolsheim and David Cheriton invested $100,000 each in Google. Both investors were connected to DARPA.
As a Stanford PhD student in electrical engineering in the 1980s, Bechtolsheim developed the pioneering SUN workstation with funding from DARPA and the Stanford computer science department — research that became the foundation of Sun Microsystems, which he co-founded with William Joy.
As for Bechtolsheim’s co-investor in Google, David Cheriton, the latter is a long-time Stanford computer science professor who has an even more entrenched relationship with DARPA. His bio at the University of Alberta, which in November 2014 awarded him an honorary science doctorate, says that Cheriton’s “research has received the support of the US Defense Advanced Research Projects Agency (DARPA) for over 20 years.”
In the meantime, Bechtolsheim left Sun Microsystems in 1995, co-founding Granite Systems with his fellow Google investor Cheriton as a partner. They sold Granite to Cisco Systems in 1996, retaining significant stock in the venture and becoming senior Cisco executives.
An email obtained from the Enron Corpus (a database of 600,000 emails acquired by the Federal Energy Regulatory Commission and later released to the public) from Richard O’Neill, inviting Enron executives to participate in the Highlands Forum, shows that Cisco and Granite executives are intimately connected to the Pentagon. The email reveals that in May 2000, Bechtolsheim’s partner and Sun Microsystems co-founder, William Joy — who was then chief scientist and corporate executive officer there — had attended the Forum to discuss nanotechnology and molecular computing.
In 1999, Joy had also co-chaired the President’s Information Technology Advisory Committee, overseeing a report acknowledging that DARPA had:
“… revised its priorities in the 90’s so that all information technology funding was judged in terms of its benefit to the warfighter.”
Throughout the 1990s, then, DARPA’s funding to Stanford, including Google, was explicitly about developing technologies that could augment the Pentagon’s military intelligence operations in war theatres.
The Joy report recommended more federal government funding from the Pentagon, NASA, and other agencies to the IT sector. Greg Papadopoulos, another of Bechtolsheim’s colleagues as then Sun Microsystems chief technology officer, also attended a Pentagon Highlands Forum meeting in September 2000.
In November, the Pentagon Highlands Forum hosted Sue Bostrom, who was vice president for the internet at Cisco, sitting on the company’s board alongside Google co-investors Bechtolsheim and Cheriton. The Forum also hosted Lawrence Zuriff, then a managing partner of Granite, which Bechtolsheim and Cheriton had sold to Cisco. Zuriff had previously been an SAIC contractor from 1993 to 1994, working with the Pentagon on national security issues, specifically for Marshall’s Office of Net Assessment. In 1994, both the SAIC and the ONA were, of course, involved in co-establishing the Pentagon Highlands Forum. Among Zuriff’s output during his SAIC tenure was a paper titled ‘Understanding Information War’, delivered at a SAIC-sponsored US Army Roundtable on the Revolution in Military Affairs.
After Google’s incorporation, the company received $25 million in equity funding in 1999 led by Sequoia Capital and Kleiner Perkins Caufield & Byers. According to Homeland Security Today, “A number of Sequoia-bankrolled start-ups have contracted with the Department of Defense, especially after 9/11 when Sequoia’s Mark Kvamme met with Defense Secretary Donald Rumsfeld to discuss the application of emerging technologies to warfighting and intelligence collection.” Similarly, Kleiner Perkins had developed “a close relationship” with In-Q-Tel, the CIA venture capitalist firm that funds start-ups “to advance ‘priority’ technologies of value” to the intelligence community.
John Doerr, who led the Kleiner Perkins investment in Google and obtained a board position, was a major early investor in Bechtolsheim’s Sun Microsystems at its launch. He and his wife Anne are the main funders behind Rice University’s Center for Engineering Leadership (RCEL), which in 2009 received $16 million from DARPA for its platform-aware-compilation-environment (PACE) ubiquitous computing R&D program. Doerr also has a close relationship with the Obama administration, which he advised shortly after it took power to ramp up Pentagon funding to the tech industry. In 2013, at the Fortune Brainstorm TECH conference, Doerr applauded “how the DoD’s DARPA funded GPS, CAD, most of the major computer science departments, and of course, the Internet.”
From inception, in other words, Google was incubated, nurtured and financed by interests that were directly affiliated or closely aligned with the US military intelligence community: many of whom were embedded in the Pentagon Highlands Forum.
Google captures the Pentagon
In 2003, Google began customizing its search engine under special contract with the CIA for its Intelink Management Office, “overseeing top-secret, secret and sensitive but unclassified intranets for CIA and other IC agencies,” according to Homeland Security Today. That year, CIA funding was also being “quietly” funneled through the National Science Foundation to projects that might help create “new capabilities to combat terrorism through advanced technology.”
The following year, Google bought the firm Keyhole, which had originally been funded by In-Q-Tel. Using Keyhole, Google began developing the advanced satellite mapping software behind Google Earth. Former DARPA director and Highlands Forum co-chair Anita Jones had been on the board of In-Q-Tel at this time, and remains so today.
Then in November 2005, In-Q-Tel issued notices to sell $2.2 million of Google stock. Google’s relationship with US intelligence was further brought to light when an IT contractor told a closed Washington DC conference of intelligence professionals on a not-for-attribution basis that at least one US intelligence agency was working to “leverage Google’s [user] data monitoring” capability as part of an effort to acquire data of “national security intelligence interest.”
A photo on Flickr dated March 2007 reveals that Google research director and AI expert Peter Norvig attended a Pentagon Highlands Forum meeting that year in Carmel, California. Norvig’s intimate connection to the Forum as of that year is also corroborated by his role in guest editing the 2007 Forum reading list.
The photo below shows Norvig in conversation with Lewis Shepherd, who at that time was senior technology officer at the Defense Intelligence Agency, responsible for investigating, approving, and architecting “all new hardware/software systems and acquisitions for the Global Defense Intelligence IT Enterprise,” including “big data technologies.” Shepherd now works at Microsoft. Norvig was a computer research scientist at Stanford University in 1991 before joining Bechtolsheim’s Sun Microsystems as senior scientist until 1994, and going on to head up NASA’s computer science division.
Norvig shows up on O’Neill’s Google Plus profile as one of his close connections. Scoping the rest of O’Neill’s Google Plus connections illustrates that he is directly connected not just to a wide range of Google executives, but also to some of the biggest names in the US tech community.
Those connections include Michele Weslander Quaid, an ex-CIA contractor and former senior Pentagon intelligence official who is now Google’s chief technology officer where she is developing programs to “best fit government agencies’ needs”; Elizabeth Churchill, Google director of user experience; James Kuffner, a humanoid robotics expert who now heads up Google’s robotics division and who introduced the term ‘cloud robotics’; Mark Drapeau, director of innovation engagement for Microsoft’s public sector business; Lili Cheng, general manager of Microsoft’s Future Social Experiences (FUSE) Labs; Jon Udell, Microsoft ‘evangelist’; Cory Ondrejka, vice president of engineering at Facebook; to name just a few.
In 2010, Google signed a multi-billion dollar no-bid contract with the NSA’s sister agency, the National Geospatial-Intelligence Agency (NGA). The contract was to use Google Earth for visualization services for the NGA. Google had developed the software behind Google Earth by purchasing Keyhole from the CIA venture firm In-Q-Tel.
Then a year later, in 2011, another of O’Neill’s Google Plus connections, Michele Quaid — who had served in executive positions at the NGA, National Reconnaissance Office and the Office of the Director of National Intelligence — left her government role to become Google ‘innovation evangelist’ and the point-person for seeking government contracts. Quaid’s last role before her move to Google was as a senior representative of the Director of National Intelligence to the Intelligence, Surveillance, and Reconnaissance Task Force, and a senior advisor to the undersecretary of defense for intelligence’s director of Joint and Coalition Warfighter Support (J&CWS). Both roles involved information operations at their core. Before her Google move, in other words, Quaid worked closely with the Office of the Undersecretary of Defense for Intelligence, to which the Pentagon’s Highlands Forum is subordinate. Quaid has herself attended the Forum, though precisely when and how often I could not confirm.
In March 2012, then DARPA director Regina Dugan — who in that capacity was also co-chair of the Pentagon Highlands Forum — followed her colleague Quaid into Google to lead the company’s new Advanced Technology and Projects Group. During her Pentagon tenure, Dugan led on strategic cyber security and social media, among other initiatives. She was responsible for focusing “an increasing portion” of DARPA’s work “on the investigation of offensive capabilities to address military-specific needs,” securing $500 million of government funding for DARPA cyber research from 2012 to 2017.
By November 2014, Google’s chief AI and robotics expert James Kuffner was a delegate alongside O’Neill at the Highlands Island Forum 2014 in Singapore, exploring ‘Advancement in Robotics and Artificial Intelligence: Implications for Society, Security and Conflict.’ The event included 26 delegates from Austria, Israel, Japan, Singapore, Sweden, Britain and the US, from both industry and government. Kuffner’s association with the Pentagon, however, began much earlier: in 1997, during his Stanford PhD, Kuffner was a researcher on a Pentagon-funded project on networked autonomous mobile robots, sponsored by DARPA and the US Navy.
Dr Nafeez Ahmed is an investigative journalist, bestselling author and international security scholar. A former Guardian writer, he writes the ‘System Shift’ column for VICE’s Motherboard, and is also a columnist for Middle East Eye. He is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his Guardian work.
Nafeez has also written for The Independent, Sydney Morning Herald, The Age, The Scotsman, Foreign Policy, The Atlantic, Quartz, Prospect, New Statesman, Le Monde diplomatique, New Internationalist, Counterpunch, Truthout, among others. He is the author of A User’s Guide to the Crisis of Civilization: And How to Save It (2010), and the scifi thriller novel ZERO POINT, among other books. His work on the root causes and covert operations linked to international terrorism officially contributed to the 9/11 Commission and the 7/7 Coroner’s Inquest.
A rich history of the government’s science funding
There was already a long history of collaboration between America’s best scientists and the intelligence community, from the creation of the atomic bomb and satellite technology to efforts to put a man on the moon.
In fact, the internet itself was created because of an intelligence effort: In the 1970s, the agency responsible for developing emerging technologies for military, intelligence, and national security purposes—the Defense Advanced Research Projects Agency (DARPA)—linked four supercomputers to handle massive data transfers. It handed the operations off to the National Science Foundation (NSF) a decade or so later, which proliferated the network across thousands of universities and, eventually, the public, thus creating the architecture and scaffolding of the World Wide Web.
Silicon Valley was no different. By the mid 1990s, the intelligence community was seeding funding to the most promising supercomputing efforts across academia, guiding the creation of efforts to make massive amounts of information useful for both the private sector as well as the intelligence community.
They funded these computer scientists through an unclassified, highly compartmentalized program that was managed for the CIA and the NSA by large military and intelligence contractors. It was called the Massive Digital Data Systems (MDDS) project.
The Massive Digital Data Systems (MDDS) project
MDDS was introduced to several dozen leading computer scientists at Stanford, CalTech, MIT, Carnegie Mellon, Harvard, and others in a white paper that described what the CIA, NSA, DARPA, and other agencies hoped to achieve. The research would largely be funded and managed by unclassified science agencies like NSF, which would allow the architecture to be scaled up in the private sector if it managed to achieve what the intelligence community hoped for.
“Not only are activities becoming more complex, but changing demands require that the IC [Intelligence Community] process different types as well as larger volumes of data,” the intelligence community said in its 1993 MDDS white paper. “Consequently, the IC is taking a proactive role in stimulating research in the efficient management of massive databases and ensuring that IC requirements can be incorporated or adapted into commercial products. Because the challenges are not unique to any one agency, the Community Management Staff (CMS) has commissioned a Massive Digital Data Systems [MDDS] Working Group to address the needs and to identify and evaluate possible solutions.”
Over the next few years, the program’s stated aim was to provide more than a dozen grants of several million dollars each to advance this research concept. The grants were to be directed largely through the NSF so that the most promising, successful efforts could be captured as intellectual property and form the basis of companies attracting investments from Silicon Valley. This type of public-to-private innovation system helped launch powerful science and technology companies like Qualcomm, Symantec, Netscape, and others, and funded the pivotal research in areas like Doppler radar and fiber optics, which are central to large companies like AccuWeather, Verizon, and AT&T today. Today, the NSF provides nearly 90% of all federal funding for university-based computer-science research.
The CIA and NSA’s end goal
The research arms of the CIA and NSA hoped that the best computer-science minds in academia could identify what they called “birds of a feather:” Just as geese fly together in large V shapes, or flocks of sparrows make sudden movements together in harmony, they predicted that like-minded groups of humans would move together online. The intelligence community named their first unclassified briefing for scientists the “birds of a feather” briefing, and the “Birds of a Feather Session on the Intelligence Community Initiative in Massive Digital Data Systems” took place at the Fairmont Hotel in San Jose in the spring of 1995.
Their research aim was to track digital fingerprints inside the rapidly expanding global information network, which was then known as the World Wide Web. Could an entire world of digital information be organized so that the requests humans made inside such a network could be tracked and sorted? Could their queries be linked and ranked in order of importance? Could “birds of a feather” be identified inside this sea of information so that communities and groups could be tracked in an organized way?
By working with emerging commercial-data companies, their intent was to track like-minded groups of people across the internet and identify them from the digital fingerprints they left behind, much like forensic scientists use fingerprint smudges to identify criminals. Just as “birds of a feather flock together,” they predicted that potential terrorists would communicate with each other in this new global, connected world—and they could find them by identifying patterns in this massive amount of new information. Once these groups were identified, they could then follow their digital trails everywhere.
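Stripped of the espionage framing, the underlying computation is simple set similarity: represent each user by the terms they query, then flag pairs whose query sets overlap strongly. A toy sketch (all users, queries, and the threshold are invented here, purely to illustrate the “birds of a feather” idea, not any agency’s or company’s actual method):

```python
# Toy "birds of a feather" grouping: users whose query terms
# overlap strongly are flagged as a like-minded pair.
# All data here is invented and purely illustrative.
queries = {
    "alice": {"rock climbing", "carabiners", "belay devices"},
    "bob":   {"carabiners", "rock climbing", "chalk bags"},
    "carol": {"sourdough", "baking stones", "chalk bags"},
}

def jaccard(a, b):
    """Overlap of two sets: 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

# Flag every pair of users whose query overlap exceeds a threshold.
THRESHOLD = 0.4
users = list(queries)
pairs = [
    (u, v)
    for i, u in enumerate(users)
    for v in users[i + 1:]
    if jaccard(queries[u], queries[v]) > THRESHOLD
]
# alice and bob share 2 of their 4 distinct terms (similarity 0.5),
# so only that pair clears the threshold.
```

Real systems use weighted term vectors and clustering rather than raw set overlap, but the principle of grouping actors by behavioral similarity is the same.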
Sergey Brin and Larry Page, computer-science boy wonders
In 1995, one of the first and most promising MDDS grants went to a computer-science research team at Stanford University with a decade-long history of working with NSF and DARPA grants. The primary objective of this grant was “query optimization of very complex queries that are described using the ‘query flocks’ approach.” A second grant—the DARPA-NSF grant most closely associated with Google’s origin—was part of a coordinated effort to build a massive digital library using the internet as its backbone. Both grants funded research by two graduate students who were making rapid advances in web-page ranking, as well as tracking (and making sense of) user queries: future Google cofounders Sergey Brin and Larry Page.
The research by Brin and Page under these grants became the heart of Google: people using search functions to find precisely what they wanted inside a very large data set. The intelligence community, however, saw a slightly different benefit in their research: Could the network be organized so efficiently that individual users could be uniquely identified and tracked?
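The ranking side of that research can be illustrated with a toy version of link-based scoring. The sketch below runs a PageRank-style power iteration over an invented four-page link graph; it is a conceptual illustration only, not Google’s production algorithm:

```python
# Toy PageRank-style power iteration: a page's score is built
# from the scores of the pages that link to it.
# The four-page link graph is invented for illustration.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
damping = 0.85  # standard damping factor from the original PageRank paper
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the scores stabilize
    new_rank = {}
    for p in pages:
        # Score flowing into p from every page q that links to it,
        # split evenly among q's outgoing links.
        inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * inbound
    rank = new_rank

# Three of the four pages link to "C", so it ends up ranked highest.
top_page = max(rank, key=rank.get)
```

In this toy graph the most-linked page wins; at web scale, that same recursion is what turned raw link structure into a usable importance signal.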
This process is perfectly suited for the purposes of counter-terrorism and homeland security efforts: Human beings and like-minded groups who might pose a threat to national security can be uniquely identified online before they do harm. This explains why the intelligence community found Brin’s and Page’s research efforts so appealing; prior to this time, the CIA largely used human intelligence efforts in the field to identify people and groups that might pose threats. The ability to track them virtually (in conjunction with efforts in the field) would change everything.
It was the beginning of what in just a few years’ time would become Google. The two intelligence-community managers charged with leading the program met regularly with Brin as his research progressed, and he was an author on several other research papers that resulted from this MDDS grant before he and Page left to form Google.
The grants allowed Brin and Page to do their work and contributed to their breakthroughs in web-page ranking and tracking user queries. Brin didn’t work for the intelligence community—or for anyone else. Google had not yet been incorporated. He was just a Stanford researcher taking advantage of the grant provided by the NSA and CIA through the unclassified MDDS program.
Left out of Google’s story
The MDDS research effort has never been part of Google’s origin story, even though the principal investigator for the MDDS grant specifically named Google as directly resulting from their research: “Its core technology, which allows it to find pages far more accurately than other search engines, was partially supported by this grant,” he wrote. In a published research paper that includes some of Brin’s pivotal work, the authors also reference the NSF grant that was created by the MDDS program.
Instead, every Google creation story mentions just one federal grant: the NSF/DARPA “digital libraries” grant, which was designed to allow Stanford researchers to search the entire World Wide Web stored on the university’s servers at the time. “The development of the Google algorithms was carried on a variety of computers, mainly provided by the NSF-DARPA-NASA-funded Digital Library project at Stanford,” Stanford’s Infolab says of its origin, for example. NSF likewise references only the digital libraries grant, not the MDDS grant, in its own history of Google’s origin. In the famous research paper “The Anatomy of a Large-Scale Hypertextual Web Search Engine,” which describes the creation of Google, Brin and Page thanked the NSF and DARPA for their digital library grant to Stanford. But the grant from the intelligence community’s MDDS program—specifically designed for the breakthrough that Google was built upon—has faded into obscurity.
Did the CIA directly fund the work of Brin and Page, and therefore create Google? No. But were Brin and Page researching precisely what the NSA, the CIA, and the intelligence community hoped for, assisted by their grants? Absolutely.
To understand this significance, you have to consider what the intelligence community was trying to achieve as it seeded grants to the best computer-science minds in academia: The CIA and NSA funded an unclassified, compartmentalized program designed from its inception to spur the development of something that looks almost exactly like Google. Brin’s breakthrough research on page ranking by tracking user queries and linking them to the many searches conducted—essentially identifying “birds of a feather”—was largely the aim of the intelligence community’s MDDS program. And Google succeeded beyond their wildest dreams.
The intelligence community’s enduring legacy within Silicon Valley
Digital privacy concerns over the intersection between the intelligence community and commercial technology giants have grown in recent years. But most people still don’t understand the degree to which the intelligence community relies on the world’s biggest science and tech companies for its counter-terrorism and national-security work.
Civil-liberty advocacy groups have aired their privacy concerns for years, especially as they now relate to the Patriot Act. “Hastily passed 45 days after 9/11 in the name of national security, the Patriot Act was the first of many changes to surveillance laws that made it easier for the government to spy on ordinary Americans by expanding the authority to monitor phone and email communications, collect bank and credit reporting records, and track the activity of innocent Americans on the Internet,” says the ACLU. “While most Americans think it was created to catch terrorists, the Patriot Act actually turns regular citizens into suspects.”
When asked, the biggest technology and communications companies—from Verizon and AT&T to Google, Facebook, and Microsoft—say that they never deliberately and proactively offer up their vast databases on their customers to federal security and law enforcement agencies: They say that they only respond to subpoenas or requests that are filed properly under the terms of the Patriot Act.
But even a cursory glance through recent public records shows that there is a treadmill of constant requests that could undermine the intent behind this privacy promise. According to the data-request records that the companies make available to the public, in the most recent reporting period between 2016 and 2017, local, state and federal government authorities seeking information related to national security, counter-terrorism or criminal concerns issued more than 260,000 subpoenas, court orders, warrants, and other legal requests to Verizon, more than 250,000 such requests to AT&T, and nearly 24,000 subpoenas, search warrants, or court orders to Google. Direct national security or counter-terrorism requests are a small fraction of this overall group of requests, but the Patriot Act legal process has now become so routinized that the companies each have a group of employees who simply take care of the stream of requests.
In this way, the collaboration between the intelligence community and big, commercial science and tech companies has been wildly successful. When national security agencies need to identify and track people and groups, they know where to turn – and do so frequently. That was the goal in the beginning. It has succeeded perhaps more than anyone could have imagined at the time.
Fast forward to 2020
From DARPA to Google: How the Military Kickstarted AV Development
Sebastian Thrun had been entertaining the idea of self-driving cars for many years. Born and raised in Germany, he was fascinated by the power and performance of German cars. Things changed in 1986, when he was 18 and his best friend died in a car crash because the driver, another friend, was going too fast in his new Audi Quattro.
As a student at the University of Bonn, Thrun developed several autonomous robotic systems that earned him international recognition. At the time, Thrun was convinced that self-driving cars would soon make transportation safer, avoiding crashes like the one that took his friend’s life.
In 1998, he became an assistant professor and co-director of the Robot Learning Laboratory at Carnegie Mellon University. In July 2003, Thrun left Carnegie Mellon for Stanford University, soon after the first DARPA Grand Challenge was announced. Before accepting the new position, he asked Red Whittaker, the leader of the CMU robotics department, to join the team developing the vehicle for the DARPA race. Whittaker declined. After moving to California, Thrun joined the Stanford Racing Team.
On Oct. 8, 2005, the Stanford Racing Team won $2 million for being the first team to complete the 132-mile DARPA Grand Challenge course in California’s Mojave Desert. Their robot car, “Stanley,” finished in just under 6 hours and 54 minutes and averaged over 19 mph on the course.
Google’s Page wanted to develop self-driving cars
Two years after the third Grand Challenge, Google co-founder Larry Page called Thrun, wanting to turn the experience of the DARPA races into a product for the masses.
When Page first approached Thrun about building a self-driving car that people could use on the real roads, Thrun told him it couldn’t be done.
But Page had a vision, and he would not abandon his quest for an autonomous vehicle.
Thrun recalled that a short time later, Page came back to him and said, “OK, you say it can’t be done. You’re the expert. I trust you. So I can explain to Sergey [Brin] why it can’t be done, can you give me a technical reason why it can’t be done?”
Finally, Thrun accepted Page’s offer and, in 2009, started Project Chauffeur, the effort that became the Google self-driving car project.
The Google 101,000-Mile Challenge
To develop the technology for Google’s self-driving car, Thrun called Chris Urmson, who had led Carnegie Mellon’s teams in the DARPA challenges, and offered him the position of chief technical officer of the project.
To encourage the team to build a vehicle, and its systems, to drive on any public road, Page created two challenges, with big cash rewards for the entire team: a 1,000-mile challenge to show that Project Chauffeur’s car could drive in several situations, including highways and the streets of San Francisco, and another 100,000-mile challenge to show that driverless cars could be a reality in a few years.
By the middle of 2011, Project Chauffeur engineers completed the two challenges.
In 2016, the Google self-driving car project became Waymo, a “spinoff under Alphabet as a self-driving technology company with a mission to make it safe and easy for people and things to move around.”
Urmson led Google’s self-driving car project for nearly eight years. Under his leadership, Google vehicles accumulated 1.8 million miles of test driving.
In 2018, Waymo One, the first fully self-driving vehicle taxi service, began in Phoenix, Arizona.
From Waymo to Aurora
In 2016, after finishing development of the production-ready version of Waymo’s self-driving technology, Urmson left Google to start Aurora Innovation, a startup backed by Amazon that aims to provide a full-stack solution for self-driving vehicles.
Urmson believes that in 20 years, we’ll see much of the transportation infrastructure move over to automation. – Arrow.com
TO BE CONTINUED
Here’s a peek into the next episode:
Facebook Hired a Former DARPA Head To Lead An Ambitious New Research Lab
If you need another sign that Facebook’s world-dominating ambitions are just getting started, here’s one: the Menlo Park, Calif. company has hired a former DARPA chief to lead its new research lab.
Facebook CEO Mark Zuckerberg announced April 14 that Regina Dugan will guide Building 8, a new research group developing hardware projects that advance the company’s efforts in virtual reality, augmented reality, artificial intelligence and global connectivity.
Dugan served as the head of the Pentagon’s Defense Advanced Research Projects Agency from 2009 to 2012. Most recently, she led Google’s Advanced Technology and Projects Lab, a highly experimental arm of the company responsible for developing new hardware and software products on a strict two-year timetable.
To be continued? Our work and existence, as media and people, is funded solely by our most generous readers, and we want to keep it this way. Help SILVIEW.media survive and grow: please donate here; anything helps. Thank you!
Articles may always be subject to later editing as a way of perfecting them.
I’m reporting from captivity in Agadir, a Moroccan city which, I’ve just found out, is the birthplace of Moncef Slaoui, the newly appointed head of “Operation Warp Speed”, Trump’s mass-vaccination campaign. And that’s the least disturbing thing I have to tell you. I probably need protection now, I’m warned; authorities here are not big fans of free speech, and people have been arrested for much less. But this isn’t much of a life anyway; it’s all worth it if you spread this knowledge like fire. May the public eye be my protection, if any.
US President Donald Trump selected Moroccan immunology expert Moncef Slaoui to be the head of his administration’s COVID-19 vaccine development team, working on “Operation Warp Speed.”
The Moroccan expert, 60, will serve as the US government’s “therapeutics czar” to help coordinate the development of vaccines and treatments. The role is shared between the US Department of Health and Human Services and the Department of Defense.
Slaoui will be assisted by Army Gen. Gustave Perna, the commander of United States Army Materiel Command.
Slaoui earned a Ph.D. in molecular biology and immunology from the Free University of Brussels, Belgium, and completed his postdoctoral studies at Harvard Medical School and Tufts University School of Medicine, Boston.
He formerly headed the vaccines division at GlaxoSmithKline (GSK), where he oversaw the development of vaccines including Rotarix, Synflorix, and Cervarix. In 2007, he announced plans to establish a neurosciences research group in Shanghai that would employ a thousand scientists and cost $100 million; it failed miserably and ceased operations in August 2017.
In 2008, Slaoui led the $720 million acquisition of Sirtris Pharmaceuticals, which folded in 2013. In 2012, he oversaw GSK’s purchase of Human Genome Sciences for over $3 billion. In 2015 he won European approval for the world’s first malaria vaccine (Mosquirix).
When he retired from the drugmaker in 2017, GSK was still working on the vaccine for Ebola.
GSK is now working on a COVID-19 vaccine with Sanofi, the French multinational pharma giant.
“Not long after leaving GSK, the enthusiastic and outgoing Slaoui started joining biotech boards, with welcomes at SutroVax, mRNA player Moderna as well as the public outfit Intellia $NTLA, one of a handful of CRISPR/Cas9 gene editing startups dominating the field. Then, a little over a month ago, he dropped off the Intellia crew, citing a conflict but not explaining it.” – Endpoints
But his biggest business move was becoming a partner at Medicxi, a biotechnology venture capital firm.
Medicxi has built a highly experienced team and, through the scientific advisory boards (SABs) of each of its funds, has access to some of the most respected names in the pharma industry. As well as Medicxi’s senior team, now including Dr Slaoui, external members and observers on the SABs of Medicxi’s funds included (2017):
From Novartis:
Dr Vasant (Vas) Narasimhan, Global Head of Drug Development, Chief Medical Officer and Chief Executive Officer Elect
Dr Evan Beckman, Global Head of Translational Medicine at NIBR
Nigel Sheail, Head of Business Development and Licensing
From Verily Life Sciences:
Dr Andy Conrad, Chief Executive Officer
Dr Robert Califf, Advisor and former US FDA Commissioner
From GSK:
Dr Patrick Vallance, President, R&D
Dr Paul-Peter Tak, Senior Vice President R&D Pipeline, Global Development Leader and Chief Immunology Officer
From Johnson & Johnson:
Dr Paul Stoffels, Executive Vice President, Chief Scientific Officer
Dr Bill Hait, Global Head, Janssen Research & Development
Dr Patrick Verheyen, Global Head, Janssen Business Development
Michèle Ollier, co-founder and Partner at Medicxi, said: “Moncef has made a tremendous contribution through his role on our SABs and we look forward to his continued energetic and insightful contribution as a Partner at Medicxi. Our SAB meetings are challenging, insightful and inspiring, and contribute hugely to how we steer and advise our portfolio companies.”
Medicxi is based in London, Geneva and Jersey. The Company’s mission is to invest across the full healthcare continuum. Medicxi was established by the former Index Ventures life sciences team. Medicxi manages the legacy life science portfolio of Index Ventures as well as the new funds launched as Medicxi, Medicxi Ventures 1 (MV1) and Medicxi Growth 1 (MG1) focusing on early-stage and late-stage investments in life sciences.
GSK, Johnson & Johnson and Novartis, three of the world’s largest pharmaceutical companies, back Medicxi along with Verily, an Alphabet company. These companies, whilst participating in the SABs of the funds, do not receive any preferential rights to the portfolio companies.
Medicxi’s team has been investing in life sciences for over 20 years and has backed many successful companies, including Genmab (NASDAQ Copenhagen: GEN), PanGenetics (sold to AbbVie), Molecular Partners (SWX: MOLN), XO1 (sold to Janssen) Egalet (NASDAQ: EGLT), Minerva Neurosciences (NASDAQ: NERV) and Versartis (NASDAQ: VSAR).
Since 2017, Slaoui has also sat on the board of Moderna, a Cambridge, Massachusetts-based biotechnology company also pursuing a COVID-19 vaccine.
The other problem, because they always come in pairs: Trump awarded Moderna almost $0.5 billion of public money a few days before nominating Slaoui. CNN reported on May 18th:
“Valera’s efforts [Valera is a Moderna subsidiary] have resulted in the demonstration of preclinical efficacy of Moderna’s mRNA-based vaccines in multiple viral disease models, Moderna said.
In the partnership with the Gates Foundation, Valera will apply its mRNA vaccine platform as well as Moderna’s drug platform Messenger RNA Therapeutics™. Designed to produce human proteins, antibodies, and entirely novel protein constructs inside patient cells, the therapeutics are secreted or active intracellularly.” – Genetic Engineering & Biotechnology News
What, you think that’s bad? What if I told you this is the last one in a very long series of collaborations between the two?
Why is no one talking about this chapter of Moncef Slaoui’s career? Well, I am:
I find most relevant this transhumanist project Slaoui worked on with Google from 2016, as reported by Bloomberg:
“The recent partnership between GlaxoSmithKline (GSK) and Alphabet (Google) further opens the door for development in the biotechnology industry’s experimental “bioelectronics” segment. Chairman of Vaccines at GlaxoSmithKline Dr. Moncef Slaoui thinks the partnership could create an entirely new industry. “I think this is a whole new industry as big as the pharmaceutical industry … there’s a whole new world that we’re opening here which is dealing with electrical signals to connect with our biology and changes functioning,” Slaoui told CNBC’s Meg Tirrell on “Squawk Box” Monday morning. Calling Alphabet’s Verily Life Services a “really exciting partner,” Slaoui says GlaxoSmithKline shares “a very common vision of integrating electronics and big data analytics and technologies with medicines and biology.” “They bring to us the engineering capabilities, the electronics, the low power technologies and the wireless technologies that are critical to miniaturize these devices, power them and extract information from them,” Slaoui noted.”
“GSK has been interested in this field for years, and in 2013 announced a $1 million prize for innovative bioelectronics research. In a press statement, GSK’s Moncef Slaoui said: “Many of the processes of the human body are controlled by electrical signals firing between the nervous system and the body’s organs, which may become distorted in many chronic diseases.” He said bioelectronic seeks to “correct the irregular [electrical] patterns found in disease states, using miniaturized devices attached to individual nerves.” – The Verge
And bearing in mind the technological terror and the transhumanist/eugenicist obsessions of today’s coronavirus policy-makers, one quote from the same source hits home. This whole business falls right into the arms of anyone associating the coronavirus pandemic with human microchipping. Slaoui cited animal models as the indicator that bioelectronics can treat chronic diseases with a number of different devices.
The devices themselves are very small, about “the size of a rice grain”, and can “either stimulate or block the electric signals that our brain sends through our nerves to control the functioning of our organs… The limitations are around power, as power requires energy, and energy means heat, and heat doesn’t go well with biology.”
It gets weirder
Slaoui rejected reports in late March of his involvement with a US government task force for COVID-19 vaccine development, and denied, as recently as May 11, any intention to work with the Trump administration. WHY?? Morocco World News reported on March 31st: “The doctor said he has no working arrangements with the US government in a statement to Moroccan French-language newspaper L’Economiste. Several local news outlets claimed that the former chairman of pharmaceutical giant GlaxoSmithKline (GSK) is part of a task force that is researching a vaccine to clamp down on the spread of the virus. The international expert is currently a member of the board of directors of American biotechnology company Moderna. Slaoui explained that he is part of the company’s research and development committee. The committee has received support from federal organizations to help fund the development of a COVID-19 vaccine.”
White House senior adviser Jared Kushner, the son-in-law of President Trump, was among the officials who interviewed Slaoui for the role.
Jared Kushner went there. Jared Kushner personally picked America’s new “Vaccine Czar” Moncef Slaoui precisely one year after the meeting. Jared Kushner is a Zionist. Jared Kushner is Trump’s son-in-law.
To avoid a conflict of interest, Slaoui resigned from the board of the Massachusetts-based biotech firm Moderna, which had been developing a vaccine for the coronavirus. He stepped down, but he didn’t give up his stake in Moderna, as the Daily Beast reports:
“Slaoui’s ownership of 156,000 Moderna stock options, disclosed in required federal financial filings, sparked concerns about a conflict of interest. Democratic Massachusetts Senator Elizabeth Warren called Slaoui out over the matter on Twitter: “It is a huge conflict of interest for the White House’s new vaccine czar to own $10 million of stock in a company receiving government funding to develop a COVID-19 vaccine. Dr. Slaoui should divest immediately.” The company’s shares skyrocketed last month after news broke of the $483 million in federal funding to work on a coronavirus vaccine. Slaoui could not immediately be reached for comment on the matter.”
Slaoui also sits on the boards of SutroVax, the Biotechnology Innovation Organization, the International AIDS Vaccine Initiative, and the PhRMA Foundation.
The Moroccan expert’s main contenders for the position of chief advisor at “Operation Warp Speed” were Algeria’s Elias Zerhouni and the US’ Arthur Levinson.
Zerhouni, born in 1951, is an Algerian scientist, radiologist, and biomedical engineer. The expert has held several important positions in a number of institutions, ranging from medical schools to pharmaceutical companies and government task forces.
In 2009, under the Obama administration, Zerhouni served as the first science envoy in the US and worked towards fostering scientific and technological collaboration with other countries.
Between 2011 and 2018, as a final stage in his career, Zerhouni was the President for Global Research and Development at, well, Sanofi.
The third main candidate in the race to lead Trump’s COVID-19 operation, Arthur Levinson, is an American businessman specializing in biotechnology.
Levinson has served as senior advisor for several companies and institutions, including Swiss healthcare multinational Hoffmann-La Roche, Amyris Biotechnologies, the Memorial Sloan Kettering Cancer Center, the California Institute for Quantitative Biosciences, and Princeton University.
The American businessman is currently the chairman of tech giant Apple and CEO of biotechnology company Calico.
Outlook on the pandemic
During an interview on April 12, Slaoui said he expects life to begin its return to normal at the beginning of 2021 after global leaders rein in the pandemic, adding that he considers his prediction “optimistic.”
He is confident “that due to the high number of COVID-19 cases, clinical studies will reach results quickly.” He believes that “by the end of May or by early June, we will know if some of these drugs work.”
“I am very optimistic that we’ll have several vaccines for COVID-19. However, the problem is not having a vaccine. The problem is producing enough to protect eight billion people,” he continued.
Which is weird, because the US government announced that very same day it would spend $138 million to turbo-boost vaccine production, and I’ve published a massive investigative piece on that.
In another interview with Moroccan television channel 2M on April 13, Slaoui forecast that the COVID-19 pandemic will heavily scar the global population.
“I believe that by 2021 our reality will not be completely back to normal but it will be improved,” he argued.
Moncef Slaoui said if the virus continues to spread, there will be no way to control it other than to create a vaccine and administer it on a massive scale.
“We will get to [vaccines] eventually, but we’re not there yet. If we want to lift the lockdowns, we need to fully respect them first,” he explained.
He said countries can phase out lockdowns when there is a proven COVID-19 treatment.
Slaoui acknowledged that there are now hundreds of clinical studies underway in many countries.
Government watchdog Public Citizen on Thursday “condemned the Trump administration’s reported appointment of a former pharmaceutical executive to the White House’s task force aimed at swiftly developing a Covid-19 vaccine as another example of the White House putting management of the pandemic in the hands of private industry.”
“If the Trump administration approaches vaccine development as it has Covid-19 prevention, testing, and treatment, the world may be in for years of more extraordinary pain,” added Public Citizen’s Peter Maybarduk. “The dangers of global vaccine rationing are profound. No one corporation has the capacity to deliver a vaccine to all the world’s people.”
In March, Trump’s FDA came under fire for awarding monopoly status to Gilead Sciences for remdesivir, a drug the company was developing as a Covid-19 treatment. Gilead backed off its claim after a pressure campaign led by Public Citizen.
“The U.S. government must commit to sharing clinical trial data, patents, and know-how among manufacturers and with the world, to quickly achieve the mountainous scale of production that humanity needs,” the group said.
Everything makes even more sense if you also read the first part of this series of articles, dedicated to the Covid mafia and Operation Warp Speed.